[GH-ISSUE #9252] Performance degradation on 2-way AMD EPYC 9654 after updating to Ollama 0.5.11 #52541

Closed
opened 2026-04-28 23:38:07 -05:00 by GiteaMirror · 12 comments
Owner

Originally created by @PC-DOS on GitHub (Feb 20, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9252

Originally assigned to: @mxyng on GitHub.

What is the issue?

Issue Description

Greetings,

Recently I adopted Ollama 0.5.7 to deploy DeepSeek-R1 (671b and 70b) on my personal server. Due to limited GPU resources, the 671b version runs mainly on my CPU. However, after updating to Ollama 0.5.11, I found that inference performance degrades severely (about 40%, dropping from about 2.5 tok/s to 1.5 tok/s).

Here is a simple benchmark, using the model SIGJNF/deepseek-r1-671b-1.58bit:latest pulled from the Ollama repository. I created a copy of this model, named it deepseek-r1:671b-q1.58-gpulimited, and set num_gpu to 18 to fit my hardware environment. I ran the model with ollama run deepseek-r1:671b-q1.58-gpulimited --verbose and tested performance across 3 different Ollama installations, using the same prompt "介绍一下自己吧" (which means "Please introduce yourself" in English). The tests were run sequentially, with no reboot or shutdown in between.
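The setup described above can be sketched as follows (the Modelfile name is illustrative, not from the original report; `PARAMETER num_gpu` and `ollama create -f` are the standard way to make such a model copy):

```shell
REM Modelfile.gpulimited contains two lines:
REM   FROM SIGJNF/deepseek-r1-671b-1.58bit:latest
REM   PARAMETER num_gpu 18
ollama create deepseek-r1:671b-q1.58-gpulimited -f Modelfile.gpulimited
ollama run deepseek-r1:671b-q1.58-gpulimited --verbose
```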

In Ollama 0.5.11, upgraded from Ollama 0.5.7:

total duration:       3m13.7901213s
load duration:        14.1334ms
prompt eval count:    6 token(s)
prompt eval duration: 749ms
prompt eval rate:     8.01 tokens/s
eval count:           258 token(s)
eval duration:        3m13.026s
eval rate:            1.34 tokens/s

In Ollama 0.5.11, installed cleanly after uninstalling the upgraded Ollama 0.5.11:

total duration:       52.0926227s
load duration:        13.4893ms
prompt eval count:    6 token(s)
prompt eval duration: 3.279s
prompt eval rate:     1.83 tokens/s
eval count:           69 token(s)
eval duration:        48.799s
eval rate:            1.41 tokens/s

In Ollama 0.5.7, installed after uninstalling Ollama 0.5.11:

total duration:       21.1420094s
load duration:        14.63ms
prompt eval count:    6 token(s)
prompt eval duration: 625ms
prompt eval rate:     9.60 tokens/s
eval count:           54 token(s)
eval duration:        20.501s
eval rate:            2.63 tokens/s

Also, in Ollama 0.5.7 the inference process utilizes only about 80% of the CPU, whereas in Ollama 0.5.11 it utilizes 100%.
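As a hedged diagnostic (not part of the original report): the utilization gap could be probed by pinning the runner's thread count explicitly, since `num_thread` is a standard Ollama model parameter and the machine has 192 physical cores in total.

```shell
REM Start an interactive session, then fix the thread count for this run:
ollama run deepseek-r1:671b-q1.58-gpulimited --verbose
REM inside the interactive session:
REM   /set parameter num_thread 192
```

Running the same prompt in both versions with an identical `num_thread` would rule out a thread-count default change as the cause.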

OS Version

Windows Server 2022 Datacenter, Build 20348.2700

CPU

2x AMD EPYC 9654 (AVX512 enabled)

GPU

2x NVIDIA GeForce RTX 3090 24GB (NVLink not connected)

Driver version 561.09

CUDA Toolkit version 12.6

RAM

16x 32GB DDR5 RECC 4800MHz

Ollama Environment Variables

SetX OLLAMA_DEBUG false
SetX OLLAMA_FLASH_ATTENTION false
SetX OLLAMA_HOST http://0.0.0.0:12450
SetX OLLAMA_INTEL_GPU false
SetX OLLAMA_KEEP_ALIVE 2h45m
SetX OLLAMA_LOAD_TIMEOUT 25m0s 
SetX OLLAMA_MAX_QUEUE 512 
SetX OLLAMA_MODELS E:\Ollama\Models 
SetX OLLAMA_MULTIUSER_CACHE true 
SetX OLLAMA_NOHISTORY false 
SetX OLLAMA_NOPRUNE false 
SetX OLLAMA_ORIGINS *
SetX OLLAMA_SCHED_SPREAD true

Relevant log output


OS

Windows

GPU

AMD, Nvidia

CPU

AMD

Ollama version

0.5.7 / 0.5.11

GiteaMirror added the build and bug labels 2026-04-28 23:38:07 -05:00
Author
Owner

@rick-github commented on GitHub (Feb 20, 2025):

Server logs with OLLAMA_DEBUG=1 may aid in debugging. At a guess, I'd say that ollama is not loading/finding the appropriate backend and is falling back to using a vanilla CPU build.
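Following the troubleshooting doc, debug logs on Windows can be captured roughly like this (a sketch; the log path is the documented default location):

```shell
REM Enable debug logging persistently, matching the SetX style used above:
SetX OLLAMA_DEBUG 1
REM Quit the Ollama tray app, restart it (or run `ollama serve`),
REM reproduce the slow run, then collect the log from:
REM   %LOCALAPPDATA%\Ollama\server.log
```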

Author
Owner

@PC-DOS commented on GitHub (Feb 20, 2025):

> Server logs with OLLAMA_DEBUG=1 may aid in debugging. At a guess, I'd say that ollama is not loading/finding the appropriate backend and is falling back to using a vanilla CPU build.

Sorry for forgetting the log file.

From Ollama 0.5.7, bootstrapping only:

2025/02/20 23:13:28 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES:0,1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:12450 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2h45m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:25m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:E:\\Ollama\\Models OLLAMA_MULTIUSER_CACHE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES:]"
time=2025-02-20T23:13:28.449+08:00 level=INFO source=images.go:432 msg="total blobs: 27"
time=2025-02-20T23:13:28.450+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-20T23:13:28.451+08:00 level=INFO source=routes.go:1238 msg="Listening on [::]:12450 (version 0.5.7)"
time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=common.go:80 msg="runners located" dir=C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners
time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners\cpu_avx\ollama_llama_server.exe
time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners\cpu_avx2\ollama_llama_server.exe
time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners\cuda_v11_avx\ollama_llama_server.exe
time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners\cuda_v12_avx\ollama_llama_server.exe
time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners\rocm_avx\ollama_llama_server.exe
time=2025-02-20T23:13:28.451+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=routes.go:1268 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2025-02-20T23:13:28.451+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-20T23:13:28.452+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2025-02-20T23:13:28.452+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=192 efficiency=0 threads=384
time=2025-02-20T23:13:28.452+08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=192 efficiency=0 threads=384
time=2025-02-20T23:13:28.452+08:00 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-02-20T23:13:28.452+08:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=nvml.dll
time=2025-02-20T23:13:28.452+08:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\bin\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\libnvvp\\nvml.dll C:\\Program Files\\Microsoft\\jdk-11.0.12.7-hotspot\\bin\\nvml.dll C:\\Windows\\system32\\nvml.dll C:\\Windows\\nvml.dll C:\\Windows\\System32\\Wbem\\nvml.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll C:\\Windows\\System32\\OpenSSH\\nvml.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll C:\\Program Files\\Git\\cmd\\nvml.dll C:\\Program Files\\MATLAB\\R2024a\\runtime\\win64\\nvml.dll C:\\Program Files\\MATLAB\\R2024a\\bin\\nvml.dll C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\nvml.dll C:\\Program Files\\dotnet\\nvml.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.3.1\\nvml.dll C:\\MinGW\\bin\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll C:\\Users\\Administrator\\.dotnet\\tools\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvml.dll C:\\Users\\Administrator\\.lmstudio\\bin\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-02-20T23:13:28.453+08:00 level=DEBUG source=gpu.go:547 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll"
time=2025-02-20T23:13:28.453+08:00 level=DEBUG source=gpu.go:576 msg="discovered GPU libraries" paths="[C:\\Windows\\system32\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-02-20T23:13:28.470+08:00 level=DEBUG source=gpu.go:120 msg="nvidia-ml loaded" library=C:\Windows\system32\nvml.dll
time=2025-02-20T23:13:28.471+08:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=nvcuda.dll
time=2025-02-20T23:13:28.471+08:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\bin\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\libnvvp\\nvcuda.dll C:\\Program Files\\Microsoft\\jdk-11.0.12.7-hotspot\\bin\\nvcuda.dll C:\\Windows\\system32\\nvcuda.dll C:\\Windows\\nvcuda.dll C:\\Windows\\System32\\Wbem\\nvcuda.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll C:\\Windows\\System32\\OpenSSH\\nvcuda.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll C:\\Program Files\\Git\\cmd\\nvcuda.dll C:\\Program Files\\MATLAB\\R2024a\\runtime\\win64\\nvcuda.dll C:\\Program Files\\MATLAB\\R2024a\\bin\\nvcuda.dll C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\nvcuda.dll C:\\Program Files\\dotnet\\nvcuda.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.3.1\\nvcuda.dll C:\\MinGW\\bin\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll C:\\Users\\Administrator\\.dotnet\\tools\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll C:\\Users\\Administrator\\.lmstudio\\bin\\nvcuda.dll c:\\windows\\system*\\nvcuda.dll]"
time=2025-02-20T23:13:28.471+08:00 level=DEBUG source=gpu.go:547 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll"
time=2025-02-20T23:13:28.472+08:00 level=DEBUG source=gpu.go:576 msg="discovered GPU libraries" paths=[C:\Windows\system32\nvcuda.dll]
initializing C:\Windows\system32\nvcuda.dll
dlsym: cuInit - 00007FFAC8690240
dlsym: cuDriverGetVersion - 00007FFAC86902E0
dlsym: cuDeviceGetCount - 00007FFAC8690AD6
dlsym: cuDeviceGet - 00007FFAC8690AD0
dlsym: cuDeviceGetAttribute - 00007FFAC8690430
dlsym: cuDeviceGetUuid - 00007FFAC8690AE2
dlsym: cuDeviceGetName - 00007FFAC8690ADC
dlsym: cuCtxCreate_v3 - 00007FFAC8690B54
dlsym: cuMemGetInfo_v2 - 00007FFAC8690C56
dlsym: cuCtxDestroy - 00007FFAC8690B66
calling cuInit
calling cuDriverGetVersion
raw version 0x2f1c
CUDA driver version: 12.6
calling cuDeviceGetCount
device count 2
time=2025-02-20T23:13:28.542+08:00 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=2 library=C:\Windows\system32\nvcuda.dll
[GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a] CUDA totalMem 24575 mb
[GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a] CUDA freeMem 23306 mb
[GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a] Compute Capability 8.6
time=2025-02-20T23:13:28.668+08:00 level=INFO source=gpu.go:334 msg="detected OS VRAM overhead" id=GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" overhead="823.9 MiB"
[GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e] CUDA totalMem 24575 mb
[GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e] CUDA freeMem 23306 mb
[GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e] Compute Capability 8.6
time=2025-02-20T23:13:28.786+08:00 level=INFO source=gpu.go:334 msg="detected OS VRAM overhead" id=GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" overhead="956.0 MiB"
time=2025-02-20T23:13:28.786+08:00 level=DEBUG source=amd_windows.go:35 msg="unable to load amdhip64_6.dll, please make sure to upgrade to the latest amd driver: The specified module could not be found."
releasing cuda driver library
releasing nvml library
time=2025-02-20T23:13:28.787+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
time=2025-02-20T23:13:28.787+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"

From Ollama 0.5.11, bootstrapping only:

2025/02/20 23:09:10 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES:0,1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:12450 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2h45m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:25m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:E:\\Ollama\\Models OLLAMA_MULTIUSER_CACHE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES:]"
time=2025-02-20T23:09:10.788+08:00 level=INFO source=images.go:432 msg="total blobs: 27"
time=2025-02-20T23:09:10.789+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-20T23:09:10.790+08:00 level=INFO source=routes.go:1237 msg="Listening on [::]:12450 (version 0.5.11)"
time=2025-02-20T23:09:10.790+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2025-02-20T23:09:10.790+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-20T23:09:10.790+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2025-02-20T23:09:10.790+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=192 efficiency=0 threads=384
time=2025-02-20T23:09:10.790+08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=192 efficiency=0 threads=384
time=2025-02-20T23:09:10.790+08:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-02-20T23:09:10.791+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvml.dll
time=2025-02-20T23:09:10.791+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\bin\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\libnvvp\\nvml.dll C:\\Program Files\\Microsoft\\jdk-11.0.12.7-hotspot\\bin\\nvml.dll C:\\Windows\\system32\\nvml.dll C:\\Windows\\nvml.dll C:\\Windows\\System32\\Wbem\\nvml.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll C:\\Windows\\System32\\OpenSSH\\nvml.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll C:\\Program Files\\Git\\cmd\\nvml.dll C:\\Program Files\\MATLAB\\R2024a\\runtime\\win64\\nvml.dll C:\\Program Files\\MATLAB\\R2024a\\bin\\nvml.dll C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\nvml.dll C:\\Program Files\\dotnet\\nvml.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.3.1\\nvml.dll C:\\MinGW\\bin\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll C:\\Users\\Administrator\\.dotnet\\tools\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvml.dll C:\\Users\\Administrator\\.lmstudio\\bin\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-02-20T23:09:10.791+08:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll"
time=2025-02-20T23:09:10.792+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[C:\\Windows\\system32\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-02-20T23:09:10.808+08:00 level=DEBUG source=gpu.go:111 msg="nvidia-ml loaded" library=C:\Windows\system32\nvml.dll
time=2025-02-20T23:09:10.809+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvcuda.dll
time=2025-02-20T23:09:10.809+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\bin\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\libnvvp\\nvcuda.dll C:\\Program Files\\Microsoft\\jdk-11.0.12.7-hotspot\\bin\\nvcuda.dll C:\\Windows\\system32\\nvcuda.dll C:\\Windows\\nvcuda.dll C:\\Windows\\System32\\Wbem\\nvcuda.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll C:\\Windows\\System32\\OpenSSH\\nvcuda.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll C:\\Program Files\\Git\\cmd\\nvcuda.dll C:\\Program Files\\MATLAB\\R2024a\\runtime\\win64\\nvcuda.dll C:\\Program Files\\MATLAB\\R2024a\\bin\\nvcuda.dll C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\nvcuda.dll C:\\Program Files\\dotnet\\nvcuda.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.3.1\\nvcuda.dll C:\\MinGW\\bin\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll C:\\Users\\Administrator\\.dotnet\\tools\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll C:\\Users\\Administrator\\.lmstudio\\bin\\nvcuda.dll c:\\windows\\system*\\nvcuda.dll]"
time=2025-02-20T23:09:10.810+08:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll"
time=2025-02-20T23:09:10.810+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[C:\Windows\system32\nvcuda.dll]
initializing C:\Windows\system32\nvcuda.dll
dlsym: cuInit - 00007FFAC8690240
dlsym: cuDriverGetVersion - 00007FFAC86902E0
dlsym: cuDeviceGetCount - 00007FFAC8690AD6
dlsym: cuDeviceGet - 00007FFAC8690AD0
dlsym: cuDeviceGetAttribute - 00007FFAC8690430
dlsym: cuDeviceGetUuid - 00007FFAC8690AE2
dlsym: cuDeviceGetName - 00007FFAC8690ADC
dlsym: cuCtxCreate_v3 - 00007FFAC8690B54
dlsym: cuMemGetInfo_v2 - 00007FFAC8690C56
dlsym: cuCtxDestroy - 00007FFAC8690B66
calling cuInit
calling cuDriverGetVersion
raw version 0x2f1c
CUDA driver version: 12.6
calling cuDeviceGetCount
device count 2
time=2025-02-20T23:09:10.856+08:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=2 library=C:\Windows\system32\nvcuda.dll
[GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a] CUDA totalMem 24575 mb
[GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a] CUDA freeMem 23306 mb
[GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a] Compute Capability 8.6
time=2025-02-20T23:09:10.999+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" overhead="828.5 MiB"
[GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e] CUDA totalMem 24575 mb
[GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e] CUDA freeMem 23306 mb
[GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e] Compute Capability 8.6
time=2025-02-20T23:09:11.133+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" overhead="956.0 MiB"
time=2025-02-20T23:09:11.134+08:00 level=DEBUG source=amd_windows.go:34 msg="unable to load amdhip64_6.dll, please make sure to upgrade to the latest amd driver: The specified module could not be found."
releasing cuda driver library
releasing nvml library
time=2025-02-20T23:09:11.134+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
time=2025-02-20T23:09:11.134+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"

Full log including a model loading sequence:

server-full-0.5.7.0.log

server-full-0.5.11.0.log
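One difference visible in the logs above: the 0.5.7 log enumerates runner variants (cpu_avx, cpu_avx2, cuda_v12_avx, ...) and notes "Override detection logic by setting OLLAMA_LLM_LIBRARY", while the 0.5.11 log contains no runner-discovery lines at all. A hedged experiment, using the override mechanism the 0.5.7 log itself mentions and a variant name taken from its runner list:

```shell
REM Force the AVX2 CPU runner on 0.5.7 to compare backend selection:
SetX OLLAMA_LLM_LIBRARY cpu_avx2
REM Restart the Ollama service, then re-run the benchmark:
ollama run deepseek-r1:671b-q1.58-gpulimited --verbose
```

If the forced-backend numbers match the 0.5.11 numbers, that would support the vanilla-CPU-fallback hypothesis.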

<!-- gh-comment-id:2671812856 --> @PC-DOS commented on GitHub (Feb 20, 2025): > [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) with `OLLAMA_DEBUG=1` may aid in debugging. At a guess, I'd say that ollama is not loading/finding the appropriate backend and is falling back to using a vanilla CPU build. Sorry for forgetting the log file. From Ollama 0.5.7, bootstrapping only: ``` 2025/02/20 23:13:28 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES:0,1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:12450 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2h45m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:25m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:E:\\Ollama\\Models OLLAMA_MULTIUSER_CACHE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES:]" time=2025-02-20T23:13:28.449+08:00 level=INFO source=images.go:432 msg="total blobs: 27" time=2025-02-20T23:13:28.450+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0" time=2025-02-20T23:13:28.451+08:00 level=INFO source=routes.go:1238 msg="Listening on [::]:12450 (version 0.5.7)" time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=common.go:80 msg="runners located" dir=C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" 
file=C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners\cpu_avx\ollama_llama_server.exe time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners\cpu_avx2\ollama_llama_server.exe time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners\cuda_v11_avx\ollama_llama_server.exe time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners\cuda_v12_avx\ollama_llama_server.exe time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners\rocm_avx\ollama_llama_server.exe time=2025-02-20T23:13:28.451+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]" time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=routes.go:1268 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY" time=2025-02-20T23:13:28.451+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler" time=2025-02-20T23:13:28.451+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs" time=2025-02-20T23:13:28.452+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2 time=2025-02-20T23:13:28.452+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=192 efficiency=0 threads=384 time=2025-02-20T23:13:28.452+08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=192 efficiency=0 threads=384 time=2025-02-20T23:13:28.452+08:00 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA" time=2025-02-20T23:13:28.452+08:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" 
name=nvml.dll time=2025-02-20T23:13:28.452+08:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\bin\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\libnvvp\\nvml.dll C:\\Program Files\\Microsoft\\jdk-11.0.12.7-hotspot\\bin\\nvml.dll C:\\Windows\\system32\\nvml.dll C:\\Windows\\nvml.dll C:\\Windows\\System32\\Wbem\\nvml.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll C:\\Windows\\System32\\OpenSSH\\nvml.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll C:\\Program Files\\Git\\cmd\\nvml.dll C:\\Program Files\\MATLAB\\R2024a\\runtime\\win64\\nvml.dll C:\\Program Files\\MATLAB\\R2024a\\bin\\nvml.dll C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\nvml.dll C:\\Program Files\\dotnet\\nvml.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.3.1\\nvml.dll C:\\MinGW\\bin\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll C:\\Users\\Administrator\\.dotnet\\tools\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvml.dll C:\\Users\\Administrator\\.lmstudio\\bin\\nvml.dll c:\\Windows\\System32\\nvml.dll]" time=2025-02-20T23:13:28.453+08:00 level=DEBUG source=gpu.go:547 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll" time=2025-02-20T23:13:28.453+08:00 level=DEBUG source=gpu.go:576 msg="discovered GPU libraries" paths="[C:\\Windows\\system32\\nvml.dll c:\\Windows\\System32\\nvml.dll]" time=2025-02-20T23:13:28.470+08:00 level=DEBUG source=gpu.go:120 msg="nvidia-ml loaded" library=C:\Windows\system32\nvml.dll time=2025-02-20T23:13:28.471+08:00 level=DEBUG source=gpu.go:517 
msg="Searching for GPU library" name=nvcuda.dll time=2025-02-20T23:13:28.471+08:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\bin\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\libnvvp\\nvcuda.dll C:\\Program Files\\Microsoft\\jdk-11.0.12.7-hotspot\\bin\\nvcuda.dll C:\\Windows\\system32\\nvcuda.dll C:\\Windows\\nvcuda.dll C:\\Windows\\System32\\Wbem\\nvcuda.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll C:\\Windows\\System32\\OpenSSH\\nvcuda.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll C:\\Program Files\\Git\\cmd\\nvcuda.dll C:\\Program Files\\MATLAB\\R2024a\\runtime\\win64\\nvcuda.dll C:\\Program Files\\MATLAB\\R2024a\\bin\\nvcuda.dll C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\nvcuda.dll C:\\Program Files\\dotnet\\nvcuda.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.3.1\\nvcuda.dll C:\\MinGW\\bin\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll C:\\Users\\Administrator\\.dotnet\\tools\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll C:\\Users\\Administrator\\.lmstudio\\bin\\nvcuda.dll c:\\windows\\system*\\nvcuda.dll]" time=2025-02-20T23:13:28.471+08:00 level=DEBUG source=gpu.go:547 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll" time=2025-02-20T23:13:28.472+08:00 level=DEBUG source=gpu.go:576 msg="discovered GPU libraries" paths=[C:\Windows\system32\nvcuda.dll] initializing C:\Windows\system32\nvcuda.dll dlsym: cuInit - 00007FFAC8690240 dlsym: cuDriverGetVersion - 00007FFAC86902E0 dlsym: cuDeviceGetCount 
- 00007FFAC8690AD6 dlsym: cuDeviceGet - 00007FFAC8690AD0 dlsym: cuDeviceGetAttribute - 00007FFAC8690430 dlsym: cuDeviceGetUuid - 00007FFAC8690AE2 dlsym: cuDeviceGetName - 00007FFAC8690ADC dlsym: cuCtxCreate_v3 - 00007FFAC8690B54 dlsym: cuMemGetInfo_v2 - 00007FFAC8690C56 dlsym: cuCtxDestroy - 00007FFAC8690B66 calling cuInit calling cuDriverGetVersion raw version 0x2f1c CUDA driver version: 12.6 calling cuDeviceGetCount device count 2 time=2025-02-20T23:13:28.542+08:00 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=2 library=C:\Windows\system32\nvcuda.dll [GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a] CUDA totalMem 24575 mb [GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a] CUDA freeMem 23306 mb [GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a] Compute Capability 8.6 time=2025-02-20T23:13:28.668+08:00 level=INFO source=gpu.go:334 msg="detected OS VRAM overhead" id=GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" overhead="823.9 MiB" [GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e] CUDA totalMem 24575 mb [GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e] CUDA freeMem 23306 mb [GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e] Compute Capability 8.6 time=2025-02-20T23:13:28.786+08:00 level=INFO source=gpu.go:334 msg="detected OS VRAM overhead" id=GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" overhead="956.0 MiB" time=2025-02-20T23:13:28.786+08:00 level=DEBUG source=amd_windows.go:35 msg="unable to load amdhip64_6.dll, please make sure to upgrade to the latest amd driver: The specified module could not be found." 
releasing cuda driver library releasing nvml library time=2025-02-20T23:13:28.787+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB" time=2025-02-20T23:13:28.787+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
```

From Ollama 0.5.11, bootstrapping only:

```
2025/02/20 23:09:10 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES:0,1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:12450 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2h45m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:25m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:E:\\Ollama\\Models OLLAMA_MULTIUSER_CACHE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES:]"
time=2025-02-20T23:09:10.788+08:00 level=INFO source=images.go:432 msg="total blobs: 27"
time=2025-02-20T23:09:10.789+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-20T23:09:10.790+08:00 level=INFO source=routes.go:1237 msg="Listening on [::]:12450 (version 0.5.11)"
time=2025-02-20T23:09:10.790+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2025-02-20T23:09:10.790+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-20T23:09:10.790+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2025-02-20T23:09:10.790+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=192 efficiency=0 threads=384
time=2025-02-20T23:09:10.790+08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=192 efficiency=0 threads=384
time=2025-02-20T23:09:10.790+08:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-02-20T23:09:10.791+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvml.dll
time=2025-02-20T23:09:10.791+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\bin\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\libnvvp\\nvml.dll C:\\Program Files\\Microsoft\\jdk-11.0.12.7-hotspot\\bin\\nvml.dll C:\\Windows\\system32\\nvml.dll C:\\Windows\\nvml.dll C:\\Windows\\System32\\Wbem\\nvml.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll C:\\Windows\\System32\\OpenSSH\\nvml.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll C:\\Program Files\\Git\\cmd\\nvml.dll C:\\Program Files\\MATLAB\\R2024a\\runtime\\win64\\nvml.dll C:\\Program Files\\MATLAB\\R2024a\\bin\\nvml.dll C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\nvml.dll C:\\Program Files\\dotnet\\nvml.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.3.1\\nvml.dll C:\\MinGW\\bin\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll C:\\Users\\Administrator\\.dotnet\\tools\\nvml.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvml.dll C:\\Users\\Administrator\\.lmstudio\\bin\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-02-20T23:09:10.791+08:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll"
time=2025-02-20T23:09:10.792+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[C:\\Windows\\system32\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-02-20T23:09:10.808+08:00 level=DEBUG source=gpu.go:111 msg="nvidia-ml loaded" library=C:\Windows\system32\nvml.dll
time=2025-02-20T23:09:10.809+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvcuda.dll
time=2025-02-20T23:09:10.809+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\bin\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\libnvvp\\nvcuda.dll C:\\Program Files\\Microsoft\\jdk-11.0.12.7-hotspot\\bin\\nvcuda.dll C:\\Windows\\system32\\nvcuda.dll C:\\Windows\\nvcuda.dll C:\\Windows\\System32\\Wbem\\nvcuda.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll C:\\Windows\\System32\\OpenSSH\\nvcuda.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll C:\\Program Files\\Git\\cmd\\nvcuda.dll C:\\Program Files\\MATLAB\\R2024a\\runtime\\win64\\nvcuda.dll C:\\Program Files\\MATLAB\\R2024a\\bin\\nvcuda.dll C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\nvcuda.dll C:\\Program Files\\dotnet\\nvcuda.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.3.1\\nvcuda.dll C:\\MinGW\\bin\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll C:\\Users\\Administrator\\.dotnet\\tools\\nvcuda.dll C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll C:\\Users\\Administrator\\.lmstudio\\bin\\nvcuda.dll c:\\windows\\system*\\nvcuda.dll]"
time=2025-02-20T23:09:10.810+08:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll"
time=2025-02-20T23:09:10.810+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[C:\Windows\system32\nvcuda.dll]
initializing C:\Windows\system32\nvcuda.dll dlsym: cuInit - 00007FFAC8690240 dlsym: cuDriverGetVersion - 00007FFAC86902E0 dlsym: cuDeviceGetCount - 00007FFAC8690AD6 dlsym: cuDeviceGet - 00007FFAC8690AD0 dlsym: cuDeviceGetAttribute - 00007FFAC8690430 dlsym: cuDeviceGetUuid - 00007FFAC8690AE2 dlsym: cuDeviceGetName - 00007FFAC8690ADC dlsym: cuCtxCreate_v3 - 00007FFAC8690B54 dlsym: cuMemGetInfo_v2 - 00007FFAC8690C56 dlsym: cuCtxDestroy - 00007FFAC8690B66 calling cuInit calling cuDriverGetVersion raw version 0x2f1c CUDA driver version: 12.6 calling cuDeviceGetCount device count 2
time=2025-02-20T23:09:10.856+08:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=2 library=C:\Windows\system32\nvcuda.dll
[GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a] CUDA totalMem 24575 mb [GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a] CUDA freeMem 23306 mb [GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a] Compute Capability 8.6
time=2025-02-20T23:09:10.999+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" overhead="828.5 MiB"
[GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e] CUDA totalMem 24575 mb [GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e] CUDA freeMem 23306 mb [GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e] Compute Capability 8.6
time=2025-02-20T23:09:11.133+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" overhead="956.0 MiB"
time=2025-02-20T23:09:11.134+08:00 level=DEBUG source=amd_windows.go:34 msg="unable to load amdhip64_6.dll, please make sure to upgrade to the latest amd driver: The specified module could not be found."
releasing cuda driver library releasing nvml library
time=2025-02-20T23:09:11.134+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
time=2025-02-20T23:09:11.134+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
```

Full log including a model loading sequence:

[server-full-0.5.7.0.log](https://github.com/user-attachments/files/18888724/server-full-0.5.7.0.log)
[server-full-0.5.11.0.log](https://github.com/user-attachments/files/18888723/server-full-0.5.11.0.log)

@rick-github commented on GitHub (Feb 20, 2025):

These logs don't show any problems.

They allocated exactly the same resources on the GPUs:

```
llama_kv_cache_init:        CPU KV buffer size =  6880.00 MiB		   <
llama_kv_cache_init:      CUDA0 KV buffer size =  1440.00 MiB			llama_kv_cache_init:      CUDA0 KV buffer size =  1440.00 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =  1440.00 MiB			llama_kv_cache_init:      CUDA1 KV buffer size =  1440.00 MiB
									   >	 level=DEBUG source=server msg="model load completed, waiting for server
									   >	llama_kv_cache_init:        CPU KV buffer size =  6880.00 MiB
llama_new_context_with_model: KV self size  = 9760.00 MiB, K (f16): 5856	llama_new_context_with_model: KV self size  = 9760.00 MiB, K (f16): 5856
llama_new_context_with_model:        CPU  output buffer size =     0.52 	llama_new_context_with_model:        CPU  output buffer size =     0.52 
llama_new_context_with_model:      CUDA0 compute buffer size =  1698.00 	llama_new_context_with_model:      CUDA0 compute buffer size =  1698.00 
llama_new_context_with_model:      CUDA1 compute buffer size =   670.00 	llama_new_context_with_model:      CUDA1 compute buffer size =   670.00 
llama_new_context_with_model:  CUDA_Host compute buffer size =    84.01 	llama_new_context_with_model:  CUDA_Host compute buffer size =    84.01 
```

And the generation takes the same amount of time (+/- 100ms):

```diff
--- server-full-0.5.7.0.log
+++ server-full-0.5.11.0.log
@@ -1 +1 @@
-[GIN] 2025/02/20 - hh:mm:ss | 200 |   33.7205692s |       127.0.0.1 | POST     "/api/generate"
+[GIN] 2025/02/20 - hh:mm:ss | 200 |   33.9061341s |       127.0.0.1 | POST     "/api/generate"
```

@PC-DOS commented on GitHub (Feb 20, 2025):

> These logs don't show any problems.
>
> They allocated exactly the same resources on the GPUs:
>
> llama_kv_cache_init: CPU KV buffer size = 6880.00 MiB <
> llama_kv_cache_init: CUDA0 KV buffer size = 1440.00 MiB llama_kv_cache_init: CUDA0 KV buffer size = 1440.00 MiB
> llama_kv_cache_init: CUDA1 KV buffer size = 1440.00 MiB llama_kv_cache_init: CUDA1 KV buffer size = 1440.00 MiB
> > level=DEBUG source=server msg="model load completed, waiting for server
> > llama_kv_cache_init: CPU KV buffer size = 6880.00 MiB
> llama_new_context_with_model: KV self size = 9760.00 MiB, K (f16): 5856 llama_new_context_with_model: KV self size = 9760.00 MiB, K (f16): 5856
> llama_new_context_with_model: CPU output buffer size = 0.52 llama_new_context_with_model: CPU output buffer size = 0.52
> llama_new_context_with_model: CUDA0 compute buffer size = 1698.00 llama_new_context_with_model: CUDA0 compute buffer size = 1698.00
> llama_new_context_with_model: CUDA1 compute buffer size = 670.00 llama_new_context_with_model: CUDA1 compute buffer size = 670.00
> llama_new_context_with_model: CUDA_Host compute buffer size = 84.01 llama_new_context_with_model: CUDA_Host compute buffer size = 84.01
>
> And the generation takes the same amount of time (+/- 100ms):
>
> --- server-full-0.5.7.0.log
> +++ server-full-0.5.11.0.log
> @@ -1 +1 @@
> -[GIN] 2025/02/20 - hh:mm:ss | 200 | 33.7205692s | 127.0.0.1 | POST "/api/generate"
> +[GIN] 2025/02/20 - hh:mm:ss | 200 | 33.9061341s | 127.0.0.1 | POST "/api/generate"

Thanks for replying and debugging. In those 2 logs I just loaded a model with the `ollama run deepseek-r1:671b-q1.58-gpulimited` command for a quick test (after the model was loaded, I entered `/bye` and quit); no actual prompt was entered.

Here are 2 logs with the same prompt "j请为我详细介绍一下您自己" (the leading "j" is a typo caused by the IME), invoked with:

```
ollama run deepseek-r1:671b-q1.58-gpulimited --verbose "j请为我详细介绍一下您自己"
```

In Ollama 0.5.11, the response was:

```
C:\Users\Administrator>ollama run deepseek-r1:671b-q1.58-gpulimited --verbose "j请为我详细介绍一下您自己"
<Think & Response removed for shorter thread>

total duration:       4m48.6442557s
load duration:        34.2382522s
prompt eval count:    10 token(s)
prompt eval duration: 3.647s
prompt eval rate:     2.74 tokens/s
eval count:           334 token(s)
eval duration:        4m10.757s
eval rate:            1.33 tokens/s
```

In Ollama 0.5.7, because the response to the previous prompt was short, I tried more prompts:

```
C:\Users\Administrator>ollama run deepseek-r1:671b-q1.58-gpulimited --verbose "j请为我详细介绍一下您自己"
<Think & Response removed for shorter thread>

total duration:       1m6.0613927s
load duration:        37.21912s
prompt eval count:    10 token(s)
prompt eval duration: 972ms
prompt eval rate:     10.29 tokens/s
eval count:           77 token(s)
eval duration:        27.868s
eval rate:            2.76 tokens/s

C:\Users\Administrator>ollama run deepseek-r1:671b-q1.58-gpulimited --verbose "j请为我详细介绍一下您的能力、训练数据、工作原理和特点"
<Think & Response removed for shorter thread>

total duration:       20.703831s
load duration:        14.6141ms
prompt eval count:    17 token(s)
prompt eval duration: 830ms
prompt eval rate:     20.48 tokens/s
eval count:           57 token(s)
eval duration:        19.857s
eval rate:            2.87 tokens/s

C:\Users\Administrator>ollama run deepseek-r1:671b-q1.58-gpulimited --verbose "能为我介绍一下您对大语言模型发展现状的认知吗?希望输出长度超过400字"
<Think & Response removed for shorter thread>

total duration:       5m56.6133931s
load duration:        14.0437ms
prompt eval count:    22 token(s)
prompt eval duration: 1.282s
prompt eval rate:     17.16 tokens/s
eval count:           914 token(s)
eval duration:        5m55.316s
eval rate:            2.57 tokens/s
```
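The per-run numbers above can also be collected programmatically, which makes version-to-version comparisons with an identical prompt easier to repeat. The `/api/generate` endpoint returns the same counters that `ollama run --verbose` prints (`prompt_eval_count`/`prompt_eval_duration`, `eval_count`/`eval_duration`, durations in nanoseconds). A minimal sketch, assuming the server listens on port 12450 as in the logs above; `rates` and `bench` are hypothetical helper names:

```python
import json
from urllib import request

NS = 1e9  # Ollama API durations are reported in nanoseconds

def rates(resp: dict) -> dict:
    """Derive tokens/s the same way `ollama run --verbose` reports them."""
    return {
        "prompt_eval_rate": resp["prompt_eval_count"] / (resp["prompt_eval_duration"] / NS),
        "eval_rate": resp["eval_count"] / (resp["eval_duration"] / NS),
    }

def bench(prompt: str,
          model: str = "deepseek-r1:671b-q1.58-gpulimited",
          host: str = "http://127.0.0.1:12450") -> dict:
    """Send one non-streaming generate request and return the token rates."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = request.Request(host + "/api/generate", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as r:
        return rates(json.load(r))

if __name__ == "__main__":
    print(bench("介绍一下自己吧"))
```

Running the same script against 0.5.7 and 0.5.11 installations gives directly comparable `eval_rate` values without hand-copying the console output.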

Full server logs:

[server-full-prompt-0.5.7.0.log](https://github.com/user-attachments/files/18891589/server-full-prompt-0.5.7.0.log)
[server-full-prompt-0.5.11.0.log](https://github.com/user-attachments/files/18891590/server-full-prompt-0.5.11.0.log)

Console logs:

[server-full-prompt-0.5.7.0 - Console.log](https://github.com/user-attachments/files/18891655/server-full-prompt-0.5.7.0.-.Console.log)
[server-full-prompt-0.5.11.0 - Console.log](https://github.com/user-attachments/files/18891654/server-full-prompt-0.5.11.0.-.Console.log)


@mrdg-sys commented on GitHub (Feb 20, 2025):

#9087

Same issue here.

My dual Xeon lost performance with the 0.5.9 and 0.5.11 versions.

Also, if one CPU is disabled and I test performance with only one CPU, there is still a performance loss after upgrading from 0.5.7, so it is not limited to dual-CPU systems.


@rick-github commented on GitHub (Feb 20, 2025):

Ollama is loading the icelake CPU backend, which launched three years before the EPYC 9654, so I suspect that the scoring mechanism used to choose the CPU backend may not be discovering all of the features of the EPYC. You can try renaming all of the ggml-cpu DLLs in `C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama` to xxx.old except for ggml-cpu-alderlake.dll and see if performance improves when the runner loads a model. It might just crash; I don't know what features alderlake has that are not supported on EPYC. This seems like a build issue.

sandybridge: 2011
haswell: 2013
skylake: 2015
icelake: 2019
alderlake: 2021
EPYC 9654: 2022
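The rename experiment above can be scripted so it is easy to try one backend at a time and undo the change afterwards. A minimal sketch with hypothetical helper names (`isolate_backend`, `restore`); the install path is the one from the logs in this thread, and the server should be stopped while the files are renamed:

```python
from pathlib import Path

# Ollama install dir as seen in the logs above; adjust for your machine.
LIB = Path(r"C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama")

def isolate_backend(keep: str, lib: Path = LIB) -> list[str]:
    """Rename every ggml-cpu-*.dll except `keep` to *.old so the runner can
    only pick the chosen CPU backend. Returns the names that were renamed."""
    renamed = []
    for dll in sorted(lib.glob("ggml-cpu-*.dll")):
        if dll.name != keep:
            dll.rename(dll.with_name(dll.name + ".old"))
            renamed.append(dll.name)
    return renamed

def restore(lib: Path = LIB) -> None:
    """Undo the experiment: move the *.dll.old files back to *.dll."""
    for old in lib.glob("ggml-cpu-*.dll.old"):
        old.rename(old.with_name(old.name[: -len(".old")]))
```

Call `isolate_backend("ggml-cpu-icelake.dll")`, restart the server, load a model and benchmark, then `restore()` and repeat with the next DLL to find which backend performs best on the EPYC.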


@mxyng commented on GitHub (Feb 20, 2025):

@mrdg-sys that's a different problem, which is mitigated by disabling AMX; AMX has a negative performance impact on Sapphire Rapids. This issue is related to icelake and AMD EPYC.


@mrdg-sys commented on GitHub (Feb 20, 2025):

In my case, with a Xeon 6126 CPU, Ollama is loading the correct backend:

```
load_backend: loaded CPU backend from C:\Users\user\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-skylakex.dll
```

yet there is still an inference performance loss.


@mrdg-sys commented on GitHub (Feb 20, 2025):

> @mrdg-sys that's a different problem which is mitigated by disabling amx. amx is a negative performance impact on sapphire rapids. this issue is related to icelake and amd epyc

My CPU is Skylake (Xeon 6126); it has nothing to do with Sapphire Rapids.


@rick-github commented on GitHub (Feb 20, 2025):

> except for ggml-cpu-alderlake.dll

Actually, that won't work, because alderlake scores 0 on the feature scale and llama.cpp will fall back to the basic CPU build. Maybe try each DLL in turn.


@mxyng commented on GitHub (Feb 20, 2025):

> my cpu is skylake (xeon 6126) it has nothing to do with saphire rapids

You're right, my mistake.


@ice6 commented on GitHub (Feb 25, 2025):

@PC-DOS I wonder how you found `SIGJNF/deepseek-r1-671b-1.58bit:latest`?

Never mind, I see it; just searching `deepseek-r1` finds it. Thank you.

Reference: github-starred/ollama#52541