[GH-ISSUE #9836] ollama serve crashes on windows after ~1 minute when using CUDA_VISIBLE_DEVICES=-1 #6437

Open
opened 2026-04-12 17:59:44 -05:00 by GiteaMirror · 3 comments

Originally created by @davidfiala on GitHub (Mar 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9836

What is the issue?

When running ollama on Windows 11 with an NVIDIA GPU installed but disabled via the env var CUDA_VISIBLE_DEVICES=-1, the ollama serve command silently terminates after about one minute, regardless of whether it has served any requests. When it self-terminates, no additional logs are generated.

Note: If I set CUDA_VISIBLE_DEVICES=0, it detects my NVIDIA GPU and operates indefinitely.

The problem only occurs if I set CUDA_VISIBLE_DEVICES=-1. Setting it to -1 works as expected at first, and requests are served using my CPU for inference, but only until ollama terminates itself for no apparent reason.

Running the command repeatedly, there is no exact number of seconds before termination; I've seen it range anywhere from 30 to 60 seconds.

Windows 11 Pro: Version 10.0.26100 Build 26100
NVIDIA Driver: 572.16

Relevant log output

In Windows PowerShell, run:

Get-Date ; $env:OLLAMA_DEBUG="1"; $env:CUDA_VISIBLE_DEVICES=-1; ollama serve ; Get-Date

The program will run. In this case, no API requests were made, and the program terminated itself shortly after starting. If requests are made, they succeed, but the program still self-terminates about a minute after starting. To be clear, requests are not necessary for ollama to self-terminate.


PS C:\Users\fiala> Get-Date ; $env:OLLAMA_DEBUG="1"; $env:CUDA_VISIBLE_DEVICES=-1; ollama serve ; Get-Date

Monday, March 17, 2025 4:06:39 PM

2025/03/17 16:06:39 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES:-1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:12345 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\fiala\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-03-17T16:06:39.728-07:00 level=INFO source=images.go:432 msg="total blobs: 18"
time=2025-03-17T16:06:39.729-07:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-17T16:06:39.729-07:00 level=INFO source=routes.go:1297 msg="Listening on [::]:12345 (version 0.6.1)"
time=2025-03-17T16:06:39.729-07:00 level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-03-17T16:06:39.729-07:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-17T16:06:39.729-07:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-03-17T16:06:39.729-07:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-03-17T16:06:39.729-07:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=24 efficiency=16 threads=32
time=2025-03-17T16:06:39.729-07:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-03-17T16:06:39.729-07:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvml.dll
time=2025-03-17T16:06:39.730-07:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\fiala\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvml.dll C:\\Program Files\\WindowsApps\\Microsoft.PowerShell_7.5.0.0_x64__8wekyb3d8bbwe\\nvml.dll C:\\WINDOWS\\system32\\nvml.dll C:\\WINDOWS\\nvml.dll C:\\WINDOWS\\System32\\Wbem\\nvml.dll C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\nvml.dll C:\\WINDOWS\\System32\\OpenSSH\\nvml.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll C:\\Program Files\\NVIDIA Corporation\\NVIDIA app\\NvDLISR\\nvml.dll C:\\Program Files\\dotnet\\nvml.dll C:\\Program Files\\nodejs\\nvml.dll C:\\Users\\fiala\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll C:\\Users\\fiala\\AppData\\Roaming\\npm\\nvml.dll C:\\Users\\fiala\\AppData\\Local\\Programs\\cursor\\resources\\app\\bin\\nvml.dll C:\\Users\\fiala\\AppData\\Local\\Programs\\Ollama\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-03-17T16:06:39.730-07:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll"
time=2025-03-17T16:06:39.730-07:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[C:\\WINDOWS\\system32\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-03-17T16:06:39.740-07:00 level=DEBUG source=gpu.go:111 msg="nvidia-ml loaded" library=C:\WINDOWS\system32\nvml.dll
time=2025-03-17T16:06:39.740-07:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvcuda.dll
time=2025-03-17T16:06:39.741-07:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\fiala\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvcuda.dll C:\\Program Files\\WindowsApps\\Microsoft.PowerShell_7.5.0.0_x64__8wekyb3d8bbwe\\nvcuda.dll C:\\WINDOWS\\system32\\nvcuda.dll C:\\WINDOWS\\nvcuda.dll C:\\WINDOWS\\System32\\Wbem\\nvcuda.dll C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll C:\\WINDOWS\\System32\\OpenSSH\\nvcuda.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll C:\\Program Files\\NVIDIA Corporation\\NVIDIA app\\NvDLISR\\nvcuda.dll C:\\Program Files\\dotnet\\nvcuda.dll C:\\Program Files\\nodejs\\nvcuda.dll C:\\Users\\fiala\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll C:\\Users\\fiala\\AppData\\Roaming\\npm\\nvcuda.dll C:\\Users\\fiala\\AppData\\Local\\Programs\\cursor\\resources\\app\\bin\\nvcuda.dll C:\\Users\\fiala\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll c:\\windows\\system*\\nvcuda.dll]"
time=2025-03-17T16:06:39.741-07:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll"
time=2025-03-17T16:06:39.741-07:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[C:\WINDOWS\system32\nvcuda.dll]
initializing C:\WINDOWS\system32\nvcuda.dll
dlsym: cuInit - 00007FFAD5FF5F80
dlsym: cuDriverGetVersion - 00007FFAD5FF6020
dlsym: cuDeviceGetCount - 00007FFAD5FF6816
dlsym: cuDeviceGet - 00007FFAD5FF6810
dlsym: cuDeviceGetAttribute - 00007FFAD5FF6170
dlsym: cuDeviceGetUuid - 00007FFAD5FF6822
dlsym: cuDeviceGetName - 00007FFAD5FF681C
dlsym: cuCtxCreate_v3 - 00007FFAD5FF6894
dlsym: cuMemGetInfo_v2 - 00007FFAD5FF6996
dlsym: cuCtxDestroy - 00007FFAD5FF68A6
calling cuInit
cuInit err: 100
time=2025-03-17T16:06:39.745-07:00 level=INFO source=gpu.go:602 msg="no nvidia devices detected by library C:\\WINDOWS\\system32\\nvcuda.dll"
time=2025-03-17T16:06:39.745-07:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=cudart64_*.dll
time=2025-03-17T16:06:39.745-07:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\fiala\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cudart64_*.dll C:\\Program Files\\WindowsApps\\Microsoft.PowerShell_7.5.0.0_x64__8wekyb3d8bbwe\\cudart64_*.dll C:\\WINDOWS\\system32\\cudart64_*.dll C:\\WINDOWS\\cudart64_*.dll C:\\WINDOWS\\System32\\Wbem\\cudart64_*.dll C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\cudart64_*.dll C:\\WINDOWS\\System32\\OpenSSH\\cudart64_*.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_*.dll C:\\Program Files\\NVIDIA Corporation\\NVIDIA app\\NvDLISR\\cudart64_*.dll C:\\Program Files\\dotnet\\cudart64_*.dll C:\\Program Files\\nodejs\\cudart64_*.dll C:\\Users\\fiala\\AppData\\Local\\Microsoft\\WindowsApps\\cudart64_*.dll C:\\Users\\fiala\\AppData\\Roaming\\npm\\cudart64_*.dll C:\\Users\\fiala\\AppData\\Local\\Programs\\cursor\\resources\\app\\bin\\cudart64_*.dll C:\\Users\\fiala\\AppData\\Local\\Programs\\Ollama\\cudart64_*.dll C:\\Users\\fiala\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v*\\cudart64_*.dll c:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v*\\bin\\cudart64_*.dll]"
time=2025-03-17T16:06:39.751-07:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_*.dll"
time=2025-03-17T16:06:39.751-07:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[C:\\Users\\fiala\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v11\\cudart64_110.dll C:\\Users\\fiala\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12\\cudart64_12.dll]"
cudaSetDevice err: 100
time=2025-03-17T16:06:39.759-07:00 level=DEBUG source=gpu.go:574 msg="Unable to load cudart library C:\\Users\\fiala\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v11\\cudart64_110.dll: cudart init failure: 100"
cudaSetDevice err: 100
time=2025-03-17T16:06:39.764-07:00 level=DEBUG source=gpu.go:574 msg="Unable to load cudart library C:\\Users\\fiala\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12\\cudart64_12.dll: cudart init failure: 100"
time=2025-03-17T16:06:39.765-07:00 level=DEBUG source=amd_windows.go:34 msg="unable to load amdhip64_6.dll, please make sure to upgrade to the latest amd driver: The specified module could not be found."
time=2025-03-17T16:06:39.765-07:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
releasing nvml library
time=2025-03-17T16:06:39.766-07:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="95.8 GiB" available="22.3 GiB"

Monday, March 17, 2025 4:07:40 PM

No further logs are written to %LOCALAPPDATA%\Ollama

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.5.13 and 0.6.1 are both affected

GiteaMirror added the bug label 2026-04-12 17:59:44 -05:00

@rick-github commented on GitHub (Mar 17, 2025):

#9496

The NVIDIA Windows driver has some issues with CUDA_VISIBLE_DEVICES set to an invalid value, even though that's a documented feature. I recommend setting num_gpu=0 or, once it's working again (it seems to be non-functional in 0.6.*), setting OLLAMA_LLM_LIBRARY=cpu.
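
For reference, the env-var variant follows the same PowerShell pattern as the repro command above (a sketch only; as the next comment notes, the variable currently has no effect):

$env:OLLAMA_LLM_LIBRARY="cpu"; ollama serve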

@davidfiala commented on GitHub (Mar 18, 2025):

Thank you for the note and x-ref to the other bug.

Note: It seems that OLLAMA_LLM_LIBRARY has no effect. I tried setting it to cpu, cpu_avx2, and wrong (to see whether an invalid value would crash startup; it did not). In all cases, the GPU is still detected and used.

However, with that hint in hand, I modified the API request to include num_gpu: 0 and it appears to run on the CPU. But since this isn't global to the whole ollama process, are there any caveats to forcing CPU use via a per-request option? Will it use the maximum CPU parallelism possible? (I don't see all of my cores getting pegged during inference.)
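
For illustration, the per-request workaround can be expressed in PowerShell like this (a sketch, not the exact request used: it assumes the standard /api/generate endpoint, the default port 11434 rather than the custom 12345 shown in the logs, and an arbitrary model name):

$body = @{
    model   = "llama3.1"              # arbitrary model name for illustration
    prompt  = "Why is the sky blue?"
    stream  = $false
    options = @{ num_gpu = 0 }        # offload zero layers to the GPU, i.e. CPU-only inference
} | ConvertTo-Json

Invoke-RestMethod -Uri "http://localhost:11434/api/generate" -Method Post -Body $body -ContentType "application/json"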

@rick-github commented on GitHub (Mar 18, 2025):

OLLAMA_LLM_LIBRARY has been left behind by the evolution of the runners in ollama. Now that the rate of change in the architecture has slowed with the release of 0.6.*, it should be easier to make library matching work again, perhaps in a couple of releases.

There is currently no global config option for num_gpu (unlike, e.g., OLLAMA_CONTEXT_LENGTH for num_ctx), so the only options are setting it in the API call or creating a copy of the model:

C:\> echo FROM llama3.1 > Modelfile
C:\> echo PARAMETER num_gpu 0 >> Modelfile
C:\> ollama create llama3.1:cpu -f Modelfile
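
The copy can then be run like any other model (hypothetical usage, assuming the create step above succeeds):

C:\> ollama run llama3.1:cpu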

By default, ollama will only use the non-efficiency (performance) CPU cores for inference. You can override this by setting num_thread, either in the API call or in the Modelfile, as for num_gpu above. The only caveat: if you run multiple models and set num_thread to the system's maximum core count, you risk over-subscribing CPU cycles, causing poor performance and possibly wedging the system.
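
For illustration, the Modelfile form of that override is one more PARAMETER line (the value 24 mirrors the total core count reported in the logs above, and so carries exactly the over-subscription risk just described):

C:\> echo PARAMETER num_thread 24 >> Modelfile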
