[GH-ISSUE #13308] failure during GPU discovery, failed to finish discovery before timeout #34550

Closed
opened 2026-04-22 18:13:38 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @wOvAN on GitHub (Dec 3, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13308

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

failure during GPU discovery

Relevant log output

ollama  | time=2025-12-03T03:16:50.172Z level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:socks5://10.45.50.10:1088 HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:262144 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE:q4_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:30m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:100 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:true OLLAMA_NOPRUNE:true OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama  | time=2025-12-03T03:16:50.174Z level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.1)"
ollama  | time=2025-12-03T03:16:50.174Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
ollama  | time=2025-12-03T03:16:50.175Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35795"
ollama  | time=2025-12-03T03:16:54.814Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39971"
ollama  | time=2025-12-03T03:16:58.307Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
ollama  | time=2025-12-03T03:16:58.307Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45907"
ollama  | time=2025-12-03T03:16:58.307Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43225"
ollama  | time=2025-12-03T03:16:58.307Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34595"
ollama  | time=2025-12-03T03:16:58.307Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43961"
ollama  | time=2025-12-03T03:16:58.307Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 41189"
ollama  | time=2025-12-03T03:16:58.308Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42855"
ollama  | time=2025-12-03T03:16:58.308Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36715"
ollama  | time=2025-12-03T03:16:58.308Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36365"
ollama  | time=2025-12-03T03:16:58.308Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36787"
ollama  | time=2025-12-03T03:16:58.312Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40525"
ollama  | time=2025-12-03T03:16:58.312Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43829"
ollama  | time=2025-12-03T03:16:58.312Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 32835"
ollama  | time=2025-12-03T03:16:58.314Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 33919"
ollama  | time=2025-12-03T03:16:58.314Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40997"
ollama  | time=2025-12-03T03:16:58.315Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40525"
ollama  | time=2025-12-03T03:16:58.315Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37701"
ollama  | time=2025-12-03T03:16:58.316Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34011"
ollama  | time=2025-12-03T03:16:58.318Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 41069"
ollama  | time=2025-12-03T03:16:58.319Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45981"
ollama  | time=2025-12-03T03:16:58.312Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34527"
ollama  | time=2025-12-03T03:16:58.320Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35375"
ollama  | time=2025-12-03T03:16:58.320Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34437"
ollama  | time=2025-12-03T03:16:58.345Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40397"
ollama  | time=2025-12-03T03:16:58.347Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38087"
ollama  | time=2025-12-03T03:17:28.308Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.308Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.308Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.308Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.309Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.308Z level=INFO source=runner.go:463 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[GGML_CUDA_INIT:1] error="failed to finish discovery before timeout"
ollama  | time=2025-12-03T03:17:28.310Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="24.0 GiB" available="18.7 GiB"
ollama  | time=2025-12-03T03:17:28.310Z level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the nvidia, bug labels 2026-04-22 18:13:39 -05:00

@rick-github commented on GitHub (Dec 3, 2025):

Setting OLLAMA_DEBUG=2 in the server environment will add more information about the detection process.
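
The `ollama |` prefix in the log suggests the server is running in a container, likely under Docker Compose. A minimal sketch of applying this setting (the `ollama serve` start command and the Compose service name are assumptions, not from the issue):

```shell
# Enable verbose discovery tracing before starting the server
# (bare-metal install; server started afterwards with "ollama serve"):
export OLLAMA_DEBUG=2

# Docker Compose equivalent (service name "ollama" is an assumption):
#   services:
#     ollama:
#       environment:
#         - OLLAMA_DEBUG=2

echo "OLLAMA_DEBUG=$OLLAMA_DEBUG"
```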


@wOvAN commented on GitHub (Dec 3, 2025):

[logs.txt](https://github.com/user-attachments/files/23903208/logs.txt)


@rick-github commented on GitHub (Dec 3, 2025):

From the log you have 12 Nvidia devices attached to the machine. Are these all in PCI slots? If not, how are they attached? What motherboard?


@wOvAN commented on GitHub (Dec 3, 2025):

[logs_0.13.1-rc0.txt](https://github.com/user-attachments/files/23903773/logs_0.13.1-rc0.txt)

It worked from v0.13.1-rc0 until 0.13.1-rc2, though memory management was nearly broken: memory was never released after reloading models unless Ollama was restarted.

Yes, all in PCIe x1 slots.


@rick-github commented on GitHub (Dec 3, 2025):

Might be related to https://github.com/ollama/ollama/pull/13298, seems the most likely commit in the affected changelog. @dhiltgen


@dhiltgen commented on GitHub (Dec 3, 2025):

@wOvAN can you run the new version with OLLAMA_DEBUG=2 so we can get a little more detail on how things played out during startup and GPU discovery? If you stop then immediately restart ollama does it manage to finish the parallel discovery without timing out? This might be as simple as softening our timeout to handle large systems like this more gracefully, or there may be another defect here.

update: never mind - I see you did run with trace.


@dhiltgen commented on GitHub (Dec 3, 2025):

My suspicion is that with 12 devices and 2 CUDA libraries, we're spawning 24 parallel discovery processes all trying to initialize CUDA (twice per device), and that's overwhelming the system and causing the timeout. A potential workaround is to set OLLAMA_LLM_LIBRARY=cuda_v13 so each device is probed only once, leaving just 12 discoveries running in parallel. If that works, I'll look at adding logic to either throttle the number of parallel discoveries or make sure each PCI device is probed one at a time.
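
A sketch of the suggested workaround (the variable and value are from the comment above; the Docker Compose stanza is an assumption):

```shell
# Restrict probing to the cuda_v13 library only, so each of the 12
# devices is probed once (12 parallel discoveries instead of 24):
export OLLAMA_LLM_LIBRARY=cuda_v13

# Docker Compose equivalent (service name "ollama" is an assumption):
#   services:
#     ollama:
#       environment:
#         - OLLAMA_LLM_LIBRARY=cuda_v13

echo "OLLAMA_LLM_LIBRARY=$OLLAMA_LLM_LIBRARY"
```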


@dhiltgen commented on GitHub (Dec 3, 2025):

Actually, there's another bug - it seems the CUDA device filtering logic for the secondary discovery isn't working correctly, so each subprocess is discovering all the GPUs, which explains the increased load and timeouts. The workaround I mentioned above may help mitigate until we get a fix out.
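
The intended filtering would scope each discovery subprocess to a single device, which with CUDA is conventionally done per-process via `CUDA_VISIBLE_DEVICES` (a mechanism assumption, not confirmed as what ollama does internally). An illustrative sketch:

```shell
# Each discovery subprocess should see only its own device; CUDA
# respects CUDA_VISIBLE_DEVICES set in a child's environment.
# Device ids 0-2 are illustrative, not from the issue.
for dev in 0 1 2; do
  CUDA_VISIBLE_DEVICES="$dev" sh -c \
    'echo "probe subprocess sees device(s): $CUDA_VISIBLE_DEVICES"'
done
```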


@wOvAN commented on GitHub (Dec 4, 2025):

[logs_0.13.2-rc0.txt](https://github.com/user-attachments/files/23923939/logs_0.13.2-rc0.txt)

looks ok


@Bottlecap202 commented on GitHub (Dec 4, 2025):

GitHub CoPilot, Please and thanks. With the virus total scan and the bugs fixed!


Reference: github-starred/ollama#34550