[GH-ISSUE #14792] 0.17.7 sometimes cant get 5090 gpu #35314

Closed
opened 2026-04-22 19:44:19 -05:00 by GiteaMirror · 19 comments
Owner

Originally created by @TigerHH6866 on GitHub (Mar 12, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14792

What is the issue?

0.17.7 sometimes can't find the 5090 GPU. Sometimes it works; other times GPU discovery fails while the server is running.

Relevant log output

ubuntu 24.04
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Tue_May_27_02:21:03_PDT_2025
Cuda compilation tools, release 12.9, V12.9.86
Build cuda_12.9.r12.9/compiler.36037853_0


(main) root@C.32098470:/workspace$ CUDA_VISIBLE_DEVICES=7 OLLAMA_HOST=0.0.0.0 ollama serve
time=2026-03-12T07:13:58.528Z level=INFO source=routes.go:1658 msg="server config" env="map[CUDA_VISIBLE_DEVICES:7 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-03-12T07:13:58.528Z level=INFO source=routes.go:1660 msg="Ollama cloud disabled: false"
time=2026-03-12T07:13:58.529Z level=INFO source=images.go:477 msg="total blobs: 24"
time=2026-03-12T07:13:58.530Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2026-03-12T07:13:58.530Z level=INFO source=routes.go:1713 msg="Listening on [::]:11434 (version 0.17.7)"
time=2026-03-12T07:13:58.531Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-03-12T07:13:58.531Z level=WARN source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=7
time=2026-03-12T07:13:58.531Z level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
time=2026-03-12T07:13:58.532Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 38551"
time=2026-03-12T07:14:02.659Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 35241"
time=2026-03-12T07:14:02.916Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-03-12T07:14:02.916Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="483.4 GiB" available="34.5 GiB"
time=2026-03-12T07:14:02.916Z level=INFO source=routes.go:1763 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
time=2026-03-12T07:14:23.056Z level=INFO source=server.go:246 msg="enabling flash attention"
time=2026-03-12T07:14:23.056Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-eebf93fe1af74695f4768535489d2af75c862a6bad443fa4b16e1f5a96d04394 --port 39191"
time=2026-03-12T07:14:23.057Z level=INFO source=sched.go:489 msg="system memory" total="483.4 GiB" free="34.6 GiB" free_swap="0 B"
time=2026-03-12T07:14:23.057Z level=INFO source=server.go:757 msg="loading model" "model layers"=33 requested=-1
time=2026-03-12T07:14:23.075Z level=INFO source=runner.go:1429 msg="starting ollama engine"
time=2026-03-12T07:14:23.075Z level=INFO source=runner.go:1464 msg="Server listening on 127.0.0.1:39191"
time=2026-03-12T07:14:23.080Z level=INFO source=runner.go:1302 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:122 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T07:14:23.139Z level=INFO source=ggml.go:136 msg="" architecture=qwen35 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=52
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
time=2026-03-12T07:14:23.145Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-03-12T07:14:23.625Z level=INFO source=runner.go:1302 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:122 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T07:14:24.820Z level=INFO source=runner.go:1302 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:122 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T07:14:24.820Z level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
time=2026-03-12T07:14:24.820Z level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-03-12T07:14:24.820Z level=INFO source=ggml.go:494 msg="offloaded 0/33 layers to GPU"
time=2026-03-12T07:14:24.820Z level=INFO source=device.go:245 msg="model weights" device=CPU size="6.1 GiB"
time=2026-03-12T07:14:24.820Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="1.4 GiB"
time=2026-03-12T07:14:24.820Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="433.7 MiB"
time=2026-03-12T07:14:24.820Z level=INFO source=device.go:272 msg="total memory" size="7.9 GiB"
time=2026-03-12T07:14:24.820Z level=INFO source=sched.go:565 msg="loaded runners" count=1
time=2026-03-12T07:14:24.821Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-03-12T07:14:24.821Z level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
time=2026-03-12T07:14:25.827Z level=INFO source=server.go:1388 msg="llama runner started in 2.77 seconds"

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-22 19:44:19 -05:00

@rick-github commented on GitHub (Mar 12, 2026):

Why are you setting `CUDA_VISIBLE_DEVICES=7`? What's the output of `nvidia-smi`?
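(Editor's note, for anyone hitting enumeration-order problems like this: `CUDA_VISIBLE_DEVICES` accepts GPU UUIDs as well as integer indices, which decouples the setting from enumeration order. A sketch, assuming `nvidia-smi` is installed; the UUID shown is the one that appears later in this thread's logs, so adjust for your system:)

```shell
# List each GPU's index, UUID, and name (requires the NVIDIA driver / nvidia-smi)
nvidia-smi --query-gpu=index,uuid,name --format=csv

# Pin the device by UUID instead of ordinal; UUIDs are stable,
# while integer indices follow enumeration order and can shuffle.
CUDA_VISIBLE_DEVICES=GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 \
  OLLAMA_HOST=0.0.0.0 ollama serve
```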


@TigerHH6866 commented on GitHub (Mar 12, 2026):

> Why are you setting `CUDA_VISIBLE_DEVICES=7`? What's the output of `nvidia-smi`?

there are 8 cards
[Image: https://github.com/user-attachments/assets/3e331dc3-06fb-45f8-81de-4934fb342392]


@rick-github commented on GitHub (Mar 12, 2026):

What's the output of `CUDA_VISIBLE_DEVICES=7 OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 ollama serve`?


@TigerHH6866 commented on GitHub (Mar 12, 2026):

> What's the output of `CUDA_VISIBLE_DEVICES=7 OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 ollama serve`?

Sometimes it's OK, and then it fails like this:

root@C.32098470:/workspace$ CUDA_VISIBLE_DEVICES=7 OLLAMA_HOST=0.0.0.0 ollama serve
time=2026-03-12T08:24:22.930Z level=INFO source=routes.go:1658 msg="server config" env="map[CUDA_VISIBLE_DEVICES:7 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-03-12T08:24:22.931Z level=INFO source=routes.go:1660 msg="Ollama cloud disabled: false"
time=2026-03-12T08:24:22.933Z level=INFO source=images.go:477 msg="total blobs: 24"
time=2026-03-12T08:24:22.933Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2026-03-12T08:24:22.934Z level=INFO source=routes.go:1713 msg="Listening on [::]:11434 (version 0.17.7)"
time=2026-03-12T08:24:22.934Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-03-12T08:24:22.934Z level=WARN source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=7
time=2026-03-12T08:24:22.934Z level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
time=2026-03-12T08:24:22.935Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 38063"
time=2026-03-12T08:24:26.086Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 34985"
time=2026-03-12T08:24:26.185Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-03-12T08:24:26.186Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="483.4 GiB" available="52.6 GiB"
time=2026-03-12T08:24:26.186Z level=INFO source=routes.go:1763 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
^Croot@C.32098470:/workspace$ ^C
root@C.32098470:/workspace$ ^C
root@C.32098470:/workspace$ ^C
root@C.32098470:/workspace$ CUDA_VISIBLE_DEVICES=7 OLLAMA_HOST=0.0.0.0 ollama serve
time=2026-03-12T08:28:48.190Z level=INFO source=routes.go:1658 msg="server config" env="map[CUDA_VISIBLE_DEVICES:7 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-03-12T08:28:48.190Z level=INFO source=routes.go:1660 msg="Ollama cloud disabled: false"
time=2026-03-12T08:28:48.191Z level=INFO source=images.go:477 msg="total blobs: 24"
time=2026-03-12T08:28:48.191Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2026-03-12T08:28:48.191Z level=INFO source=routes.go:1713 msg="Listening on [::]:11434 (version 0.17.7)"
time=2026-03-12T08:28:48.192Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-03-12T08:28:48.192Z level=WARN source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=7
time=2026-03-12T08:28:48.192Z level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
time=2026-03-12T08:28:48.192Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41649"
time=2026-03-12T08:28:48.291Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-03-12T08:28:48.292Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41117"
time=2026-03-12T08:28:49.002Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 42341"
time=2026-03-12T08:28:49.633Z level=INFO source=types.go:42 msg="inference compute" id=GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5090" libdirs=ollama,cuda_v12 driver=12.9 pci_id=0000:d8:00.0 type=discrete total="31.8 GiB" available="31.3 GiB"
time=2026-03-12T08:28:49.633Z level=INFO source=routes.go:1763 msg="vram-based default context" total_vram="31.8 GiB" default_num_ctx=32768
time=2026-03-12T08:29:20.095Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 39451"
time=2026-03-12T08:29:20.916Z level=INFO source=server.go:246 msg="enabling flash attention"
time=2026-03-12T08:29:20.916Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-eebf93fe1af74695f4768535489d2af75c862a6bad443fa4b16e1f5a96d04394 --port 38851"
time=2026-03-12T08:29:20.916Z level=INFO source=sched.go:489 msg="system memory" total="483.4 GiB" free="58.1 GiB" free_swap="0 B"
time=2026-03-12T08:29:20.916Z level=INFO source=sched.go:496 msg="gpu memory" id=GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 library=CUDA available="30.9 GiB" free="31.3 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-03-12T08:29:20.916Z level=INFO source=server.go:757 msg="loading model" "model layers"=33 requested=-1
time=2026-03-12T08:29:20.938Z level=INFO source=runner.go:1429 msg="starting ollama engine"
time=2026-03-12T08:29:20.938Z level=INFO source=runner.go:1464 msg="Server listening on 127.0.0.1:38851"
time=2026-03-12T08:29:20.951Z level=INFO source=runner.go:1302 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:6144 KvCacheType: NumThreads:122 GPULayers:33[ID:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T08:29:21.014Z level=INFO source=ggml.go:136 msg="" architecture=qwen35 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=52
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
time=2026-03-12T08:29:21.422Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-03-12T08:29:22.622Z level=INFO source=runner.go:1302 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:6144 KvCacheType: NumThreads:122 GPULayers:33[ID:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T08:29:23.405Z level=INFO source=runner.go:1302 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:6144 KvCacheType: NumThreads:122 GPULayers:33[ID:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T08:29:23.405Z level=INFO source=ggml.go:482 msg="offloading 32 repeating layers to GPU"
time=2026-03-12T08:29:23.406Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2026-03-12T08:29:23.406Z level=INFO source=ggml.go:494 msg="offloaded 33/33 layers to GPU"
time=2026-03-12T08:29:23.406Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="5.6 GiB"
time=2026-03-12T08:29:23.406Z level=INFO source=device.go:245 msg="model weights" device=CPU size="563.7 MiB"
time=2026-03-12T08:29:23.406Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.4 GiB"
time=2026-03-12T08:29:23.406Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="649.0 MiB"
time=2026-03-12T08:29:23.406Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="31.7 MiB"
time=2026-03-12T08:29:23.406Z level=INFO source=device.go:272 msg="total memory" size="8.2 GiB"
time=2026-03-12T08:29:23.406Z level=INFO source=sched.go:565 msg="loaded runners" count=1
time=2026-03-12T08:29:23.406Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-03-12T08:29:23.407Z level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
time=2026-03-12T08:29:24.912Z level=INFO source=server.go:1388 msg="llama runner started in 4.00 seconds"
[GIN] 2026/03/12 - 08:29:28 | 200 |  8.160117699s |      172.17.0.1 | POST     "/api/chat"
ggml_backend_cuda_device_get_memory device GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 utilizing NVML memory reporting free: 24761663488 total: 34190917632
time=2026-03-12T08:34:31.470Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 36505"
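(Editor's note: the two runs above suggest discovery races at startup. One possible workaround, sketched here rather than a confirmed fix: poll the device before launching the server. `wait_for_gpu` is a hypothetical helper and assumes `nvidia-smi` is installed:)

```shell
# Hypothetical wrapper: wait until GPU 7 answers nvidia-smi before starting ollama.
wait_for_gpu() {
  tries=0
  until nvidia-smi -i "$1" >/dev/null 2>&1; do
    tries=$((tries + 1))
    [ "$tries" -ge 30 ] && return 1   # give up after ~30 seconds
    sleep 1
  done
}

wait_for_gpu 7 && CUDA_VISIBLE_DEVICES=7 OLLAMA_HOST=0.0.0.0 ollama serve
```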

@TigerHH6866 commented on GitHub (Mar 12, 2026):

Just now, after one run on the GPU, it fell back to the CPU:

ggml_backend_cuda_device_get_memory device GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 utilizing NVML memory reporting free: 24797315072 total: 34190917632
time=2026-03-12T08:44:34.027Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 44929"
time=2026-03-12T08:49:53.040Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 37585"
time=2026-03-12T08:49:53.914Z level=INFO source=server.go:246 msg="enabling flash attention"
time=2026-03-12T08:49:53.914Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-eebf93fe1af74695f4768535489d2af75c862a6bad443fa4b16e1f5a96d04394 --port 36143"
time=2026-03-12T08:49:53.915Z level=INFO source=sched.go:489 msg="system memory" total="483.4 GiB" free="42.5 GiB" free_swap="0 B"
time=2026-03-12T08:49:53.915Z level=INFO source=sched.go:496 msg="gpu memory" id=GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 library=CUDA available="30.9 GiB" free="31.3 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-03-12T08:49:53.915Z level=INFO source=server.go:757 msg="loading model" "model layers"=33 requested=-1
time=2026-03-12T08:49:53.936Z level=INFO source=runner.go:1429 msg="starting ollama engine"
time=2026-03-12T08:49:53.937Z level=INFO source=runner.go:1464 msg="Server listening on 127.0.0.1:36143"
time=2026-03-12T08:49:53.949Z level=INFO source=runner.go:1302 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:6144 KvCacheType: NumThreads:122 GPULayers:33[ID:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T08:49:54.013Z level=INFO source=ggml.go:136 msg="" architecture=qwen35 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=52
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
time=2026-03-12T08:49:54.467Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-03-12T08:49:55.821Z level=INFO source=runner.go:1302 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:6144 KvCacheType: NumThreads:122 GPULayers:33[ID:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T08:49:56.565Z level=INFO source=runner.go:1302 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:6144 KvCacheType: NumThreads:122 GPULayers:33[ID:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T08:49:56.566Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="5.6 GiB"
time=2026-03-12T08:49:56.566Z level=INFO source=device.go:245 msg="model weights" device=CPU size="563.7 MiB"
time=2026-03-12T08:49:56.566Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.4 GiB"
time=2026-03-12T08:49:56.566Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="649.0 MiB"
time=2026-03-12T08:49:56.566Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="31.7 MiB"
time=2026-03-12T08:49:56.566Z level=INFO source=device.go:272 msg="total memory" size="8.2 GiB"
time=2026-03-12T08:49:56.566Z level=INFO source=sched.go:565 msg="loaded runners" count=1
time=2026-03-12T08:49:56.566Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-03-12T08:49:56.566Z level=INFO source=ggml.go:482 msg="offloading 32 repeating layers to GPU"
time=2026-03-12T08:49:56.566Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2026-03-12T08:49:56.566Z level=INFO source=ggml.go:494 msg="offloaded 33/33 layers to GPU"
time=2026-03-12T08:49:56.566Z level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
time=2026-03-12T08:49:58.072Z level=INFO source=server.go:1388 msg="llama runner started in 4.16 seconds"
[GIN] 2026/03/12 - 08:50:01 | 200 |   9.06293249s |      172.17.0.1 | POST     "/api/chat"
ggml_backend_cuda_device_get_memory device GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 utilizing NVML memory reporting free: 24759566336 total: 34190917632
time=2026-03-12T08:55:05.359Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41219"
time=2026-03-12T10:27:53.474Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 35955"
time=2026-03-12T10:27:53.885Z level=WARN source=runner.go:356 msg="unable to refresh free memory, using old values"
time=2026-03-12T10:27:54.068Z level=INFO source=server.go:246 msg="enabling flash attention"
time=2026-03-12T10:27:54.069Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-eebf93fe1af74695f4768535489d2af75c862a6bad443fa4b16e1f5a96d04394 --port 39601"
time=2026-03-12T10:27:54.069Z level=INFO source=sched.go:489 msg="system memory" total="483.4 GiB" free="33.1 GiB" free_swap="0 B"
time=2026-03-12T10:27:54.069Z level=INFO source=sched.go:496 msg="gpu memory" id=GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 library=CUDA available="30.9 GiB" free="31.3 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-03-12T10:27:54.069Z level=INFO source=server.go:757 msg="loading model" "model layers"=33 requested=-1
time=2026-03-12T10:27:54.091Z level=INFO source=runner.go:1429 msg="starting ollama engine"
time=2026-03-12T10:27:54.091Z level=INFO source=runner.go:1464 msg="Server listening on 127.0.0.1:39601"
time=2026-03-12T10:27:54.105Z level=INFO source=runner.go:1302 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:5120 KvCacheType: NumThreads:122 GPULayers:33[ID:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T10:27:54.170Z level=INFO source=ggml.go:136 msg="" architecture=qwen35 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=52
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
ggml_cuda_init: failed to initialize CUDA: initialization error
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
time=2026-03-12T10:28:00.585Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-03-12T10:28:01.160Z level=INFO source=runner.go:1302 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:5120 KvCacheType: NumThreads:122 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T10:28:01.737Z level=INFO source=runner.go:1302 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:5120 KvCacheType: NumThreads:122 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T10:28:03.148Z level=INFO source=runner.go:1302 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:5120 KvCacheType: NumThreads:122 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-12T10:28:03.148Z level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
time=2026-03-12T10:28:03.148Z level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-03-12T10:28:03.148Z level=INFO source=ggml.go:494 msg="offloaded 0/33 layers to GPU"
time=2026-03-12T10:28:03.148Z level=INFO source=device.go:245 msg="model weights" device=CPU size="6.1 GiB"
time=2026-03-12T10:28:03.148Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="1.4 GiB"
time=2026-03-12T10:28:03.148Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="433.7 MiB"
time=2026-03-12T10:28:03.148Z level=INFO source=device.go:272 msg="total memory" size="7.9 GiB"
time=2026-03-12T10:28:03.148Z level=INFO source=sched.go:565 msg="loaded runners" count=1
time=2026-03-12T10:28:03.148Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-03-12T10:28:03.149Z level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
time=2026-03-12T10:28:04.154Z level=INFO source=server.go:1388 msg="llama runner started in 10.08 seconds"
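
The telltale line in the log above is `offloaded 0/33 layers to GPU` after the `ggml_cuda_init: failed to initialize CUDA: initialization error` message. As an illustrative sketch (this helper is hypothetical, not part of Ollama), a few lines of Python can scan the server log for that silent fallback:

```python
import re

# Hypothetical helper: find "offloaded N/M layers to GPU" lines in Ollama
# server log text and report whether the most recent model load fell back
# to the CPU (0 of N>0 layers offloaded).
OFFLOAD_RE = re.compile(r"offloaded (\d+)/(\d+) layers to GPU")

def gpu_fallback(log_text: str) -> bool:
    """True if the latest load offloaded 0 of N>0 layers to the GPU."""
    loads = OFFLOAD_RE.findall(log_text)
    if not loads:
        return False  # no model load recorded yet
    offloaded, total = map(int, loads[-1])
    return offloaded == 0 and total > 0

# The two cases seen in this thread:
ok = 'msg="offloaded 33/33 layers to GPU"'        # healthy load
fallback = 'msg="offloaded 0/33 layers to GPU"'   # CPU fallback
```

Piping the server output through a check like this would flag the fallback without reading the whole log by hand.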

@rick-github commented on GitHub (Mar 12, 2026):

What's the output of `CUDA_VISIBLE_DEVICES=7 OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 ollama serve`?


@TigerHH6866 commented on GitHub (Mar 12, 2026):

> What's the output of `CUDA_VISIBLE_DEVICES=7 OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 ollama serve`?

root@C.32098470:/workspace$ CUDA_VISIBLE_DEVICES=7 OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 ollama serve
time=2026-03-12T13:13:39.493Z level=INFO source=routes.go:1658 msg="server config" env="map[CUDA_VISIBLE_DEVICES:7 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG-4 OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-03-12T13:13:39.494Z level=INFO source=routes.go:1660 msg="Ollama cloud disabled: false"
time=2026-03-12T13:13:39.495Z level=INFO source=images.go:477 msg="total blobs: 24"
time=2026-03-12T13:13:39.496Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2026-03-12T13:13:39.496Z level=INFO source=routes.go:1713 msg="Listening on [::]:11434 (version 0.17.7)"
time=2026-03-12T13:13:39.496Z level=DEBUG source=sched.go:145 msg="starting llm scheduler"
time=2026-03-12T13:13:39.497Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-03-12T13:13:39.497Z level=WARN source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=7
time=2026-03-12T13:13:39.497Z level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
time=2026-03-12T13:13:39.497Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-03-12T13:13:39.497Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extraEnvs=map[]
time=2026-03-12T13:13:39.498Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 38287"
time=2026-03-12T13:13:39.498Z level=DEBUG source=server.go:431 msg=subprocess OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 CUDA_VISIBLE_DEVICES=7 GPU_COUNT=8 CUDA_VERSION=12.9.1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 CUDA_HOME=/usr/local/cuda PATH=/opt/miniforge3/condabin:/opt/nvm/versions/node/v24.12.0/bin:/opt/instance-tools/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
time=2026-03-12T13:13:39.518Z level=INFO source=runner.go:1429 msg="starting ollama engine"
time=2026-03-12T13:13:39.519Z level=INFO source=runner.go:1464 msg="Server listening on 127.0.0.1:38287"
time=2026-03-12T13:13:39.520Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-03-12T13:13:39.520Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-03-12T13:13:39.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-12T13:13:39.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-12T13:13:39.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
time=2026-03-12T13:13:39.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
time=2026-03-12T13:13:39.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
time=2026-03-12T13:13:39.521Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-03-12T13:13:39.521Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
time=2026-03-12T13:13:39.527Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
time=2026-03-12T13:13:39.998Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-03-12T13:13:39.999Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-03-12T13:13:39.999Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-03-12T13:13:39.999Z level=DEBUG source=runner.go:1404 msg="dummy model load took" duration=478.799059ms
ggml_backend_cuda_device_get_memory device GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 utilizing NVML memory reporting free: 33650769920 total: 34190917632
time=2026-03-12T13:13:40.144Z level=DEBUG source=runner.go:1409 msg="gathering device infos took" duration=145.907111ms
time=2026-03-12T13:13:40.145Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" devices="[{DeviceID:{ID:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 Library:CUDA} Name:CUDA0 Description:NVIDIA GeForce RTX 5090 FilterID: Integrated:false PCIID:0000:d8:00.0 TotalMemory:34190917632 FreeMemory:33650769920 ComputeMajor:12 ComputeMinor:0 DriverMajor:12 DriverMinor:9 LibraryPath:[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]}]"
time=2026-03-12T13:13:40.145Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=648.230187ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[]
time=2026-03-12T13:13:40.145Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extraEnvs=map[]
time=2026-03-12T13:13:40.146Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 40857"
time=2026-03-12T13:13:40.146Z level=DEBUG source=server.go:431 msg=subprocess OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 CUDA_VISIBLE_DEVICES=7 GPU_COUNT=8 CUDA_VERSION=12.9.1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 CUDA_HOME=/usr/local/cuda PATH=/opt/miniforge3/condabin:/opt/nvm/versions/node/v24.12.0/bin:/opt/instance-tools/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13
time=2026-03-12T13:13:40.168Z level=INFO source=runner.go:1429 msg="starting ollama engine"
time=2026-03-12T13:13:40.168Z level=INFO source=runner.go:1464 msg="Server listening on 127.0.0.1:40857"
time=2026-03-12T13:13:40.180Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-03-12T13:13:40.180Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-03-12T13:13:40.180Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-12T13:13:40.180Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-12T13:13:40.180Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
time=2026-03-12T13:13:40.180Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
time=2026-03-12T13:13:40.180Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
time=2026-03-12T13:13:40.180Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-03-12T13:13:40.180Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
time=2026-03-12T13:13:40.189Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v13
ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-03-12T13:13:40.260Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-03-12T13:13:40.261Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-03-12T13:13:40.261Z level=DEBUG source=runner.go:1404 msg="dummy model load took" duration=81.787315ms
time=2026-03-12T13:13:40.261Z level=DEBUG source=runner.go:1409 msg="gathering device infos took" duration=835ns
time=2026-03-12T13:13:40.261Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" devices=[]
time=2026-03-12T13:13:40.261Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=116.111964ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extra_envs=map[]
time=2026-03-12T13:13:40.261Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=1
time=2026-03-12T13:13:40.261Z level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/local/lib/ollama/cuda_v12 description="NVIDIA GeForce RTX 5090" compute=12.0 id=GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 pci_id=0000:d8:00.0
time=2026-03-12T13:13:40.262Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extraEnvs="map[CUDA_VISIBLE_DEVICES:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 GGML_CUDA_INIT:1]"
time=2026-03-12T13:13:40.262Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 46855"
time=2026-03-12T13:13:40.262Z level=DEBUG source=server.go:431 msg=subprocess OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 CUDA_VISIBLE_DEVICES=GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 GPU_COUNT=8 CUDA_VERSION=12.9.1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 CUDA_HOME=/usr/local/cuda PATH=/opt/miniforge3/condabin:/opt/nvm/versions/node/v24.12.0/bin:/opt/instance-tools/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 GGML_CUDA_INIT=1
time=2026-03-12T13:13:40.281Z level=INFO source=runner.go:1429 msg="starting ollama engine"
time=2026-03-12T13:13:40.281Z level=INFO source=runner.go:1464 msg="Server listening on 127.0.0.1:46855"
time=2026-03-12T13:13:40.286Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-03-12T13:13:40.286Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-03-12T13:13:40.286Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-12T13:13:40.286Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-12T13:13:40.286Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
time=2026-03-12T13:13:40.286Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
time=2026-03-12T13:13:40.286Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
time=2026-03-12T13:13:40.286Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-03-12T13:13:40.286Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
time=2026-03-12T13:13:40.293Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
time=2026-03-12T13:13:40.738Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-03-12T13:13:40.739Z level=DEBUG source=runner.go:1404 msg="dummy model load took" duration=453.233514ms
ggml_backend_cuda_device_get_memory device GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 utilizing NVML memory reporting free: 33650769920 total: 34190917632
time=2026-03-12T13:13:40.884Z level=DEBUG source=runner.go:1409 msg="gathering device infos took" duration=145.416706ms
time=2026-03-12T13:13:40.885Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" devices="[{DeviceID:{ID:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 Library:CUDA} Name:CUDA0 Description:NVIDIA GeForce RTX 5090 FilterID: Integrated:false PCIID:0000:d8:00.0 TotalMemory:34190917632 FreeMemory:33650769920 ComputeMajor:12 ComputeMinor:0 DriverMajor:12 DriverMinor:9 LibraryPath:[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]}]"
time=2026-03-12T13:13:40.885Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=623.451664ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 GGML_CUDA_INIT:1]"
time=2026-03-12T13:13:40.885Z level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[CUDA:map[/usr/local/lib/ollama/cuda_v12:map[GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0:0]]]
time=2026-03-12T13:13:40.885Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=1.389015716s
time=2026-03-12T13:13:40.885Z level=INFO source=types.go:42 msg="inference compute" id=GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5090" libdirs=ollama,cuda_v12 driver=12.9 pci_id=0000:d8:00.0 type=discrete total="31.8 GiB" available="31.3 GiB"
time=2026-03-12T13:13:40.885Z level=INFO source=routes.go:1763 msg="vram-based default context" total_vram="31.8 GiB" default_num_ctx=32768
<!-- gh-comment-id:4046666524 --> @TigerHH6866 commented on GitHub (Mar 12, 2026):

> What's the output of `CUDA_VISIBLE_DEVICES=7 OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 ollama serve`?

```
root@C.32098470:/workspace$ CUDA_VISIBLE_DEVICES=7 OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 ollama serve
time=2026-03-12T13:13:39.493Z level=INFO source=routes.go:1658 msg="server config" env="map[CUDA_VISIBLE_DEVICES:7 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG-4 OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-03-12T13:13:39.494Z level=INFO source=routes.go:1660 msg="Ollama cloud disabled: false"
time=2026-03-12T13:13:39.495Z level=INFO source=images.go:477 msg="total blobs: 24"
time=2026-03-12T13:13:39.496Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2026-03-12T13:13:39.496Z level=INFO source=routes.go:1713 msg="Listening on [::]:11434 (version 0.17.7)"
time=2026-03-12T13:13:39.496Z level=DEBUG source=sched.go:145 msg="starting llm scheduler"
time=2026-03-12T13:13:39.497Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-03-12T13:13:39.497Z level=WARN source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=7
time=2026-03-12T13:13:39.497Z level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
time=2026-03-12T13:13:39.497Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1"
time=2026-03-12T13:13:39.497Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extraEnvs=map[]
time=2026-03-12T13:13:39.498Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 38287"
time=2026-03-12T13:13:39.498Z level=DEBUG source=server.go:431 msg=subprocess OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 CUDA_VISIBLE_DEVICES=7 GPU_COUNT=8 CUDA_VERSION=12.9.1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 CUDA_HOME=/usr/local/cuda PATH=/opt/miniforge3/condabin:/opt/nvm/versions/node/v24.12.0/bin:/opt/instance-tools/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
time=2026-03-12T13:13:39.518Z level=INFO source=runner.go:1429 msg="starting ollama engine"
time=2026-03-12T13:13:39.519Z level=INFO source=runner.go:1464 msg="Server listening on 127.0.0.1:38287"
time=2026-03-12T13:13:39.520Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-03-12T13:13:39.520Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-03-12T13:13:39.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-12T13:13:39.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-12T13:13:39.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
time=2026-03-12T13:13:39.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
time=2026-03-12T13:13:39.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
time=2026-03-12T13:13:39.521Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-03-12T13:13:39.521Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
time=2026-03-12T13:13:39.527Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
time=2026-03-12T13:13:39.998Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-03-12T13:13:39.998Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
```
key=llama.attention.key_length default=0 time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0 time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000 time=2026-03-12T13:13:40.739Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1 time=2026-03-12T13:13:40.739Z level=DEBUG source=runner.go:1404 msg="dummy model load took" duration=453.233514ms ggml_backend_cuda_device_get_memory device GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 utilizing NVML memory reporting free: 33650769920 total: 34190917632 time=2026-03-12T13:13:40.884Z level=DEBUG source=runner.go:1409 msg="gathering device infos took" duration=145.416706ms time=2026-03-12T13:13:40.885Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" devices="[{DeviceID:{ID:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 Library:CUDA} Name:CUDA0 Description:NVIDIA GeForce RTX 5090 FilterID: Integrated:false PCIID:0000:d8:00.0 TotalMemory:34190917632 FreeMemory:33650769920 ComputeMajor:12 ComputeMinor:0 DriverMajor:12 DriverMinor:9 LibraryPath:[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]}]" time=2026-03-12T13:13:40.885Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=623.451664ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 GGML_CUDA_INIT:1]" time=2026-03-12T13:13:40.885Z level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" 
supported=map[CUDA:map[/usr/local/lib/ollama/cuda_v12:map[GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0:0]]] time=2026-03-12T13:13:40.885Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=1.389015716s time=2026-03-12T13:13:40.885Z level=INFO source=types.go:42 msg="inference compute" id=GPU-c14a8291-082d-cdf0-bd42-d74f880e3be0 filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5090" libdirs=ollama,cuda_v12 driver=12.9 pci_id=0000:d8:00.0 type=discrete total="31.8 GiB" available="31.3 GiB" time=2026-03-12T13:13:40.885Z level=INFO source=routes.go:1763 msg="vram-based default context" total_vram="31.8 GiB" default_num_ctx=32768 ```

@rick-github commented on GitHub (Mar 12, 2026):

Do you have an example of discovery failure when OLLAMA_DEBUG=2?


@TigerHH6866 commented on GitHub (Mar 13, 2026):

> Do you have an example of discovery failure when `OLLAMA_DEBUG=2`?

root@C.32098470:/workspace$ CUDA_VISIBLE_DEVICES=7 OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 ollama serve
time=2026-03-13T00:16:57.773Z level=INFO source=routes.go:1658 msg="server config" env="map[CUDA_VISIBLE_DEVICES:7 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG-4 OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-03-13T00:16:57.774Z level=INFO source=routes.go:1660 msg="Ollama cloud disabled: false"
time=2026-03-13T00:16:57.776Z level=INFO source=images.go:477 msg="total blobs: 24"
time=2026-03-13T00:16:57.776Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2026-03-13T00:16:57.777Z level=INFO source=routes.go:1713 msg="Listening on [::]:11434 (version 0.17.7)"
time=2026-03-13T00:16:57.778Z level=DEBUG source=sched.go:145 msg="starting llm scheduler"
time=2026-03-13T00:16:57.780Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-03-13T00:16:57.780Z level=WARN source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=7
time=2026-03-13T00:16:57.780Z level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
time=2026-03-13T00:16:57.781Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extraEnvs=map[]
time=2026-03-13T00:16:57.783Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 37527"
time=2026-03-13T00:16:57.783Z level=DEBUG source=server.go:431 msg=subprocess OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 CUDA_VISIBLE_DEVICES=7 GPU_COUNT=8 CUDA_VERSION=12.9.1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 CUDA_HOME=/usr/local/cuda PATH=/opt/miniforge3/condabin:/opt/nvm/versions/node/v24.12.0/bin:/opt/instance-tools/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
time=2026-03-13T00:16:57.808Z level=INFO source=runner.go:1429 msg="starting ollama engine"
time=2026-03-13T00:16:57.809Z level=INFO source=runner.go:1464 msg="Server listening on 127.0.0.1:37527"
time=2026-03-13T00:16:57.816Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-03-13T00:16:57.816Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-03-13T00:16:57.816Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-13T00:16:57.816Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-13T00:16:57.817Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
time=2026-03-13T00:16:57.817Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
time=2026-03-13T00:16:57.817Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
time=2026-03-13T00:16:57.817Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-03-13T00:16:57.817Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
time=2026-03-13T00:16:57.832Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
ggml_cuda_init: failed to initialize CUDA: initialization error
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
time=2026-03-13T00:17:00.594Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-03-13T00:17:00.594Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-03-13T00:17:00.594Z level=DEBUG source=runner.go:1404 msg="dummy model load took" duration=2.778655414s
time=2026-03-13T00:17:00.594Z level=DEBUG source=runner.go:1409 msg="gathering device infos took" duration=941ns
time=2026-03-13T00:17:00.595Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" devices=[]
time=2026-03-13T00:17:00.595Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=2.814924813s OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[]
time=2026-03-13T00:17:00.595Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extraEnvs=map[]
time=2026-03-13T00:17:00.595Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 40663"
time=2026-03-13T00:17:00.595Z level=DEBUG source=server.go:431 msg=subprocess OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=2 CUDA_VISIBLE_DEVICES=7 GPU_COUNT=8 CUDA_VERSION=12.9.1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 CUDA_HOME=/usr/local/cuda PATH=/opt/miniforge3/condabin:/opt/nvm/versions/node/v24.12.0/bin:/opt/instance-tools/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13
time=2026-03-13T00:17:00.615Z level=INFO source=runner.go:1429 msg="starting ollama engine"
time=2026-03-13T00:17:00.616Z level=INFO source=runner.go:1464 msg="Server listening on 127.0.0.1:40663"
time=2026-03-13T00:17:00.618Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-03-13T00:17:00.618Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-03-13T00:17:00.619Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-13T00:17:00.620Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-13T00:17:00.620Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
time=2026-03-13T00:17:00.620Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
time=2026-03-13T00:17:00.620Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
time=2026-03-13T00:17:00.620Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-03-13T00:17:00.620Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
time=2026-03-13T00:17:00.627Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v13
ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-03-13T00:17:00.839Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-03-13T00:17:00.839Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-03-13T00:17:00.840Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-13T00:17:00.840Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0
time=2026-03-13T00:17:00.840Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-03-13T00:17:00.840Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-03-13T00:17:00.840Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-03-13T00:17:00.840Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-03-13T00:17:00.840Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-03-13T00:17:00.840Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-03-13T00:17:00.840Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-03-13T00:17:00.840Z level=DEBUG source=runner.go:1404 msg="dummy model load took" duration=221.928224ms
time=2026-03-13T00:17:00.840Z level=DEBUG source=runner.go:1409 msg="gathering device infos took" duration=556ns
time=2026-03-13T00:17:00.840Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" devices=[]
time=2026-03-13T00:17:00.840Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=245.361845ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extra_envs=map[]
time=2026-03-13T00:17:00.840Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-03-13T00:17:00.840Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
time=2026-03-13T00:17:00.840Z level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
time=2026-03-13T00:17:00.840Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=3.062462947s
time=2026-03-13T00:17:00.841Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="483.4 GiB" available="42.8 GiB"
time=2026-03-13T00:17:00.841Z level=INFO source=routes.go:1763 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
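(Aside: with logs this verbose, the quickest way to spot the failing discovery pass is to pull out the `ggml_cuda_init` failure reasons. A minimal, illustrative Python sketch — the `cuda_init_errors` helper is hypothetical, not part of ollama; the sample lines are taken from the log above:)

```python
import re

# Sample lines copied from the failing run above.
LOG = """\
ggml_cuda_init: failed to initialize CUDA: initialization error
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v13/libggml-cuda.so
"""

def cuda_init_errors(log: str) -> list[str]:
    """Return the reason string of every failed ggml_cuda_init line."""
    return re.findall(r"ggml_cuda_init: failed to initialize CUDA: (.+)", log)

print(cuda_init_errors(LOG))
# In the run above this yields two reasons: the cuda_v12 pass fails with
# "initialization error" and the cuda_v13 pass with the driver-version mismatch,
# after which ollama falls back to CPU-only inference compute.
```

In the good run, the same `ollama serve` invocation resolves the ordinal `CUDA_VISIBLE_DEVICES=7` to the GPU UUID (`GPU-c14a8291-...`) and discovery succeeds, so the intermittent failure happens before ordinal-to-UUID resolution, at the initial `cuInit` inside ggml.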
with type not found" key=llama.rope.dimension_count default=0 time=2026-03-13T00:17:00.840Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 time=2026-03-13T00:17:00.840Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000 time=2026-03-13T00:17:00.840Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1 time=2026-03-13T00:17:00.840Z level=DEBUG source=runner.go:1404 msg="dummy model load took" duration=221.928224ms time=2026-03-13T00:17:00.840Z level=DEBUG source=runner.go:1409 msg="gathering device infos took" duration=556ns time=2026-03-13T00:17:00.840Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" devices=[] time=2026-03-13T00:17:00.840Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=245.361845ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extra_envs=map[] time=2026-03-13T00:17:00.840Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1" time=2026-03-13T00:17:00.840Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0 time=2026-03-13T00:17:00.840Z level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[] time=2026-03-13T00:17:00.840Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=3.062462947s time=2026-03-13T00:17:00.841Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="483.4 GiB" available="42.8 GiB" time=2026-03-13T00:17:00.841Z level=INFO source=routes.go:1763 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096 ```

@rick-github commented on GitHub (Mar 13, 2026):

```
ggml_cuda_init: failed to initialize CUDA: initialization error
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v13/libggml-cuda.so
```

CUDA_v12 failed to initialize, and CUDA_v13 failed because the Nvidia driver only supports CUDA 12.9. It's not clear why v12 failed, but I noticed this: `OLLAMA_MODELS:/root/.ollama/models`. Is this running in a container, e.g. Docker?
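The cuda_v13 failure can be illustrated with a minimal sketch (editor's illustration, not ollama code; the version numbers are taken from the log above, where the driver supports CUDA 12.9 and ollama probed both the cuda_v12 and cuda_v13 runtimes):

```shell
# A CUDA runtime can only initialize when the installed driver supports at
# least that CUDA version. Comparing major versions, as in this log:
driver_major=12        # driver supports up to CUDA 12.9 (per nvcc/nvidia-smi)
for runtime_major in 12 13; do
    if [ "$driver_major" -ge "$runtime_major" ]; then
        echo "cuda_v${runtime_major}: driver is sufficient"
    else
        echo "cuda_v${runtime_major}: CUDA driver version is insufficient for CUDA runtime version"
    fi
done
```

This accounts for the cuda_v13 error directly; the cuda_v12 "initialization error" is a different failure mode, which is why the container setup is the next suspect.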


@TigerHH6866 commented on GitHub (Mar 13, 2026):

> ```
> ggml_cuda_init: failed to initialize CUDA: initialization error
> load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
> ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version
> load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v13/libggml-cuda.so
> ```
>
> CUDA_v12 failed to initialize and CUDA_v13 failed because the Nvidia driver is 12.9. It's not clear why v12 failed but I noticed this: `OLLAMA_MODELS:/root/.ollama/models`. Is this running in a container, eg docker?

Yes, it is running in a container.


@rick-github commented on GitHub (Mar 13, 2026):

Docker?


@TigerHH6866 commented on GitHub (Mar 13, 2026):

> Docker?

Yes, it's Docker:
(main) root@C.30041716:/workspace/ComfyUI/output$ ls -la /.dockerenv
-rwxr-xr-x 1 root root 0 Jan 15 08:35 /.dockerenv

(main) root@C.30041716:/workspace/ComfyUI/output$ cat /proc/self/mountinfo | grep -E "(/docker|/lxc)"
2787 2742 0:74 / / rw,relatime - overlay overlay rw,lowerdir=79918/fs:79917/fs:79916/fs:79915/fs:79914/fs:79913/fs:79912/fs:79911/fs:79910/fs:72069/fs:72041/fs:72040/fs:72039/fs:72038/fs:72037/fs:72034/fs:72033/fs:72031/fs:72029/fs:72028/fs:72027/fs:72026/fs:72025/fs:72024/fs:72023/fs:72022/fs:72021/fs:72020/fs:72019/fs:72018/fs:72017/fs:72016/fs:71881/fs:71880/fs:71879/fs:71878/fs:71507/fs:71506/fs:71505/fs:71454/fs:71453/fs:71452/fs,upperdir=/var/lib/docker/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/79919/fs,workdir=/var/lib/docker/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/79919/work,nouserxattr
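The two checks above can be wrapped into a small script (a sketch using the same well-known markers, `/.dockerenv` and the Docker/LXC paths in mountinfo; nothing here is ollama-specific):

```shell
#!/bin/sh
# Heuristic container detection, mirroring the two checks shown above:
# 1. Docker creates an empty /.dockerenv at the container root.
# 2. Container overlay mounts reference /docker or /lxc paths in mountinfo.
in_container() {
    [ -f /.dockerenv ] && return 0
    grep -qE '(/docker|/lxc)' /proc/self/mountinfo 2>/dev/null
}

if in_container; then
    echo "running inside a container"
else
    echo "running on the host"
fi
```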


@rick-github commented on GitHub (Mar 13, 2026):

What's the result of the following in the host:

```
cat /etc/docker/daemon.json
```

@TigerHH6866 commented on GitHub (Mar 13, 2026):

> What's the result of the following in the host:
>
> ```
> cat /etc/docker/daemon.json
> ```

```
(main) root@C.30041716:/workspace/ComfyUI/output$ cat /etc/docker/daemon.json
cat: /etc/docker/daemon.json: No such file or directory
```


@rick-github commented on GitHub (Mar 13, 2026):

In the host, not the container.


@TigerHH6866 commented on GitHub (Mar 13, 2026):

> In the host, not the container.

It's a cloud server; I can't access the host.


@rick-github commented on GitHub (Mar 13, 2026):

OK, I was going to suggest https://github.com/ollama/ollama/blob/main/docs/troubleshooting.mdx#linux-docker but if you can't access the host then that's not a viable solution. If it's the cgroup problem then you need to arrange the change with the cloud provider.
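For reference, the "cgroup problem" here matches the widely reported issue where GPUs disappear from NVIDIA containers after a systemd daemon-reload. Assuming that is what the linked troubleshooting section covers (an assumption on my part, and a change only the host owner or cloud provider can make), the commonly cited host-side workaround is switching Docker to the cgroupfs cgroup driver in `/etc/docker/daemon.json`:

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
```

followed by `sudo systemctl restart docker` on the host. On a managed cloud container, only the provider can apply this.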


@TigerHH6866 commented on GitHub (Mar 13, 2026):

> OK, I was going to suggest https://github.com/ollama/ollama/blob/main/docs/troubleshooting.mdx#linux-docker but if you can't access the host then that's not a viable solution. If it's the cgroup problem then you need to arrange the change with the cloud provider.

Thanks, I'm looking for another way.

Reference: github-starred/ollama#35314