[GH-ISSUE #12699] ollama 0.12.4+ on GPU-less Windows machines gets wedged loading models #34187

Closed
opened 2026-04-22 17:34:09 -05:00 by GiteaMirror · 18 comments

Originally created by @rick-github on GitHub (Oct 20, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12699

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I initially thought this was a quirk of my Windows VM, but other Discord users are experiencing what looks like the same issue (#1, #2).

Server 0.12.3, client command: ollama run qwen2.5:0.5b hello:

time=2025-10-20T01:58:48.722+01:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\bill\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-20T01:58:48.734+01:00 level=INFO source=images.go:518 msg="total blobs: 20"
time=2025-10-20T01:58:48.739+01:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
time=2025-10-20T01:58:48.744+01:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
time=2025-10-20T01:58:48.744+01:00 level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-10-20T01:58:48.744+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-10-20T01:58:48.744+01:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-10-20T01:58:48.744+01:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=1 efficiency=0 threads=1
time=2025-10-20T01:58:48.744+01:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-10-20T01:58:48.745+01:00 level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=nvml.dll
time=2025-10-20T01:58:48.745+01:00 level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[C:\\Users\\bill\\tmp\\ollama-0.12.3\\lib\\ollama\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp\\nvml.dll C:\\WINDOWS\\system32\\nvml.dll C:\\WINDOWS\\nvml.dll C:\\WINDOWS\\System32\\Wbem\\nvml.dll C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\nvml.dll C:\\WINDOWS\\System32\\OpenSSH\\nvml.dll C:\\Program Files\\Go\\bin\\nvml.dll C:\\Program Files\\CMake\\bin\\nvml.dll C:\\Program Files\\TDM-GCC-64\\bin\\nvml.dll C:\\Program Files\\Git\\cmd\\nvml.dll C:\\Program Files\\Neovim\\bin\\nvml.dll C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\nvml.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2025.1.0\\nvml.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll C:\\Users\\bill\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll C:\\Users\\bill\\go\\bin\\nvml.dll c:\\program files\\vim\\vim91\\nvml.dll C:\\Users\\bill\\tmp\\nvml.dll C:\\Users\\bill\\AppData\\Local\\Programs\\Ollama\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-10-20T01:58:48.746+01:00 level=DEBUG source=gpu.go:548 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll"
time=2025-10-20T01:58:48.748+01:00 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[]
time=2025-10-20T01:58:48.748+01:00 level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=nvcuda.dll
time=2025-10-20T01:58:48.748+01:00 level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[C:\\Users\\bill\\tmp\\ollama-0.12.3\\lib\\ollama\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp\\nvcuda.dll C:\\WINDOWS\\system32\\nvcuda.dll C:\\WINDOWS\\nvcuda.dll C:\\WINDOWS\\System32\\Wbem\\nvcuda.dll C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll C:\\WINDOWS\\System32\\OpenSSH\\nvcuda.dll C:\\Program Files\\Go\\bin\\nvcuda.dll C:\\Program Files\\CMake\\bin\\nvcuda.dll C:\\Program Files\\TDM-GCC-64\\bin\\nvcuda.dll C:\\Program Files\\Git\\cmd\\nvcuda.dll C:\\Program Files\\Neovim\\bin\\nvcuda.dll C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\nvcuda.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2025.1.0\\nvcuda.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll C:\\Users\\bill\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll C:\\Users\\bill\\go\\bin\\nvcuda.dll c:\\program files\\vim\\vim91\\nvcuda.dll C:\\Users\\bill\\tmp\\nvcuda.dll C:\\Users\\bill\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll c:\\windows\\system*\\nvcuda.dll]"
time=2025-10-20T01:58:48.749+01:00 level=DEBUG source=gpu.go:548 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll"
time=2025-10-20T01:58:48.750+01:00 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[]
time=2025-10-20T01:58:48.751+01:00 level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=cudart64_*.dll
time=2025-10-20T01:58:48.751+01:00 level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[C:\\Users\\bill\\tmp\\ollama-0.12.3\\lib\\ollama\\cudart64_*.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin\\cudart64_*.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp\\cudart64_*.dll C:\\WINDOWS\\system32\\cudart64_*.dll C:\\WINDOWS\\cudart64_*.dll C:\\WINDOWS\\System32\\Wbem\\cudart64_*.dll C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\cudart64_*.dll C:\\WINDOWS\\System32\\OpenSSH\\cudart64_*.dll C:\\Program Files\\Go\\bin\\cudart64_*.dll C:\\Program Files\\CMake\\bin\\cudart64_*.dll C:\\Program Files\\TDM-GCC-64\\bin\\cudart64_*.dll C:\\Program Files\\Git\\cmd\\cudart64_*.dll C:\\Program Files\\Neovim\\bin\\cudart64_*.dll C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\cudart64_*.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2025.1.0\\cudart64_*.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_*.dll C:\\Users\\bill\\AppData\\Local\\Microsoft\\WindowsApps\\cudart64_*.dll C:\\Users\\bill\\go\\bin\\cudart64_*.dll c:\\program files\\vim\\vim91\\cudart64_*.dll C:\\Users\\bill\\tmp\\cudart64_*.dll C:\\Users\\bill\\AppData\\Local\\Programs\\Ollama\\cudart64_*.dll C:\\Users\\bill\\tmp\\ollama-0.12.3\\lib\\ollama\\cuda_v*\\cudart64_*.dll c:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v*\\bin\\cudart64_*.dll]"
time=2025-10-20T01:58:48.767+01:00 level=DEBUG source=gpu.go:548 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_*.dll"
time=2025-10-20T01:58:48.770+01:00 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths="[C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin\\cudart64_12.dll C:\\Users\\bill\\tmp\\ollama-0.12.3\\lib\\ollama\\cuda_v12\\cudart64_12.dll c:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin\\cudart64_12.dll]"
cudaSetDevice err: 35
time=2025-10-20T01:58:48.773+01:00 level=DEBUG source=gpu.go:593 msg="Unable to load cudart library C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin\\cudart64_12.dll: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
cudaSetDevice err: 35
time=2025-10-20T01:58:48.777+01:00 level=DEBUG source=gpu.go:593 msg="Unable to load cudart library C:\\Users\\bill\\tmp\\ollama-0.12.3\\lib\\ollama\\cuda_v12\\cudart64_12.dll: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
cudaSetDevice err: 35
time=2025-10-20T01:58:48.781+01:00 level=DEBUG source=gpu.go:593 msg="Unable to load cudart library c:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin\\cudart64_12.dll: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
time=2025-10-20T01:58:48.783+01:00 level=DEBUG source=amd_windows.go:34 msg="unable to load amdhip64_6.dll, please make sure to upgrade to the latest amd driver: The specified module could not be found."
time=2025-10-20T01:58:48.784+01:00 level=INFO source=gpu.go:396 msg="no compatible GPUs were discovered"
time=2025-10-20T01:58:48.784+01:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="8.0 GiB" available="4.0 GiB"
time=2025-10-20T01:58:48.784+01:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="8.0 GiB" threshold="20.0 GiB"
[GIN] 2025/10/20 - 01:58:54 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-10-20T01:58:54.318+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
[GIN] 2025/10/20 - 01:58:54 | 200 |     76.7432ms |       127.0.0.1 | POST     "/api/show"
time=2025-10-20T01:58:54.409+01:00 level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="8.0 GiB" before.free="4.0 GiB" before.free_swap="4.0 GiB" now.total="8.0 GiB" now.free="3.9 GiB" now.free_swap="3.9 GiB"
time=2025-10-20T01:58:54.410+01:00 level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-10-20T01:58:54.429+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-20T01:58:54.431+01:00 level=DEBUG source=sched.go:208 msg="loading first model" model=C:\Users\bill\.ollama\models\blobs\sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515
time=2025-10-20T01:58:54.506+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-20T01:58:54.508+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=qwen2.pooling_type default=0
time=2025-10-20T01:58:54.509+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-10-20T01:58:54.509+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-10-20T01:58:54.510+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=qwen2.attention.key_length default=0
time=2025-10-20T01:58:54.510+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=qwen2.rope.dimension_count default=0
time=2025-10-20T01:58:54.510+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=qwen2.rope.scaling.factor default=1
time=2025-10-20T01:58:54.512+01:00 level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="8.0 GiB" before.free="3.9 GiB" before.free_swap="3.9 GiB" now.total="8.0 GiB" now.free="3.9 GiB" now.free_swap="3.9 GiB"
time=2025-10-20T01:58:54.529+01:00 level=INFO source=server.go:399 msg="starting runner" cmd="C:\\Users\\bill\\tmp\\ollama-0.12.3\\ollama.exe runner --ollama-engine --model C:\\Users\\bill\\.ollama\\models\\blobs\\sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515 --port 63918"
time=2025-10-20T01:58:54.531+01:00 level=DEBUG source=server.go:400 msg=subprocess CUDA_PATH="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8" CUDA_PATH_V12_8="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8" OLLAMA_DEBUG=2 OLLAMA_HOSTx=z4070:11434 OLLAMA_LLM_LIBRARY="\"\"" OLLAMA_MAX_LOADED_MODELS=3 OLLAMA_NEWENGINE=1 OLLAMA_NEW_ENGINE=1 PATH="C:\\Users\\bill\\tmp\\ollama-0.12.3\\lib\\ollama;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\Go\\bin;C:\\Program Files\\CMake\\bin;C:\\Program Files\\TDM-GCC-64\\bin;C:\\Program Files\\Git\\cmd;C:\\Program Files\\Neovim\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2025.1.0\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Users\\bill\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\bill\\go\\bin;c:\\program files\\vim\\vim91;;C:\\Users\\bill\\AppData\\Local\\Programs\\Ollama;C:\\Users\\bill\\tmp\\ollama-0.12.3\\lib\\ollama" OLLAMA_LIBRARY_PATH=C:\Users\bill\tmp\ollama-0.12.3\lib\ollama
time=2025-10-20T01:58:54.542+01:00 level=INFO source=server.go:672 msg="loading model" "model layers"=25 requested=-1
time=2025-10-20T01:58:54.593+01:00 level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="8.0 GiB" before.free="3.9 GiB" before.free_swap="3.9 GiB" now.total="8.0 GiB" now.free="3.9 GiB" now.free_swap="3.9 GiB"
time=2025-10-20T01:58:54.596+01:00 level=INFO source=server.go:678 msg="system memory" total="8.0 GiB" free="3.9 GiB" free_swap="3.9 GiB"
time=2025-10-20T01:58:54.590+01:00 level=INFO source=runner.go:1252 msg="starting ollama engine"
time=2025-10-20T01:58:54.592+01:00 level=INFO source=runner.go:1287 msg="Server listening on 127.0.0.1:63918"
time=2025-10-20T01:58:54.607+01:00 level=INFO source=runner.go:1171 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:1 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-20T01:58:54.637+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-20T01:58:54.637+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-10-20T01:58:54.637+01:00 level=INFO source=ggml.go:131 msg="" architecture=qwen2 file_type=Q4_K_M name="Qwen2.5 0.5B Instruct" description="" num_tensors=290 num_key_values=35
time=2025-10-20T01:58:54.637+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\bill\tmp\ollama-0.12.3\lib\ollama
load_backend: loaded CPU backend from C:\Users\bill\tmp\ollama-0.12.3\lib\ollama\ggml-cpu-haswell.dll
time=2025-10-20T01:58:54.671+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
... tensors loaded, output generated
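For anyone triaging this, the two status values in these logs decode cleanly (a quick sketch; the enum/NTSTATUS names come from the CUDA runtime headers and the Windows SDK, not from the logs themselves). The "cudaSetDevice err: 35" above is cudaErrorInsufficientDriver, which is expected on a box that has the CUDA toolkit DLLs but no usable driver, and 0.12.3 handles it by simply skipping the library. The 0.12.4 runner exit code 3221226505 in the log below is NTSTATUS 0xC0000409 (STATUS_STACK_BUFFER_OVERRUN), which on modern Windows is also the generic fast-fail status raised by abort() in the UCRT, consistent with the unchecked cudaDriverGetVersion() call aborting instead of failing discovery gracefully:

```python
# Decode the two failure codes seen in the logs.

# 0.12.3 log: cudaSetDevice returned 35. In the CUDA runtime's
# cudaError_t enum, 35 is cudaErrorInsufficientDriver.
CUDA_ERROR_INSUFFICIENT_DRIVER = 35

# 0.12.4 log: the discovery runner exited with code 3221226505.
exit_code = 3221226505
print(hex(exit_code))  # 0xc0000409

# 0xC0000409 is STATUS_STACK_BUFFER_OVERRUN, which the Windows UCRT
# also reports for abort()/__fastfail -- i.e. the CUDA backend died
# in ggml_backend_cuda_reg rather than returning "no devices".
STATUS_STACK_BUFFER_OVERRUN = 0xC0000409
assert exit_code == STATUS_STACK_BUFFER_OVERRUN
```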

Server 0.12.4, client command: ollama run qwen2.5:0.5b hello:

time=2025-10-20T02:00:41.124+01:00 level=INFO source=routes.go:1479 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\bill\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-20T02:00:41.134+01:00 level=INFO source=images.go:522 msg="total blobs: 20"
time=2025-10-20T02:00:41.142+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-20T02:00:41.145+01:00 level=INFO source=routes.go:1532 msg="Listening on 127.0.0.1:11434 (version 0.12.4)"
time=2025-10-20T02:00:41.146+01:00 level=DEBUG source=sched.go:122 msg="starting llm scheduler"
time=2025-10-20T02:00:41.149+01:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-20T02:00:41.149+01:00 level=DEBUG source=runner.go:411 msg="spawing runner with" OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama\\cuda_v12]" extra_envs=[]
time=2025-10-20T02:00:41.165+01:00 level=TRACE source=runner.go:491 msg="starting runner for device discovery" env="[=C:=C:\\Users\\bill\\tmp =ExitCode=00000000 ALLUSERSPROFILE=C:\\ProgramData APPDATA=C:\\Users\\bill\\AppData\\Roaming CommonProgramFiles=C:\\Program Files\\Common Files CommonProgramFiles(x86)=C:\\Program Files (x86)\\Common Files CommonProgramW6432=C:\\Program Files\\Common Files COMPUTERNAME=DESKTOP-U51LGBR ComSpec=C:\\WINDOWS\\system32\\cmd.exe CUDA_PATH=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8 CUDA_PATH_V12_8=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8 DriverData=C:\\Windows\\System32\\Drivers\\DriverData GOPATH=C:\\Users\\bill\\go HOME=C:\\Users\\bill HOMEDRIVE=C: HOMEPATH=\\Users\\bill LOCALAPPDATA=C:\\Users\\bill\\AppData\\Local LOGNAME=bill NUMBER_OF_PROCESSORS=1 OLLAMA_DEBUG=2 OLLAMA_HOSTx=z4070:11434 OLLAMA_LLM_LIBRARY=\"\" OLLAMA_NEWENGINE=1 OLLAMA_NEW_ENGINE=1 OneDrive=C:\\Users\\bill\\OneDrive OS=Windows_NT PATH=C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama;C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama;C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama\\cuda_v12;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\Go\\bin;C:\\Program Files\\CMake\\bin;C:\\Program Files\\TDM-GCC-64\\bin;C:\\Program Files\\Git\\cmd;C:\\Program Files\\Neovim\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2025.1.0\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Users\\bill\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\bill\\go\\bin;c:\\program files\\vim\\vim91;;C:\\Users\\bill\\AppData\\Local\\Programs\\Ollama PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC 
PROCESSOR_ARCHITECTURE=AMD64 PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 158 Stepping 13, GenuineIntel PROCESSOR_LEVEL=6 PROCESSOR_REVISION=9e0d ProgramData=C:\\ProgramData ProgramFiles=C:\\Program Files ProgramFiles(x86)=C:\\Program Files (x86) ProgramW6432=C:\\Program Files PROMPT=bill@DESKTOP-U51LGBR $P$G PSModulePath=C:\\Program Files\\WindowsPowerShell\\Modules;C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\Modules PUBLIC=C:\\Users\\Public SHELL=c:\\windows\\system32\\cmd.exe SSH_CLIENT=10.10.22.186 45704 22 SSH_CONNECTION=10.10.22.186 45704 10.10.210.139 22 SSH_TTY=windows-pty SystemDrive=C: SystemRoot=C:\\WINDOWS TEMP=C:\\Users\\bill\\AppData\\Local\\Temp TERM=xterm-256color TMP=C:\\Users\\bill\\AppData\\Local\\Temp USER=bill USERDOMAIN=WORKGROUP USERNAME=bill USERPROFILE=C:\\Users\\bill windir=C:\\WINDOWS xHTTPS_PROXY=http://proxy-au.lan:8080 OLLAMA_LIBRARY_PATH=C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama;C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama\\cuda_v12]" cmd="C:\\Users\\bill\\tmp\\ollama-0.12.4\\ollama.exe runner --ollama-engine --port 63934"
time=2025-10-20T02:00:41.216+01:00 level=INFO source=runner.go:1299 msg="starting ollama engine"
time=2025-10-20T02:00:41.220+01:00 level=INFO source=runner.go:1335 msg="Server listening on 127.0.0.1:63934"
time=2025-10-20T02:00:41.226+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-20T02:00:41.227+01:00 level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-20T02:00:41.227+01:00 level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.model type=string
time=2025-10-20T02:00:41.228+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-20T02:00:41.229+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-20T02:00:41.229+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-10-20T02:00:41.231+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-10-20T02:00:41.231+01:00 level=INFO source=ggml.go:133 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-10-20T02:00:41.231+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\bill\tmp\ollama-0.12.4\lib\ollama
load_backend: loaded CPU backend from C:\Users\bill\tmp\ollama-0.12.4\lib\ollama\ggml-cpu-haswell.dll
time=2025-10-20T02:00:41.274+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\bill\tmp\ollama-0.12.4\lib\ollama\cuda_v12
time=2025-10-20T02:00:41.293+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-10-20T02:00:41.299+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-10-20T02:00:41.300+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-10-20T02:00:41.300+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-10-20T02:00:41.300+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-10-20T02:00:41.300+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-10-20T02:00:41.300+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-10-20T02:00:41.301+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-10-20T02:00:41.301+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-10-20T02:00:41.301+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-10-20T02:00:41.301+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-10-20T02:00:41.301+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-10-20T02:00:41.301+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-10-20T02:00:41.301+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-10-20T02:00:41.302+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-10-20T02:00:41.302+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-10-20T02:00:41.302+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-10-20T02:00:41.302+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-10-20T02:00:41.302+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-10-20T02:00:41.303+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-10-20T02:00:41.303+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-10-20T02:00:41.303+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-10-20T02:00:41.303+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-10-20T02:00:41.303+01:00 level=DEBUG source=runner.go:1274 msg="dummy model load took" duration=79.0491ms
time=2025-10-20T02:00:41.303+01:00 level=DEBUG source=runner.go:1279 msg="gathering device infos took" duration=0s
time=2025-10-20T02:00:41.304+01:00 level=TRACE source=runner.go:510 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama\\cuda_v12]" devices=[]
time=2025-10-20T02:00:41.305+01:00 level=DEBUG source=runner.go:414 msg="bootstrap discovery took" duration=155.8434ms OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama\\cuda_v12]" extra_envs=[]
time=2025-10-20T02:00:41.311+01:00 level=DEBUG source=runner.go:411 msg="spawing runner with" OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama\\cuda_v13]" extra_envs=[]
time=2025-10-20T02:00:41.312+01:00 level=TRACE source=runner.go:491 msg="starting runner for device discovery" env="[=C:=C:\\Users\\bill\\tmp =ExitCode=00000000 ALLUSERSPROFILE=C:\\ProgramData APPDATA=C:\\Users\\bill\\AppData\\Roaming CommonProgramFiles=C:\\Program Files\\Common Files CommonProgramFiles(x86)=C:\\Program Files (x86)\\Common Files CommonProgramW6432=C:\\Program Files\\Common Files COMPUTERNAME=DESKTOP-U51LGBR ComSpec=C:\\WINDOWS\\system32\\cmd.exe CUDA_PATH=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8 CUDA_PATH_V12_8=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8 DriverData=C:\\Windows\\System32\\Drivers\\DriverData GOPATH=C:\\Users\\bill\\go HOME=C:\\Users\\bill HOMEDRIVE=C: HOMEPATH=\\Users\\bill LOCALAPPDATA=C:\\Users\\bill\\AppData\\Local LOGNAME=bill NUMBER_OF_PROCESSORS=1 OLLAMA_DEBUG=2 OLLAMA_HOSTx=z4070:11434 OLLAMA_LLM_LIBRARY=\"\" OLLAMA_NEWENGINE=1 OLLAMA_NEW_ENGINE=1 OneDrive=C:\\Users\\bill\\OneDrive OS=Windows_NT PATH=C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama;C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama;C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama\\cuda_v13;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\Go\\bin;C:\\Program Files\\CMake\\bin;C:\\Program Files\\TDM-GCC-64\\bin;C:\\Program Files\\Git\\cmd;C:\\Program Files\\Neovim\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2025.1.0\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Users\\bill\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\bill\\go\\bin;c:\\program files\\vim\\vim91;;C:\\Users\\bill\\AppData\\Local\\Programs\\Ollama PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC 
PROCESSOR_ARCHITECTURE=AMD64 PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 158 Stepping 13, GenuineIntel PROCESSOR_LEVEL=6 PROCESSOR_REVISION=9e0d ProgramData=C:\\ProgramData ProgramFiles=C:\\Program Files ProgramFiles(x86)=C:\\Program Files (x86) ProgramW6432=C:\\Program Files PROMPT=bill@DESKTOP-U51LGBR $P$G PSModulePath=C:\\Program Files\\WindowsPowerShell\\Modules;C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\Modules PUBLIC=C:\\Users\\Public SHELL=c:\\windows\\system32\\cmd.exe SSH_CLIENT=10.10.22.186 45704 22 SSH_CONNECTION=10.10.22.186 45704 10.10.210.139 22 SSH_TTY=windows-pty SystemDrive=C: SystemRoot=C:\\WINDOWS TEMP=C:\\Users\\bill\\AppData\\Local\\Temp TERM=xterm-256color TMP=C:\\Users\\bill\\AppData\\Local\\Temp USER=bill USERDOMAIN=WORKGROUP USERNAME=bill USERPROFILE=C:\\Users\\bill windir=C:\\WINDOWS xHTTPS_PROXY=http://proxy-au.lan:8080 OLLAMA_LIBRARY_PATH=C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama;C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama\\cuda_v13]" cmd="C:\\Users\\bill\\tmp\\ollama-0.12.4\\ollama.exe runner --ollama-engine --port 63940"
time=2025-10-20T02:00:41.359+01:00 level=INFO source=runner.go:1299 msg="starting ollama engine"
time=2025-10-20T02:00:41.363+01:00 level=INFO source=runner.go:1335 msg="Server listening on 127.0.0.1:63940"
time=2025-10-20T02:00:41.372+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-20T02:00:41.374+01:00 level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-20T02:00:41.376+01:00 level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.model type=string
time=2025-10-20T02:00:41.378+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-20T02:00:41.379+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-20T02:00:41.379+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-10-20T02:00:41.380+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-10-20T02:00:41.380+01:00 level=INFO source=ggml.go:133 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-10-20T02:00:41.380+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\bill\tmp\ollama-0.12.4\lib\ollama
load_backend: loaded CPU backend from C:\Users\bill\tmp\ollama-0.12.4\lib\ollama\ggml-cpu-haswell.dll
time=2025-10-20T02:00:41.423+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\bill\tmp\ollama-0.12.4\lib\ollama\cuda_v13
CUDA error: (null)
  current device: -1, in function ggml_backend_cuda_reg at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:4150
  cudaDriverGetVersion(&driverVersion)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:88: CUDA error
time=2025-10-20T02:00:41.611+01:00 level=TRACE source=runner.go:505 msg="runner exited" OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama\\cuda_v13]" extra_envs=[] code=3221226505
time=2025-10-20T02:00:41.612+01:00 level=TRACE source=runner.go:510 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama\\cuda_v13]" devices=[]
time=2025-10-20T02:00:41.613+01:00 level=DEBUG source=runner.go:414 msg="bootstrap discovery took" duration=300.6264ms OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.4\\lib\\ollama\\cuda_v13]" extra_envs=[]
time=2025-10-20T02:00:41.614+01:00 level=DEBUG source=runner.go:117 msg="filtering out unsupported or overlapping GPU library combinations" count=0
time=2025-10-20T02:00:41.615+01:00 level=TRACE source=runner.go:164 msg="supported GPU library combinations" supported=map[]
time=2025-10-20T02:00:41.615+01:00 level=DEBUG source=runner.go:45 msg="GPU bootstrap discovery took" duration=468.3728ms
time=2025-10-20T02:00:41.616+01:00 level=INFO source=types.go:129 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="8.0 GiB" available="3.9 GiB"
time=2025-10-20T02:00:41.616+01:00 level=INFO source=routes.go:1573 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
[GIN] 2025/10/20 - 02:00:49 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-10-20T02:00:49.502+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
[GIN] 2025/10/20 - 02:00:49 | 200 |     68.4827ms |       127.0.0.1 | POST     "/api/show"
time=2025-10-20T02:00:49.632+01:00 level=DEBUG source=runner.go:250 msg="refreshing free memory"
time=2025-10-20T02:00:49.635+01:00 level=DEBUG source=runner.go:45 msg="overall device VRAM discovery took" duration=2.4972ms
time=2025-10-20T02:00:49.636+01:00 level=DEBUG source=sched.go:194 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-10-20T02:00:49.651+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-20T02:00:49.653+01:00 level=DEBUG source=sched.go:214 msg="loading first model" model=C:\Users\bill\.ollama\models\blobs\sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515
time=2025-10-20T02:00:49.730+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-20T02:00:49.731+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=qwen2.pooling_type default=0
time=2025-10-20T02:00:49.733+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-10-20T02:00:49.733+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-10-20T02:00:49.734+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=qwen2.attention.key_length default=0
time=2025-10-20T02:00:49.735+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=qwen2.rope.dimension_count default=0
time=2025-10-20T02:00:49.735+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=qwen2.rope.scaling.factor default=1

The ollama server prints no further output, even when left running for hours; the ollama client keeps updating its spinner but never makes progress.
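For reference, the runner exit code logged above (`code=3221226505`) decodes to a standard Windows NTSTATUS value. This is a small sketch of the decoding; the NTSTATUS mapping is general Windows knowledge, not something taken from the ollama logs:

```python
# Decode the exit code from the log line "runner exited ... code=3221226505".
exit_code = 3221226505
print(hex(exit_code))  # 0xc0000409

# 0xC0000409 is STATUS_STACK_BUFFER_OVERRUN, which Windows also reports when a
# process terminates itself via __fastfail/abort() -- consistent with the CUDA
# discovery runner aborting at the "CUDA error" message rather than exiting
# cleanly.
STATUS_STACK_BUFFER_OVERRUN = 0xC0000409
assert exit_code == STATUS_STACK_BUFFER_OVERRUN
```

In other words, the cuda_v13 discovery subprocess crashed outright on this GPU-less machine rather than reporting "no devices", which matches the empty `devices=[]` result that follows.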

OS

Windows

GPU

No response

CPU

Intel

Ollama version

0.12.4, 0.12.5, 0.12.6

### Relevant log output

```shell
```

### OS

Windows

### GPU

_No response_

### CPU

Intel

### Ollama version

0.12.4, 0.12.5, 0.12.6
GiteaMirror added the nvidia, bug, windows labels 2026-04-22 17:34:10 -05:00

@rick-github commented on GitHub (Oct 20, 2025):

Maybe same as #12640, and GPU-ness is not relevant.

@Panican-Whyasker commented on GitHub (Oct 20, 2025):

Interestingly, no problem with ollama 0.12.6 on Windows Server 2016 Datacenter (GPU-less):

![Image](https://github.com/user-attachments/assets/d4778c6c-a176-4fc5-b749-2e4226296b79)

@guiksign commented on GitHub (Oct 27, 2025):

The issue is still there for me on 0.12.6.

@rick-github commented on GitHub (Oct 27, 2025):

The PR will be in 0.12.7.

@rick-github commented on GitHub (Oct 30, 2025):

Reports of this issue, or something similar, are continuing with 0.12.7.

@dhiltgen commented on GitHub (Oct 30, 2025):

Anyone who's hitting this, please share an updated log with OLLAMA_DEBUG=2 running 0.12.7 so we can take a look.
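
For anyone following along, setting the variable for a single run looks roughly like this (a sketch: the PowerShell form in the comment is the Windows equivalent, and `printenv` stands in for `ollama serve` so the line runs even without ollama installed; the environment-variable mechanics are the same either way):

```shell
# PowerShell (Windows):
#   $env:OLLAMA_DEBUG = "2"
#   ollama serve
# bash equivalent, with `printenv` standing in for `ollama serve`;
# the variable is passed to the child process the same way:
OLLAMA_DEBUG=2 printenv OLLAMA_DEBUG   # prints: 2
```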

@rick-github commented on GitHub (Oct 30, 2025):

server 0.12.7, client: `ollama run qwen2.5:0.5b hello`:

time=2025-10-30T16:26:40.527Z level=INFO source=routes.go:1524 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\bill\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-30T16:26:40.645Z level=INFO source=images.go:522 msg="total blobs: 20"
time=2025-10-30T16:26:40.648Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-30T16:26:40.656Z level=INFO source=routes.go:1577 msg="Listening on 127.0.0.1:11434 (version 0.12.7)"
time=2025-10-30T16:26:40.656Z level=DEBUG source=sched.go:120 msg="starting llm scheduler"
time=2025-10-30T16:26:40.661Z level=INFO source=runner.go:76 msg="discovering available GPUs..."
time=2025-10-30T16:26:40.662Z level=TRACE source=runner.go:471 msg="starting runner for device discovery" libDirs="[C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama\\cuda_v12]" extraEnvs=map[]
time=2025-10-30T16:26:40.685Z level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\bill\\tmp\\ollama-0.12.7\\ollama.exe runner --ollama-engine --port 59000"
time=2025-10-30T16:26:40.686Z level=DEBUG source=server.go:386 msg=subprocess CUDA_PATH="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8" CUDA_PATH_V12_8="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8" OLLAMA_DEBUG=2 OLLAMA_HOSTx=z4070:11434 PATH="C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama;C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama\\cuda_v12;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\TDM-GCC-64\\bin;C:\\Program Files\\Neovim\\bin;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2025.1.0\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Git\\cmd;C:\\Program Files\\CMake\\bin;C:\\Program Files\\Go\\bin;C:\\TDM-GCC-64\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Users\\bill\\AppData\\Local\\Microsoft\\WindowsApps;c:\\program files\\vim\\vim91;;C:\\Users\\bill\\AppData\\Local\\Programs\\Ollama;C:\\Users\\bill\\go\\bin;C:\\Users\\bill\\AppData\\Local\\Microsoft\\WinGet\\Packages\\ggml.llamacpp_Microsoft.Winget.Source_8wekyb3d8bbwe;" OLLAMA_LIBRARY_PATH=C:\Users\bill\tmp\ollama-0.12.7\lib\ollama;C:\Users\bill\tmp\ollama-0.12.7\lib\ollama\cuda_v12
time=2025-10-30T16:26:41.160Z level=TRACE source=runner.go:498 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama\\cuda_v12]" devices=[]
time=2025-10-30T16:26:41.167Z level=DEBUG source=runner.go:468 msg="bootstrap discovery took" duration=504.8203ms OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama\\cuda_v12]" extra_envs=map[]
time=2025-10-30T16:26:41.174Z level=TRACE source=runner.go:471 msg="starting runner for device discovery" libDirs="[C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama\\cuda_v13]" extraEnvs=map[]
time=2025-10-30T16:26:41.177Z level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\bill\\tmp\\ollama-0.12.7\\ollama.exe runner --ollama-engine --port 59005"
time=2025-10-30T16:26:41.177Z level=DEBUG source=server.go:386 msg=subprocess CUDA_PATH="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8" CUDA_PATH_V12_8="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8" OLLAMA_DEBUG=2 OLLAMA_HOSTx=z4070:11434 PATH="C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama;C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama\\cuda_v13;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\TDM-GCC-64\\bin;C:\\Program Files\\Neovim\\bin;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2025.1.0\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Git\\cmd;C:\\Program Files\\CMake\\bin;C:\\Program Files\\Go\\bin;C:\\TDM-GCC-64\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Users\\bill\\AppData\\Local\\Microsoft\\WindowsApps;c:\\program files\\vim\\vim91;;C:\\Users\\bill\\AppData\\Local\\Programs\\Ollama;C:\\Users\\bill\\go\\bin;C:\\Users\\bill\\AppData\\Local\\Microsoft\\WinGet\\Packages\\ggml.llamacpp_Microsoft.Winget.Source_8wekyb3d8bbwe;" OLLAMA_LIBRARY_PATH=C:\Users\bill\tmp\ollama-0.12.7\lib\ollama;C:\Users\bill\tmp\ollama-0.12.7\lib\ollama\cuda_v13
time=2025-10-30T16:26:42.081Z level=TRACE source=runner.go:498 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama\\cuda_v13]" devices=[]
time=2025-10-30T16:26:42.105Z level=DEBUG source=runner.go:468 msg="bootstrap discovery took" duration=930.9888ms OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama\\cuda_v13]" extra_envs=map[]
time=2025-10-30T16:26:42.107Z level=DEBUG source=runner.go:120 msg="evluating which if any devices to filter out" initial_count=0
time=2025-10-30T16:26:42.110Z level=TRACE source=runner.go:179 msg="supported GPU library combinations before filtering" supported=map[]
time=2025-10-30T16:26:42.111Z level=DEBUG source=runner.go:41 msg="GPU bootstrap discovery took" duration=1.4544526s
time=2025-10-30T16:26:42.114Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="8.0 GiB" available="4.4 GiB"
time=2025-10-30T16:26:42.115Z level=INFO source=routes.go:1618 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
[GIN] 2025/10/30 - 16:27:14 | 200 |      2.3946ms |       127.0.0.1 | HEAD     "/"
time=2025-10-30T16:27:14.468Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
[GIN] 2025/10/30 - 16:27:14 | 200 |    244.7352ms |       127.0.0.1 | POST     "/api/show"
time=2025-10-30T16:27:14.635Z level=DEBUG source=runner.go:267 msg="refreshing free memory"
time=2025-10-30T16:27:14.637Z level=DEBUG source=runner.go:41 msg="overall device VRAM discovery took" duration=1.6254ms

@dhiltgen commented on GitHub (Oct 30, 2025):

Those logs seem to imply the subprocess never got started properly. I'm not sure why yet, but let me get additional logging added so we can figure out why. (Maybe AV blocking, or something like that)

@dhiltgen commented on GitHub (Oct 30, 2025):

One thing I'm noticing is that the way we're wiring up the output of the subprocess may be causing some logs to get lost. Try running the server with:

ollama serve 2>&1 | % ToString | tee-object serve.log
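
For context on that pipeline: when PowerShell redirects a native command's stderr with `2>&1`, each stderr line arrives as an ErrorRecord object rather than a plain string, and `% ToString` (shorthand for `ForEach-Object ToString`) unwraps it so `tee-object` writes readable text; this is also the likely origin of the bare `System.Management.Automation.RemoteException` line that shows up in captured logs. A minimal POSIX-shell demonstration of the same capture idea, with a stand-in command rather than ollama:

```shell
# Merge stderr into stdout before the pipe so both streams reach the log;
# without `2>&1`, the stderr line would bypass `tee` entirely and be lost
# from the file.
sh -c 'echo from-stdout; echo from-stderr >&2' 2>&1 | tee serve-demo.log
grep -c 'from-' serve-demo.log   # both lines landed in the log: prints 2
```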

@rick-github commented on GitHub (Oct 30, 2025):

PS C:\Users\bill\tmp> .\ollama-0.12.7\ollama.exe serve 2>&1 | % ToString | tee-object serve.log
time=2025-10-30T18:57:34.780Z level=INFO source=routes.go:1524 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\bill\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-30T18:57:34.798Z level=INFO source=images.go:522 msg="total blobs: 20"
time=2025-10-30T18:57:34.802Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-30T18:57:34.804Z level=INFO source=routes.go:1577 msg="Listening on 127.0.0.1:11434 (version 0.12.7)"
time=2025-10-30T18:57:34.804Z level=DEBUG source=sched.go:120 msg="starting llm scheduler"
time=2025-10-30T18:57:34.807Z level=INFO source=runner.go:76 msg="discovering available GPUs..."
time=2025-10-30T18:57:34.807Z level=TRACE source=runner.go:471 msg="starting runner for device discovery" libDirs="[C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama\\cuda_v12]" extraEnvs=map[]
time=2025-10-30T18:57:34.848Z level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\bill\\tmp\\ollama-0.12.7\\ollama.exe runner --ollama-engine --port 55834"
time=2025-10-30T18:57:34.848Z level=DEBUG source=server.go:386 msg=subprocess CUDA_PATH="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8" CUDA_PATH_V12_8="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8" OLLAMA_DEBUG=2 OLLAMA_HOSTx=z4070:11434 PATH="C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama;C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama\\cuda_v12;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\TDM-GCC-64\\bin;C:\\Program Files\\Neovim\\bin;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2025.1.0\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Git\\cmd;C:\\Program Files\\CMake\\bin;C:\\Program Files\\Go\\bin;C:\\TDM-GCC-64\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Users\\bill\\AppData\\Local\\Microsoft\\WindowsApps;c:\\program files\\vim\\vim91;;C:\\Users\\bill\\AppData\\Local\\Programs\\Ollama;C:\\Users\\bill\\go\\bin;C:\\Users\\bill\\AppData\\Local\\Microsoft\\WinGet\\Packages\\ggml.llamacpp_Microsoft.Winget.Source_8wekyb3d8bbwe;" OLLAMA_LIBRARY_PATH=C:\Users\bill\tmp\ollama-0.12.7\lib\ollama;C:\Users\bill\tmp\ollama-0.12.7\lib\ollama\cuda_v12
time=2025-10-30T18:57:34.914Z level=INFO source=runner.go:1337 msg="starting ollama engine"
time=2025-10-30T18:57:34.916Z level=INFO source=runner.go:1372 msg="Server listening on 127.0.0.1:55834"
time=2025-10-30T18:57:34.921Z level=DEBUG source=gguf.go:590 msg=general.architecture type=string
time=2025-10-30T18:57:34.921Z level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string
time=2025-10-30T18:57:34.921Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-30T18:57:34.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-30T18:57:34.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-30T18:57:34.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-10-30T18:57:34.923Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-10-30T18:57:34.923Z level=INFO source=ggml.go:135 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-10-30T18:57:34.923Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\bill\tmp\ollama-0.12.7\lib\ollama
load_backend: loaded CPU backend from C:\Users\bill\tmp\ollama-0.12.7\lib\ollama\ggml-cpu-haswell.dll
time=2025-10-30T18:57:34.952Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\bill\tmp\ollama-0.12.7\lib\ollama\cuda_v12
dl_load_library unable to load library C:\Users\bill\tmp\ollama-0.12.7\lib\ollama\cuda_v12\ggml-cuda.dll: The specified module could not be found.
System.Management.Automation.RemoteException
time=2025-10-30T18:57:34.973Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-10-30T18:57:34.973Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-10-30T18:57:34.975Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-10-30T18:57:35.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-10-30T18:57:35.067Z level=INFO source=ggml.go:135 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-10-30T18:57:35.067Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\bill\tmp\ollama-0.12.7\lib\ollama
load_backend: loaded CPU backend from C:\Users\bill\tmp\ollama-0.12.7\lib\ollama\ggml-cpu-haswell.dll
time=2025-10-30T18:57:35.094Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\bill\tmp\ollama-0.12.7\lib\ollama\cuda_v13
ggml_cuda_init: failed to initialize CUDA: (null)
load_backend: loaded CUDA backend from C:\Users\bill\tmp\ollama-0.12.7\lib\ollama\cuda_v13\ggml-cuda.dll
time=2025-10-30T18:57:35.225Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-10-30T18:57:35.225Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-10-30T18:57:35.225Z level=DEBUG source=runner.go:1312 msg="dummy model load took" duration=160.169ms
time=2025-10-30T18:57:35.225Z level=DEBUG source=runner.go:1317 msg="gathering device infos took" duration=0s
time=2025-10-30T18:57:35.227Z level=TRACE source=runner.go:498 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama\\cuda_v13]" devices=[]
time=2025-10-30T18:57:35.255Z level=DEBUG source=runner.go:468 msg="bootstrap discovery took" duration=273.9772ms OLLAMA_LIBRARY_PATH="[C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama C:\\Users\\bill\\tmp\\ollama-0.12.7\\lib\\ollama\\cuda_v13]" extra_envs=map[]
time=2025-10-30T18:57:35.255Z level=DEBUG source=runner.go:120 msg="evluating which if any devices to filter out" initial_count=0
time=2025-10-30T18:57:35.255Z level=TRACE source=runner.go:179 msg="supported GPU library combinations before filtering" supported=map[]
time=2025-10-30T18:57:35.255Z level=DEBUG source=runner.go:41 msg="GPU bootstrap discovery took" duration=450.1257ms
time=2025-10-30T18:57:35.255Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="8.0 GiB" available="4.1 GiB"
time=2025-10-30T18:57:35.255Z level=INFO source=routes.go:1618 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
[GIN] 2025/10/30 - 18:57:44 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-10-30T18:57:44.998Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
[GIN] 2025/10/30 - 18:57:44 | 200 |     72.0749ms |       127.0.0.1 | POST     "/api/show"
time=2025-10-30T18:57:45.151Z level=DEBUG source=runner.go:267 msg="refreshing free memory"
time=2025-10-30T18:57:45.151Z level=DEBUG source=runner.go:41 msg="overall device VRAM discovery took" duration=0s
@dhiltgen commented on GitHub (Oct 30, 2025):

OK, those logs look more complete. I'll try to get the logging logic fixed so the redirect isn't necessary to see what's going on.

Just to confirm, this is a system without GPUs, correct? So the final `msg="inference compute" id=cpu ...` is correct.

Do you have a log of the load getting stuck?
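
The redirect mentioned above matters because the runner subprocess writes its messages to stderr; without merging stderr into the capture, the backend-loader lines never appear in the saved log. A minimal POSIX-shell illustration of the same merge (the `serve.log` name and the echoed lines are illustrative; the PowerShell equivalent in this thread is `2>&1 | % ToString | Tee-Object serve.log`):

```shell
# Merge stderr into the same capture as stdout, as the PowerShell
# pipeline `2>&1 | % ToString | Tee-Object serve.log` does on Windows.
# The two echo lines stand in for server stdout and runner stderr output.
{ echo "stdout: server config"; echo "stderr: load_backend message" >&2; } \
  > serve.log 2>&1
cat serve.log   # both lines are present in the capture
```

The order of the redirections is what makes this work: `> serve.log` points stdout at the file first, then `2>&1` points stderr at wherever stdout already goes.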

@rick-github commented on GitHub (Oct 30, 2025):

Confirmed, no GPU; it's a VM running Windows 11 Home 24H2. The last line shown is as far as it gets, which is different from the original post. The client just sits on the spinner waiting for the server:

bill@DESKTOP-U51LGBR C:\Users\bill\tmp>ollama-0.12.7\ollama.exe run qwen2.5:0.5b hello
⠙

If I change 0.12.7 to 0.12.3 in both server and client paths, the model loads and the client prints a response:

bill@DESKTOP-U51LGBR C:\Users\bill\tmp>ollama-0.12.3\ollama.exe run qwen2.5:0.5b hello
⠼ Hello! How can I assist you today?
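
The hang can also be surfaced from the HTTP API directly rather than watching the CLI spinner: against a wedged server, a generate request simply never returns. A hedged sketch (the timeout value is arbitrary; `/api/generate` is Ollama's documented generate endpoint, and the model name matches the one used in this thread):

```shell
# Probe a possibly-wedged server: give the request a hard deadline so a
# stuck model load surfaces as a timeout instead of an endless spinner.
curl --silent --max-time 120 http://127.0.0.1:11434/api/generate \
  -d '{"model": "qwen2.5:0.5b", "prompt": "hello", "stream": false}' \
  || echo "no response within deadline: server may be wedged loading the model"
```

On a healthy server this prints a JSON response well inside the deadline; on a wedged one, curl exits non-zero after 120 s and the fallback message fires.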
@dhiltgen commented on GitHub (Oct 30, 2025):

I tried to repro with a Hyper-V VM running Win 11 and I'm not able to get it to hang. Is there anything else unique/unusual about your setup you can think of I should try?

@rick-github commented on GitHub (Oct 31, 2025):

Nothing unique as far as I know: a QEMU VM on a Linux host, installed from a Microsoft ISO image, not activated, no device passthrough. Let me poke around a bit and see if I can isolate anything that results in a change in behaviour.

@dhiltgen commented on GitHub (Nov 5, 2025):

@rick-github I've merged some additional trace logging on main that may help narrow down where the hang is. My current suspicion is it's related to windows specific CPU discovery. If you have the ability to build a windows binary and try it out on your VM, maybe we'll see a clear signal from the new logs.

@rick-github commented on GitHub (Nov 5, 2025):

I tried creating a build environment when I first came across the issue, with no luck. Let me give it another go.

@dhiltgen commented on GitHub (Nov 5, 2025):

I'm pretty sure I have a fix for it now, so no need to try to get the logging.

@rick-github commented on GitHub (Nov 5, 2025):

Confirm that 0.12.10-rc1 resolves the issue.


Reference: github-starred/ollama#34187