[GH-ISSUE #10873] Windows 10 with RTX 5060 16GB cannot run Qwen3:14b #32904

Closed
opened 2026-04-22 14:50:32 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @VacantHusky on GitHub (May 27, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10873

What is the issue?

System: Windows 10 Pro 22H2

ollama: 0.7.1

env: OLLAMA_GPU_LAYER=cuda; OLLAMA_DEBUG=1

Image: https://github.com/user-attachments/assets/4597cfa3-1d5d-4056-b792-6144b15c5920

`ollama run qwen3:8b` runs successfully.

Running `ollama run qwen3:14b` results in an error:

C:\Users\Lenovo>ollama run qwen3:14b
Error: llama runner process has terminated: cudaMalloc failed: out of memory
C:\Users\Lenovo>ollama run qwen3:14b
Error: llama runner process has terminated: GGML_ASSERT(ctx->mem_buffer != NULL) failed
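One common mitigation in cases like this (my suggestion, not something proposed in the thread) is to shrink the requested context window so the KV-cache allocation fits in the remaining VRAM; `OLLAMA_CONTEXT_LENGTH` is one of the variables visible in the server config log below:

```shell
:: Hypothetical workaround sketch (Windows cmd): a smaller context window
:: shrinks the KV-cache buffer whose allocation failed.
set OLLAMA_CONTEXT_LENGTH=2048
ollama serve
```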

Image: https://github.com/user-attachments/assets/36df1e94-9df6-4ccc-a4f5-b68b9a762ffc

Relevant log output

time=2025-05-27T12:32:43.205+08:00 level=INFO source=routes.go:1205 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\Lenovo\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-05-27T12:32:43.239+08:00 level=INFO source=images.go:463 msg="total blobs: 13"
time=2025-05-27T12:32:43.240+08:00 level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-27T12:32:43.242+08:00 level=INFO source=routes.go:1258 msg="Listening on [::]:11434 (version 0.7.1)"
time=2025-05-27T12:32:43.242+08:00 level=DEBUG source=sched.go:108 msg="starting llm scheduler"
time=2025-05-27T12:32:43.243+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-05-27T12:32:43.244+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-05-27T12:32:43.244+08:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-05-27T12:32:43.244+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=14 efficiency=8 threads=20
time=2025-05-27T12:32:43.244+08:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-05-27T12:32:43.245+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvml.dll
time=2025-05-27T12:32:43.245+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvml.dll C:\\Program Files\\Python311\\Scripts\\nvml.dll C:\\Program Files\\Python311\\nvml.dll C:\\Windows\\system32\\nvml.dll C:\\Windows\\nvml.dll C:\\Windows\\System32\\Wbem\\nvml.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll C:\\Windows\\System32\\OpenSSH\\nvml.dll C:\\Program Files (x86)\\Microsoft SQL Server\\100\\Tools\\Binn\\nvml.dll C:\\Program Files\\Microsoft SQL Server\\100\\Tools\\Binn\\nvml.dll C:\\Program Files\\Microsoft SQL Server\\100\\DTS\\Binn\\nvml.dll C:\\Program Files (x86)\\Microsoft SQL Server\\100\\Tools\\Binn\\VSShell\\Common7\\IDE\\nvml.dll C:\\Program Files (x86)\\Microsoft Visual Studio 9.0\\Common7\\IDE\\PrivateAssemblies\\nvml.dll C:\\Program Files (x86)\\Microsoft SQL Server\\100\\DTS\\Binn\\nvml.dll C:\\Program Files (x86)\\cvsnt\\nvml.dll C:\\Program Files\\NVIDIA Corporation\\NVIDIA App\\NvDLISR\\nvml.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll C:\\Users\\Lenovo\\AppData\\Roaming\\Python\\Python311\\Scripts\\nvml.dll C:\\Users\\Lenovo\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Ollama\\nvml.dll C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Ollama\\nvml.dll C:\\Users\\Lenovo\\.lmstudio\\bin\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-05-27T12:32:43.245+08:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll"
time=2025-05-27T12:32:43.246+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[C:\\Windows\\system32\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-05-27T12:32:43.270+08:00 level=DEBUG source=gpu.go:111 msg="nvidia-ml loaded" library=C:\Windows\system32\nvml.dll
time=2025-05-27T12:32:43.270+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvcuda.dll
time=2025-05-27T12:32:43.270+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvcuda.dll C:\\Program Files\\Python311\\Scripts\\nvcuda.dll C:\\Program Files\\Python311\\nvcuda.dll C:\\Windows\\system32\\nvcuda.dll C:\\Windows\\nvcuda.dll C:\\Windows\\System32\\Wbem\\nvcuda.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll C:\\Windows\\System32\\OpenSSH\\nvcuda.dll C:\\Program Files (x86)\\Microsoft SQL Server\\100\\Tools\\Binn\\nvcuda.dll C:\\Program Files\\Microsoft SQL Server\\100\\Tools\\Binn\\nvcuda.dll C:\\Program Files\\Microsoft SQL Server\\100\\DTS\\Binn\\nvcuda.dll C:\\Program Files (x86)\\Microsoft SQL Server\\100\\Tools\\Binn\\VSShell\\Common7\\IDE\\nvcuda.dll C:\\Program Files (x86)\\Microsoft Visual Studio 9.0\\Common7\\IDE\\PrivateAssemblies\\nvcuda.dll C:\\Program Files (x86)\\Microsoft SQL Server\\100\\DTS\\Binn\\nvcuda.dll C:\\Program Files (x86)\\cvsnt\\nvcuda.dll C:\\Program Files\\NVIDIA Corporation\\NVIDIA App\\NvDLISR\\nvcuda.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll C:\\Users\\Lenovo\\AppData\\Roaming\\Python\\Python311\\Scripts\\nvcuda.dll C:\\Users\\Lenovo\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll C:\\Users\\Lenovo\\.lmstudio\\bin\\nvcuda.dll c:\\windows\\system*\\nvcuda.dll]"
time=2025-05-27T12:32:43.270+08:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll"
time=2025-05-27T12:32:43.271+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[C:\Windows\system32\nvcuda.dll]

...

time=2025-05-27T12:33:27.665+08:00 level=DEBUG source=server.go:360 msg="adding gpu library" path=C:\Users\Lenovo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2025-05-27T12:33:27.665+08:00 level=DEBUG source=server.go:367 msg="adding gpu dependency paths" paths=[C:\Users\Lenovo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12]
time=2025-05-27T12:33:27.665+08:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\Lenovo\\.ollama\\models\\blobs\\sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e --ctx-size 8192 --batch-size 512 --n-gpu-layers 41 --threads 6 --no-mmap --parallel 2 --port 49912"
time=2025-05-27T12:33:27.665+08:00 level=DEBUG source=server.go:432 msg=subprocess OLLAMA_DEBUG=1 OLLAMA_GPU_LAYER=cuda OLLAMA_HOST=0.0.0.0:11434 OLLAMA_MAX_LOADED_MODELS=3 PATH="C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Program Files\\Python311\\Scripts\\;C:\\Program Files\\Python311\\;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files (x86)\\Microsoft SQL Server\\100\\Tools\\Binn\\;C:\\Program Files\\Microsoft SQL Server\\100\\Tools\\Binn\\;C:\\Program Files\\Microsoft SQL Server\\100\\DTS\\Binn\\;C:\\Program Files (x86)\\Microsoft SQL Server\\100\\Tools\\Binn\\VSShell\\Common7\\IDE\\;C:\\Program Files (x86)\\Microsoft Visual Studio 9.0\\Common7\\IDE\\PrivateAssemblies\\;C:\\Program Files (x86)\\Microsoft SQL Server\\100\\DTS\\Binn\\;C:\\Program Files (x86)\\cvsnt;C:\\Program Files\\NVIDIA Corporation\\NVIDIA App\\NvDLISR;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Users\\Lenovo\\AppData\\Roaming\\Python\\Python311\\Scripts;C:\\Users\\Lenovo\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Ollama;C:\\Users\\Lenovo\\.lmstudio\\bin;C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama" OLLAMA_LIBRARY_PATH=C:\Users\Lenovo\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\Lenovo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12 CUDA_VISIBLE_DEVICES=GPU-f5ddf176-3342-96a2-abdf-094071c2a383
time=2025-05-27T12:33:27.668+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-05-27T12:33:27.669+08:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-05-27T12:33:27.669+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-05-27T12:33:27.691+08:00 level=INFO source=runner.go:815 msg="starting go runner"
time=2025-05-27T12:33:27.695+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\Lenovo\AppData\Local\Programs\Ollama\lib\ollama
load_backend: loaded CPU backend from C:\Users\Lenovo\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2025-05-27T12:33:27.820+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\Lenovo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5060 Ti, compute capability 12.0, VMM: yes
load_backend: loaded CUDA backend from C:\Users\Lenovo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-05-27T12:33:29.475+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-05-27T12:33:29.476+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:49912"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5060 Ti) - 15072 MiB free
llama_model_loader: loaded meta data with 27 key-value pairs and 443 tensors from C:\Users\Lenovo\.ollama\models\blobs\sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.

...

llama_kv_cache_unified: layer  39: dev = CUDA0
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 1280.00 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate CUDA0 buffer of size 1342177280
llama_init_from_model: failed to initialize the context: failed to allocate buffer for kv cache
panic: unable to create llama context

goroutine 15 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0000f6360, {0x29, 0x0, 0x0, {0x0, 0x0, 0x0}, 0xc00058d770, 0x0}, {0xc00003a150, ...}, ...)
	C:/a/ollama/ollama/runner/llamarunner/runner.go:757 +0x389
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
	C:/a/ollama/ollama/runner/llamarunner/runner.go:848 +0xb57
time=2025-05-27T12:33:38.144+08:00 level=ERROR source=server.go:457 msg="llama runner terminated" error="exit status 2"
[GIN] 2025/05/27 - 12:33:38 | 500 |   10.8186378s |       127.0.0.1 | POST     "/api/generate"
time=2025-05-27T12:33:38.187+08:00 level=ERROR source=sched.go:489 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory"
time=2025-05-27T12:33:38.187+08:00 level=DEBUG source=sched.go:491 msg="triggering expiration for failed load" runner.name=registry.ollama.ai/library/qwen3:14b runner.inference=cuda runner.devices=1 runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=13316 runner.model=C:\Users\Lenovo\.ollama\models\blobs\sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e runner.num_ctx=8192
time=2025-05-27T12:33:38.187+08:00 level=DEBUG source=sched.go:364 msg="runner expired event received" runner.name=registry.ollama.ai/library/qwen3:14b runner.inference=cuda runner.devices=1 runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=13316 runner.model=C:\Users\Lenovo\.ollama\models\blobs\sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e runner.num_ctx=8192
time=2025-05-27T12:33:38.187+08:00 level=DEBUG source=sched.go:379 msg="got lock to unload expired event" runner.name=registry.ollama.ai/library/qwen3:14b runner.inference=cuda runner.devices=1 runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=13316 runner.model=C:\Users\Lenovo\.ollama\models\blobs\sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e runner.num_ctx=8192
time=2025-05-27T12:33:38.187+08:00 level=DEBUG source=sched.go:402 msg="starting background wait for VRAM recovery" runner.name=registry.ollama.ai/library/qwen3:14b runner.inference=cuda runner.devices=1 runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=13316 runner.model=C:\Users\Lenovo\.ollama\models\blobs\sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e runner.num_ctx=8192
time=2025-05-27T12:33:38.187+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="15.7 GiB" before.free="11.8 GiB" before.free_swap="10.6 GiB" now.total="15.7 GiB" now.free="11.9 GiB" now.free_swap="10.4 GiB"
time=2025-05-27T12:33:38.204+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-f5ddf176-3342-96a2-abdf-094071c2a383 name="NVIDIA GeForce RTX 5060 Ti" overhead="399.1 MiB" before.total="15.9 GiB" before.free="14.9 GiB" now.total="15.9 GiB" now.free="14.9 GiB" now.used="655.9 MiB"
releasing nvml library
time=2025-05-27T12:33:38.222+08:00 level=DEBUG source=server.go:1023 msg="stopping llama server" pid=13316
time=2025-05-27T12:33:38.222+08:00 level=DEBUG source=sched.go:407 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=13316 runner.model=C:\Users\Lenovo\.ollama\models\blobs\sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e
time=2025-05-27T12:33:38.456+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="15.7 GiB" before.free="11.9 GiB" before.free_swap="10.4 GiB" now.total="15.7 GiB" now.free="11.9 GiB" now.free_swap="10.4 GiB"
time=2025-05-27T12:33:38.466+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-f5ddf176-3342-96a2-abdf-094071c2a383 name="NVIDIA GeForce RTX 5060 Ti" overhead="399.1 MiB" before.total="15.9 GiB" before.free="14.9 GiB" now.total="15.9 GiB" now.free="14.9 GiB" now.used="656.0 MiB"
releasing nvml library
time=2025-05-27T12:33:38.467+08:00 level=DEBUG source=sched.go:700 msg="gpu VRAM free memory converged after 0.28 seconds" runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=13316 runner.model=C:\Users\Lenovo\.ollama\models\blobs\sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e
time=2025-05-27T12:33:38.467+08:00 level=DEBUG source=sched.go:410 msg="sending an unloaded event" runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=13316 runner.model=C:\Users\Lenovo\.ollama\models\blobs\sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e
time=2025-05-27T12:33:38.467+08:00 level=DEBUG source=sched.go:312 msg="ignoring unload event with no pending requests"
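As a sanity check, the failed 1280 MiB allocation is exactly an f16 KV cache at the logged `--ctx-size` of 8192. The model dimensions below are assumed from the published Qwen3-14B configuration (40 layers, 8 KV heads, head dim 128), not taken from this log:

```python
# Back-of-the-envelope size of the KV-cache buffer that failed to allocate.
# Assumed Qwen3-14B dims (not in the log): 40 layers, 8 KV heads, head_dim 128.
n_ctx = 8192          # --ctx-size from the "starting llama server" command
n_layers = 40
n_kv_heads = 8
head_dim = 128
bytes_per_elem = 2    # f16 keys and values

# 2x for the K and V tensors of each layer
kv_bytes = n_ctx * n_layers * 2 * n_kv_heads * head_dim * bytes_per_elem
print(kv_bytes)                 # 1342177280
print(kv_bytes / 2**20, "MiB")  # 1280.0 MiB
```

Under those assumptions this matches `alloc_tensor_range: failed to allocate CUDA0 buffer of size 1342177280` above byte-for-byte, i.e. the OOM hits on the KV-cache buffer rather than the weights.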

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.7.1

GiteaMirror added the bug label 2026-04-22 14:50:32 -05:00
Author
Owner

@rick-github commented on GitHub (May 27, 2025):

https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288

Author
Owner

@VacantHusky commented on GitHub (May 29, 2025):

#8597 (comment)

On another computer (Windows 11, RTX 3060 with 12GB VRAM), the qwen3:14b model runs normally. However, on this computer with an RTX 5060 Ti and 16GB of VRAM—which has more memory—it fails to run qwen3:14b. During execution, VRAM usage only reaches about 8GB before it throws an "out of memory" error. Why does it report insufficient memory even though there is still VRAM available?

Author
Owner

@VacantHusky commented on GitHub (May 30, 2025):

I am using the "Lenovo ThinkStation P368-C3" motherboard, and with this motherboard the RTX 5060 Ti cannot use its full VRAM. However, when I install the same graphics card in another computer with a "PRIME B660M-K D4" motherboard, it works normally and can use all of its VRAM.


Reference: github-starred/ollama#32904