[GH-ISSUE #11776] The latest version does not support GPU. #7805

Closed
opened 2026-04-12 19:58:41 -05:00 by GiteaMirror · 15 comments

Originally created by @239573049 on GitHub (Aug 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11776

What is the issue?

After I updated from version 0.9.3 to the latest version 0.11.3, none of the models could use the GPU; they all ran on the CPU. After rolling back to version 0.9.3, the models could use the GPU normally again.

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the needs more info and bug labels 2026-04-12 19:58:42 -05:00

@rick-github commented on GitHub (Aug 7, 2025):

[server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in debugging.

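For reference, the linked troubleshooting doc boils down to something like the following (paths assume the standard install locations; adjust if you installed elsewhere):

```shell
# Linux (systemd install): view the end of the server log
journalctl -e -u ollama

# macOS: the server log is written here
cat ~/.ollama/logs/server.log

# Windows: server.log lives in the local app data directory
explorer %LOCALAPPDATA%\Ollama
```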

@jzzhang001 commented on GitHub (Aug 7, 2025):

same issue with 0.11.3, all deployed models only run on CPU, while `ollama ps` shows "100% GPU"


@rick-github commented on GitHub (Aug 7, 2025):

[server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in debugging.


@MarcusTseng commented on GitHub (Aug 7, 2025):

> same issue with 0.11.3, all deployed models only run on CPU, while `ollama ps` shows "100% GPU"

https://github.com/ollama/ollama/issues/3201#issuecomment-3161464911
this might help


@pdevine commented on GitHub (Aug 7, 2025):

Anything would help here, like what kind of GPU you're using, what OS, and the results of `ollama ps`.

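A quick way to gather the details being asked for (assuming the standard CLI and an NVIDIA GPU; `nvidia-smi` ships with the NVIDIA driver):

```shell
ollama --version   # Ollama version
ollama ps          # loaded models and the CPU/GPU split
nvidia-smi         # NVIDIA driver version and VRAM usage
```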

@nipwd commented on GitHub (Aug 7, 2025):

Having the same issue, running Win 11, NVIDIA 3090, running from the new app.

ollama ps
NAME           ID              SIZE     PROCESSOR    CONTEXT    UNTIL
gpt-oss:20b    f2b8351c629c    15 GB    100% CPU     131072     4 minutes from now

If I run `ollama serve` via the command line it runs OK, but with an 8k context length:

ollama ps
NAME           ID              SIZE     PROCESSOR    CONTEXT    UNTIL
gpt-oss:20b    f2b8351c629c    16 GB    100% GPU     8192       4 minutes from now

Looks like the app can't use the GPU!

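Worth noting: the two runs above differ in the CONTEXT column (131072 vs 8192). When launching `ollama serve` by hand, the default context can be set explicitly via the documented `OLLAMA_CONTEXT_LENGTH` variable; a PowerShell sketch (the value is an example):

```shell
# PowerShell: start the server with an explicit 8k default context
$env:OLLAMA_CONTEXT_LENGTH = "8192"
ollama serve
```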

@rick-github commented on GitHub (Aug 7, 2025):

`num_ctx` of 131072 creates a memory graph that is too large to fit on a GPU.

[server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in further debugging.

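For reference, `num_ctx` can also be reduced per request through the API options rather than server-wide; a minimal sketch against the documented `/api/generate` endpoint (model name and value are examples):

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "gpt-oss:20b",
  "prompt": "hello",
  "options": { "num_ctx": 8192 }
}'
```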

@nipwd commented on GitHub (Aug 7, 2025):

time=2025-08-07T19:40:55.611-03:00 level=INFO source=routes.go:1297 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:131072 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\user\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-08-07T19:40:55.619-03:00 level=INFO source=images.go:477 msg="total blobs: 21"
time=2025-08-07T19:40:55.620-03:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-07T19:40:55.621-03:00 level=INFO source=routes.go:1350 msg="Listening on [::]:11434 (version 0.11.3)"
time=2025-08-07T19:40:55.621-03:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-07T19:40:55.621-03:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-08-07T19:40:55.621-03:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-08-07T19:40:55.756-03:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-a8278140-29e5-65be-a0af-c4b6c6ddd252 library=cuda variant=v12 compute=8.6 driver=13.0 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
[GIN] 2025/08/07 - 19:40:57 | 200 | 111.911ms | 172.25.112.41 | POST "/api/show"
time=2025-08-07T19:40:57.762-03:00 level=INFO source=server.go:135 msg="system memory" total="63.9 GiB" free="48.7 GiB" free_swap="54.9 GiB"
time=2025-08-07T19:40:57.762-03:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[22.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="14.8 GiB" memory.required.partial="0 B" memory.required.kv="3.1 GiB" memory.required.allocations="[0 B]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="32.0 GiB" memory.graph.partial="32.0 GiB"
time=2025-08-07T19:40:57.818-03:00 level=INFO source=server.go:438 msg="starting llama server" cmd="C:\Users\user\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model C:\Users\user\.ollama\models\blobs\sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 --ctx-size 131072 --batch-size 512 --threads 6 --no-mmap --parallel 1 --port 53456"
time=2025-08-07T19:40:57.829-03:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-08-07T19:40:57.829-03:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-08-07T19:40:57.831-03:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
time=2025-08-07T19:40:57.863-03:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-08-07T19:40:57.867-03:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:53456"
time=2025-08-07T19:40:57.922-03:00 level=INFO source=ggml.go:92 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\user\AppData\Local\Programs\Ollama\lib\ollama\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\user\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-08-07T19:40:58.016-03:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-08-07T19:40:58.082-03:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
time=2025-08-07T19:40:58.082-03:00 level=INFO source=ggml.go:367 msg="offloading 0 repeating layers to GPU"
time=2025-08-07T19:40:58.082-03:00 level=INFO source=ggml.go:371 msg="offloading output layer to CPU"
time=2025-08-07T19:40:58.082-03:00 level=INFO source=ggml.go:378 msg="offloaded 0/25 layers to GPU"
time=2025-08-07T19:40:58.082-03:00 level=INFO source=ggml.go:381 msg="model weights" buffer=CPU size="12.8 GiB"
time=2025-08-07T19:40:58.447-03:00 level=INFO source=ggml.go:672 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="0 B"
time=2025-08-07T19:40:58.447-03:00 level=INFO source=ggml.go:672 msg="compute graph" backend=CPU buffer_type=CPU size="32.0 GiB"
time=2025-08-07T19:41:00.340-03:00 level=INFO source=server.go:637 msg="llama runner started in 2.51 seconds"
[GIN] 2025/08/07 - 19:42:15 | 200 | 0s | ::1 | GET "/"
[GIN] 2025/08/07 - 19:42:15 | 404 | 0s | ::1 | GET "/favicon.ico"
[GIN] 2025/08/07 - 19:42:22 | 200 | 0s | 172.25.112.1 | GET "/"
[GIN] 2025/08/07 - 19:42:22 | 404 | 0s | 172.25.112.1 | GET "/favicon.ico"
[GIN] 2025/08/07 - 19:46:02 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/08/07 - 19:46:02 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
time=2025-08-07T19:46:21.322-03:00 level=ERROR source=server.go:807 msg="post predict" error="Post "http://127.0.0.1:53456/completion": context canceled"
[GIN] 2025/08/07 - 19:46:21 | 200 | 5m23s | 172.25.112.41 | POST "/v1/chat/completions"
[GIN] 2025/08/07 - 19:48:16 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/08/07 - 19:48:16 | 200 | 523.2µs | 127.0.0.1 | GET "/api/ps"
time=2025-08-07T19:56:30.455-03:00 level=INFO source=routes.go:1297 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:131072 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\user\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-08-07T19:56:30.465-03:00 level=INFO source=images.go:477 msg="total blobs: 21"
time=2025-08-07T19:56:30.467-03:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-07T19:56:30.469-03:00 level=INFO source=routes.go:1350 msg="Listening on [::]:11434 (version 0.11.3)"
time=2025-08-07T19:56:30.469-03:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-07T19:56:30.469-03:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-08-07T19:56:30.469-03:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-08-07T19:56:30.651-03:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-a8278140-29e5-65be-a0af-c4b6c6ddd252 library=cuda variant=v12 compute=8.6 driver=13.0 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
[GIN] 2025/08/07 - 19:56:30 | 200 | 3.139ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/08/07 - 19:56:30 | 200 | 149.3698ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/08/07 - 19:56:39 | 200 | 8.7886ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/08/07 - 19:56:40 | 200 | 171.566ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/08/07 - 19:56:40 | 200 | 157.3716ms | 127.0.0.1 | POST "/api/show"
time=2025-08-07T19:56:40.556-03:00 level=INFO source=server.go:135 msg="system memory" total="63.9 GiB" free="34.6 GiB" free_swap="12.1 GiB"
time=2025-08-07T19:56:40.556-03:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[22.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="14.8 GiB" memory.required.partial="0 B" memory.required.kv="3.1 GiB" memory.required.allocations="[0 B]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="32.0 GiB" memory.graph.partial="32.0 GiB"
time=2025-08-07T19:56:40.652-03:00 level=INFO source=server.go:438 msg="starting llama server" cmd="C:\Users\user\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model C:\Users\user\.ollama\models\blobs\sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 --ctx-size 131072 --batch-size 512 --threads 6 --no-mmap --parallel 1 --port 53930"
time=2025-08-07T19:56:40.657-03:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-08-07T19:56:40.657-03:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-08-07T19:56:40.657-03:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
time=2025-08-07T19:56:40.700-03:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-08-07T19:56:40.704-03:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:53930"
time=2025-08-07T19:56:40.781-03:00 level=INFO source=ggml.go:92 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\user\AppData\Local\Programs\Ollama\lib\ollama\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\user\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-08-07T19:56:40.883-03:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-08-07T19:56:40.908-03:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
time=2025-08-07T19:56:40.961-03:00 level=INFO source=ggml.go:367 msg="offloading 0 repeating layers to GPU"
time=2025-08-07T19:56:40.961-03:00 level=INFO source=ggml.go:371 msg="offloading output layer to CPU"
time=2025-08-07T19:56:40.961-03:00 level=INFO source=ggml.go:378 msg="offloaded 0/25 layers to GPU"
time=2025-08-07T19:56:40.961-03:00 level=INFO source=ggml.go:381 msg="model weights" buffer=CPU size="12.8 GiB"
time=2025-08-07T19:56:41.479-03:00 level=INFO source=ggml.go:672 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="0 B"
time=2025-08-07T19:56:41.479-03:00 level=INFO source=ggml.go:672 msg="compute graph" backend=CPU buffer_type=CPU size="32.0 GiB"
time=2025-08-07T19:56:45.734-03:00 level=INFO source=server.go:637 msg="llama runner started in 5.08 seconds"
[GIN] 2025/08/07 - 19:57:24 | 200 | 43.7278472s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/08/07 - 19:57:41 | 200 | 502.8µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/08/07 - 19:57:41 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/08/07 - 19:57:41 | 200 | 0s | 172.25.112.41 | GET "/api/version"
[GIN] 2025/08/07 - 19:57:41 | 200 | 3.2737ms | 172.25.112.41 | GET "/api/tags"
[GIN] 2025/08/07 - 19:57:41 | 200 | 199.3754ms | 172.25.112.41 | POST "/api/show"
[GIN] 2025/08/07 - 19:57:41 | 200 | 160.5994ms | 172.25.112.41 | POST "/api/show"
[GIN] 2025/08/07 - 19:57:41 | 200 | 42.8294ms | 172.25.112.41 | POST "/api/show"
[GIN] 2025/08/07 - 19:57:41 | 200 | 96.5885ms | 172.25.112.41 | POST "/api/show"
[GIN] 2025/08/07 - 19:57:41 | 200 | 73.2893ms | 172.25.112.41 | POST "/api/show"
[GIN] 2025/08/07 - 19:58:01 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/08/07 - 19:58:01 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/08/07 - 19:58:53 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/08/07 - 19:58:53 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
time=2025-08-07T19:59:04.815-03:00 level=ERROR source=server.go:807 msg="post predict" error="Post "http://127.0.0.1:53930/completion": context canceled"
[GIN] 2025/08/07 - 19:59:04 | 200 | 1m12s | 172.25.112.41 | POST "/v1/chat/completions"


@rick-github commented on GitHub (Aug 7, 2025):

```
time=2025-08-07T19:56:40.556-03:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[22.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="14.8 GiB" memory.required.partial="0 B" memory.required.kv="3.1 GiB" memory.required.allocations="[0 B]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="32.0 GiB" memory.graph.partial="32.0 GiB"
```

Context size of 131072 creates a memory graph of 32 GiB that will not fit in the 22.2 GiB available, hence the model is loaded in RAM. You can reduce [`OLLAMA_CONTEXT_LENGTH`](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-specify-the-context-window-size), or set [`OLLAMA_FLASH_ATTENTION`](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-enable-flash-attention) and [`OLLAMA_KV_CACHE_TYPE`](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-set-the-quantization-type-for-the-kv-cache) to reduce the memory footprint.
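A sketch of those three settings as environment variables before starting the server; the values are examples, and `q8_0` is one of the documented KV cache quantization types:

```shell
# Linux/macOS shell syntax; use $env: in PowerShell or setx on Windows
export OLLAMA_CONTEXT_LENGTH=16384
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE=q8_0
ollama serve
```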

@rabinnh commented on GitHub (Aug 9, 2025):

FWIW, I can verify the same bug. My machine said it was using 100% GPU, but all my CPU cores were spiking and `nvidia-smi` showed almost no GPU memory being used.

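One way to watch this while a prompt is running, assuming `nvidia-smi` from the NVIDIA driver package:

```shell
# print GPU memory and utilization every 2 seconds
nvidia-smi --query-gpu=memory.used,utilization.gpu --format=csv -l 2
```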

@rick-github commented on GitHub (Aug 9, 2025):

[server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in debugging.


@hn-lyf commented on GitHub (Aug 9, 2025):

> FWIW, I can verify the same bug. My machine said it was using 100% GPU but all my CPU cores were spiking and `nvidia-smi` showed almost no GPU memory being used.

I ran into the same problem; it shows `ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version`.
It was working fine before I updated.

time=2025-08-10T01:14:09.719+08:00 level=INFO source=routes.go:1304 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:G:\Data\ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-08-10T01:14:09.725+08:00 level=INFO source=images.go:477 msg="total blobs: 34"
time=2025-08-10T01:14:09.727+08:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-10T01:14:09.727+08:00 level=INFO source=routes.go:1357 msg="Listening on [::]:11434 (version 0.11.4)"
time=2025-08-10T01:14:09.727+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-10T01:14:09.727+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-08-10T01:14:09.727+08:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-08-10T01:14:09.727+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=12 efficiency=4 threads=20
time=2025-08-10T01:14:09.845+08:00 level=WARN source=cuda_common.go:65 msg="old CUDA driver detected - please upgrade to a newer driver" version=11.8
time=2025-08-10T01:14:09.866+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-234dc178-597a-deaa-36fe-652c51bf1bc4 library=cuda compute=8.6 driver=11.8 name="NVIDIA GeForce RTX 3060" overhead="372.4 MiB"
time=2025-08-10T01:14:09.869+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-234dc178-597a-deaa-36fe-652c51bf1bc4 library=cuda variant=v11 compute=8.6 driver=11.8 name="NVIDIA GeForce RTX 3060" total="12.0 GiB" available="11.0 GiB"
time=2025-08-10T01:14:09.869+08:00 level=INFO source=routes.go:1398 msg="entering low vram mode" "total vram"="12.0 GiB" threshold="20.0 GiB"
[GIN] 2025/08/10 - 01:14:10 | 200 | 5.7678ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/08/10 - 01:14:10 | 404 | 2.6189ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/08/10 - 01:14:11 | 404 | 1.5422ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/08/10 - 01:14:13 | 404 | 6.6475ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/08/10 - 01:14:17 | 404 | 6.8883ms | 127.0.0.1 | POST "/api/show"
time=2025-08-10T01:14:31.259+08:00 level=INFO source=sched.go:786 msg="new model will fit in available VRAM in single GPU, loading" model=G:\Data\ollama\models\blobs\sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 gpu=GPU-234dc178-597a-deaa-36fe-652c51bf1bc4 parallel=1 available=11762458624 required="9.7 GiB"
time=2025-08-10T01:14:31.276+08:00 level=INFO source=server.go:135 msg="system memory" total="31.7 GiB" free="25.8 GiB" free_swap="43.2 GiB"
time=2025-08-10T01:14:31.277+08:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[11.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="9.7 GiB" memory.required.partial="9.7 GiB" memory.required.kv="768.0 MiB" memory.required.allocations="[9.7 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="348.0 MiB" memory.graph.partial="916.1 MiB"
llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from G:\Data\ollama\models\blobs\sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 14B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 14B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-1...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 14B
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-14B
llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 48
llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 13824
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 22: general.file_type u32 = 15
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q4_K: 289 tensors
llama_model_loader: - type q6_K: 49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 8.37 GiB (4.87 BPW)
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 14.77 B
print_info: general.name = Qwen2.5 14B Instruct
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-08-10T01:14:31.465+08:00 level=INFO source=server.go:438 msg="starting llama server" cmd="D:\Program Files\Ollama\ollama.exe runner --model G:\Data\ollama\models\blobs\sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 --ctx-size 4096 --batch-size 512 --n-gpu-layers 49 --threads 8 --no-mmap --parallel 1 --port 57595"
time=2025-08-10T01:14:31.468+08:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-08-10T01:14:31.468+08:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-08-10T01:14:31.469+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
time=2025-08-10T01:14:31.505+08:00 level=INFO source=runner.go:815 msg="starting go runner"
ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version
load_backend: loaded CUDA backend from D:\Program Files\Ollama\lib\ollama\ggml-cuda.dll
load_backend: loaded CPU backend from D:\Program Files\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2025-08-10T01:14:31.589+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-08-10T01:14:31.593+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:57595"
llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from G:\Data\ollama\models\blobs\sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 (version GGUF V3 (latest))


@rick-github commented on GitHub (Aug 9, 2025):

```
time=2025-08-10T01:14:09.845+08:00 level=WARN source=cuda_common.go:65 msg="old CUDA driver detected - please upgrade to a newer driver" version=11.8
```

Ollama no longer supports CUDA v11. Upgrade your NVIDIA driver to use recent versions of Ollama.
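To confirm what the installed driver supports before and after upgrading (assuming an NVIDIA driver is present):

```shell
# the header reports both "Driver Version" and the CUDA version it supports;
# the warning above corresponds to a CUDA 11.8-era driver
nvidia-smi
```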

@azomDev commented on GitHub (Aug 10, 2025):

Apart from the old CUDA driver, this issue might be related to #11676


@rick-github commented on GitHub (Sep 1, 2025):

Recent releases of Ollama have reduced the memory footprint of gpt-oss. Upgrade and leave a comment if the problem persists.

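For reference, on Linux re-running the official install script upgrades in place; on Windows and macOS the app updates itself or can be re-downloaded:

```shell
# Linux: re-running the install script upgrades to the latest release
curl -fsSL https://ollama.com/install.sh | sh
```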
Reference: github-starred/ollama#7805