[GH-ISSUE #10823] Ollama not using GPU when CURLing local API #53618

Closed
opened 2026-04-29 04:16:23 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @2jfs904judsw20600jikn613d0dookl23jsig on GitHub (May 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10823

What is the issue?

I'm non-technical. Ask follow-up questions if more information is needed.

When running the following command from within my Python app:

```
curl.exe -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d "{\"model\": \"gemma3:27b-it-qat\", \"messages\": [{\"role\": \"user\", \"content\": \"hi\"}], \"stream\": false, \"options\": {\"num_gpu\": -1}}"
```
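
The same request from Python looks roughly like this (a minimal sketch; it assumes the third-party `requests` package rather than whatever HTTP client the app actually uses):

```python
import requests  # assumption: the "requests" package; the app's real HTTP client is unknown

payload = {
    "model": "gemma3:27b-it-qat",
    "messages": [{"role": "user", "content": "hi"}],
    "stream": False,
    "options": {"num_gpu": -1},  # -1 = let the scheduler pick how many layers to offload
}
resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json()["message"]["content"])  # non-streaming /api/chat returns a single message
```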

It executes correctly, hitting the Ollama local server API. The models fit fully into VRAM (with sufficient extra room), yet CPU usage sits around 50-70%, so the work is clearly being split between the GPU and CPU.
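
A quick way to confirm the split (a sketch: `/api/ps`, which also appears in the log below, reports `size` and `size_vram` for each loaded model, and `size_vram < size` means some layers spilled to system RAM):

```python
import requests

ps = requests.get("http://localhost:11434/api/ps", timeout=10).json()
for m in ps.get("models", []):
    size, vram = m["size"], m["size_vram"]
    pct = 100 * vram / size if size else 0
    # 100% means fully offloaded; anything less is a GPU/CPU split
    print(f"{m['name']}: {vram / 2**30:.1f} / {size / 2**30:.1f} GiB in VRAM ({pct:.0f}%)")
```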

When using `ollama run <model:tag>`, the GPU is utilized as expected, at 100% during inference.
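
Per the offload lines in the log below, `num_gpu: -1` shows up as `layers.requested=-1` and the scheduler settles on 49 of the model's 63 layers. A hypothetical experiment is to pin `num_gpu` to the full layer count, which maps to the runner's `--n-gpu-layers` flag (the load may fail if VRAM actually runs out):

```python
import requests  # hypothetical experiment, not something the report actually ran

payload = {
    "model": "gemma3:27b-it-qat",
    "messages": [{"role": "user", "content": "hi"}],
    "stream": False,
    # num_gpu maps to the runner's --n-gpu-layers; 63 is the layer count the
    # offload log reports for this model (layers.model=63). Forcing it may OOM.
    "options": {"num_gpu": 63},
}
r = requests.post("http://localhost:11434/api/chat", json=payload, timeout=600)
print(r.json().get("message", {}).get("content"))
```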

PC Specs:

  • Windows 11
  • Intel 285k CPU
  • Nvidia 4090 GPU

Models exhibiting the behavior:

  1. gemma3:27b-it-qat 18 GB
  2. qwen2.5vl:32b-q4_K_M 21 GB

This behavior never happened on this machine with other apps that use Ollama before the 0.7.0 update (I haven't tried those apps since the update and don't really intend to).

Relevant log output

```shell
[GIN] 2025/05/22 - 17:52:10 | 200 |         3m18s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-22T17:57:15.333-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0119824 runner.size="23.4 GiB" runner.vram="17.8 GiB" runner.parallel=1 runner.pid=2556 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87
time=2025-05-22T17:57:15.583-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2621407 runner.size="23.4 GiB" runner.vram="17.8 GiB" runner.parallel=1 runner.pid=2556 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87
time=2025-05-22T17:57:15.834-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5122889 runner.size="23.4 GiB" runner.vram="17.8 GiB" runner.parallel=1 runner.pid=2556 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87
[GIN] 2025/05/22 - 17:57:30 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/22 - 17:57:30 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/05/22 - 17:59:04 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/22 - 17:59:04 | 200 |     37.6968ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-22T17:59:04.661-07:00 level=INFO source=server.go:135 msg="system memory" total="63.3 GiB" free="41.6 GiB" free_swap="37.0 GiB"
time=2025-05-22T17:59:04.662-07:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=49 layers.split="" memory.available="[17.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="23.4 GiB" memory.required.partial="17.8 GiB" memory.required.kv="944.0 MiB" memory.required.allocations="[17.8 GiB]" memory.weights.total="16.0 GiB" memory.weights.repeating="13.4 GiB" memory.weights.nonrepeating="2.6 GiB" memory.graph.full="522.5 MiB" memory.graph.partial="1.6 GiB" projector.weights="806.2 MiB" projector.graph="1.0 GiB"
time=2025-05-22T17:59:04.692-07:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 --ctx-size 4096 --batch-size 512 --n-gpu-layers 49 --threads 8 --no-mmap --parallel 1 --port 56475"
time=2025-05-22T17:59:04.695-07:00 level=INFO source=sched.go:472 msg="loaded runners" count=1
time=2025-05-22T17:59:04.695-07:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"        
time=2025-05-22T17:59:04.696-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-05-22T17:59:04.729-07:00 level=INFO source=runner.go:836 msg="starting ollama engine"
time=2025-05-22T17:59:04.730-07:00 level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:56475"
time=2025-05-22T17:59:04.746-07:00 level=INFO source=ggml.go:73 msg="" architecture=gemma3 file_type=Q4_0 name="" description="" num_tensors=1247 num_key_values=40
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll        
time=2025-05-22T17:59:04.842-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-05-22T17:59:04.916-07:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="10.6 GiB"
time=2025-05-22T17:59:04.916-07:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="8.9 GiB"
time=2025-05-22T17:59:04.947-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
time=2025-05-22T17:59:07.584-07:00 level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="288.0 MiB"
time=2025-05-22T17:59:07.584-07:00 level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="274.5 MiB"
time=2025-05-22T17:59:07.704-07:00 level=INFO source=server.go:630 msg="llama runner started in 3.01 seconds"
[GIN] 2025/05/22 - 17:59:22 | 200 |   18.0644609s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/05/22 - 18:01:24 | 200 |   15.8832629s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/05/22 - 18:03:54 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/22 - 18:03:54 | 200 |      1.0355ms |       127.0.0.1 | GET      "/api/tags"
time=2025-05-22T18:04:14.400-07:00 level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 library=cuda total="24.0 GiB" available="5.7 GiB"
time=2025-05-22T18:04:19.434-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0290693 runner.size="23.4 GiB" runner.vram="17.8 GiB" runner.parallel=1 runner.pid=42416 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87
time=2025-05-22T18:04:19.515-07:00 level=INFO source=server.go:135 msg="system memory" total="63.3 GiB" free="41.5 GiB" free_swap="36.7 GiB"
time=2025-05-22T18:04:19.517-07:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=45 layers.split="" memory.available="[17.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="23.8 GiB" memory.required.partial="17.7 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[17.7 GiB]" memory.weights.total="18.1 GiB" memory.weights.repeating="17.5 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="853.3 MiB" memory.graph.partial="853.3 MiB" projector.weights="1.2 GiB" projector.graph="1.6 GiB"
time=2025-05-22T18:04:19.548-07:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66 --ctx-size 4096 --batch-size 512 --n-gpu-layers 45 --threads 8 --no-mmap --parallel 1 --port 56932"
time=2025-05-22T18:04:19.551-07:00 level=INFO source=sched.go:472 msg="loaded runners" count=1
time=2025-05-22T18:04:19.551-07:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"        
time=2025-05-22T18:04:19.552-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-05-22T18:04:19.589-07:00 level=INFO source=runner.go:836 msg="starting ollama engine"
time=2025-05-22T18:04:19.590-07:00 level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:56932"
time=2025-05-22T18:04:19.605-07:00 level=INFO source=ggml.go:73 msg="" architecture=qwen25vl file_type=Q4_K_M name="" description="" num_tensors=1290 num_key_values=36
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2025-05-22T18:04:19.684-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2789744 runner.size="23.4 GiB" runner.vram="17.8 GiB" runner.parallel=1 runner.pid=42416 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll        
time=2025-05-22T18:04:19.707-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-05-22T18:04:19.790-07:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="7.5 GiB"
time=2025-05-22T18:04:19.790-07:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="12.2 GiB"
time=2025-05-22T18:04:19.803-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
time=2025-05-22T18:04:19.934-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5289526 runner.size="23.4 GiB" runner.vram="17.8 GiB" runner.parallel=1 runner.pid=42416 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87
time=2025-05-22T18:04:28.570-07:00 level=WARN source=server.go:598 msg="client connection closed before server finished loading, aborting load"
time=2025-05-22T18:04:28.570-07:00 level=ERROR source=sched.go:478 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
[GIN] 2025/05/22 - 18:04:28 | 499 |    14.261202s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-22T18:04:33.585-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0155523 runner.size="23.8 GiB" runner.vram="17.7 GiB" runner.parallel=1 runner.pid=6124 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66
time=2025-05-22T18:04:33.835-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2654386 runner.size="23.8 GiB" runner.vram="17.7 GiB" runner.parallel=1 runner.pid=6124 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66
time=2025-05-22T18:04:34.085-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5155626 runner.size="23.8 GiB" runner.vram="17.7 GiB" runner.parallel=1 runner.pid=6124 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66
[GIN] 2025/05/22 - 18:05:27 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/22 - 18:05:27 | 200 |       3.141ms |       127.0.0.1 | GET      "/api/tags"
time=2025-05-22T18:05:42.626-07:00 level=INFO source=server.go:135 msg="system memory" total="63.3 GiB" free="41.8 GiB" free_swap="37.0 GiB"
time=2025-05-22T18:05:42.627-07:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=45 layers.split="" memory.available="[17.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="23.8 GiB" memory.required.partial="17.7 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[17.7 GiB]" memory.weights.total="18.1 GiB" memory.weights.repeating="17.5 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="853.3 MiB" memory.graph.partial="853.3 MiB" projector.weights="1.2 GiB" projector.graph="1.6 GiB"
time=2025-05-22T18:05:42.656-07:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66 --ctx-size 4096 --batch-size 512 --n-gpu-layers 45 --threads 8 --no-mmap --parallel 1 --port 57081"
time=2025-05-22T18:05:42.659-07:00 level=INFO source=sched.go:472 msg="loaded runners" count=1
time=2025-05-22T18:05:42.659-07:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"        
time=2025-05-22T18:05:42.659-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-05-22T18:05:42.693-07:00 level=INFO source=runner.go:836 msg="starting ollama engine"
time=2025-05-22T18:05:42.694-07:00 level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:57081"
time=2025-05-22T18:05:42.709-07:00 level=INFO source=ggml.go:73 msg="" architecture=qwen25vl file_type=Q4_K_M name="" description="" num_tensors=1290 num_key_values=36
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll        
time=2025-05-22T18:05:42.811-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-05-22T18:05:42.888-07:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="7.5 GiB"
time=2025-05-22T18:05:42.889-07:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="12.2 GiB"
time=2025-05-22T18:05:42.911-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
time=2025-05-22T18:05:45.989-07:00 level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="348.0 MiB"
time=2025-05-22T18:05:45.989-07:00 level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="340.0 MiB"
time=2025-05-22T18:05:46.169-07:00 level=INFO source=server.go:630 msg="llama runner started in 3.51 seconds"
[GIN] 2025/05/22 - 18:05:51 | 200 |    9.4155777s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-22T18:05:52.311-07:00 level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 library=cuda total="24.0 GiB" available="4.1 GiB"
time=2025-05-22T18:05:52.311-07:00 level=INFO source=sched.go:777 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\admin\.ollama\models\blobs\sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 parallel=2 available=4450906112 required="3.9 GiB"
time=2025-05-22T18:05:52.334-07:00 level=INFO source=server.go:135 msg="system memory" total="63.3 GiB" free="33.4 GiB" free_swap="14.4 GiB"
time=2025-05-22T18:05:52.334-07:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[4.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.9 GiB" memory.required.partial="3.9 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[3.9 GiB]" memory.weights.total="1.9 GiB" memory.weights.repeating="1.6 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="424.0 MiB" memory.graph.partial="570.7 MiB"
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from C:\Users\admin\.ollama\models\blobs\sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   8:                          llama.block_count u32              = 28
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  18:                          general.file_type u32              = 15
llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.87 GiB (5.01 BPW)
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 3.21 B
print_info: general.name     = Llama 3.2 3B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-05-22T18:05:52.532-07:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 8 --no-mmap --parallel 2 --port 57096"
time=2025-05-22T18:05:52.534-07:00 level=INFO source=sched.go:472 msg="loaded runners" count=2
time=2025-05-22T18:05:52.534-07:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"        
time=2025-05-22T18:05:52.536-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-05-22T18:05:52.582-07:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll        
time=2025-05-22T18:05:52.664-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-05-22T18:05:52.665-07:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:57096"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) - 22994 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from C:\Users\admin\.ollama\models\blobs\sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   8:                          llama.block_count u32              = 28
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  18:                          general.file_type u32              = 15
llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
time=2025-05-22T18:05:52.789-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.87 GiB (5.01 BPW)
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 3072
print_info: n_layer          = 28
print_info: n_head           = 24
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 3
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 8192
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 3B
print_info: model params     = 3.21 B
print_info: general.name     = Llama 3.2 3B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 28 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 29/29 layers to GPU
load_tensors:        CUDA0 model buffer size =  1918.35 MiB
load_tensors:          CPU model buffer size =   308.23 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 2
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 1024
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     1.00 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1, padding = 32
llama_kv_cache_unified:      CUDA0 KV buffer size =   896.00 MiB
llama_kv_cache_unified: KV self size  =  896.00 MiB, K (f16):  448.00 MiB, V (f16):  448.00 MiB
llama_context:      CUDA0 compute buffer size =   424.00 MiB
llama_context:  CUDA_Host compute buffer size =    22.01 MiB
llama_context: graph nodes  = 958
llama_context: graph splits = 2
time=2025-05-22T18:05:54.799-07:00 level=INFO source=server.go:630 msg="llama runner started in 2.27 seconds"
[GIN] 2025/05/22 - 18:05:54 | 200 |    2.7512694s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/05/22 - 18:07:12 | 500 |   27.1341562s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/05/22 - 18:07:39 | 500 |   19.4662615s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/05/22 - 18:08:24 | 400 |      1.0006ms |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/05/22 - 18:08:54 | 400 |            0s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/05/22 - 18:09:41 | 400 |            0s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/05/22 - 18:11:51 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/22 - 18:11:51 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/05/22 - 18:11:59 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/22 - 18:11:59 | 200 |      2.0667ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/05/22 - 18:12:09 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/22 - 18:12:09 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/05/22 - 18:12:13 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/22 - 18:12:13 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/05/22 - 18:13:07 | 200 |         2m55s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/05/22 - 18:13:34 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/22 - 18:13:34 | 200 |       2.075ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/05/22 - 18:13:37 | 500 |         1m30s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/05/22 - 18:13:37 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/22 - 18:13:37 | 200 |       514.6µs |       127.0.0.1 | GET      "/api/ps"
time=2025-05-22T18:13:42.496-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0173303 runner.size="23.8 GiB" runner.vram="17.7 GiB" runner.parallel=1 runner.pid=33316 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66
time=2025-05-22T18:13:42.746-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2671461 runner.size="23.8 GiB" runner.vram="17.7 GiB" runner.parallel=1 runner.pid=33316 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66
time=2025-05-22T18:13:42.996-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5171999 runner.size="23.8 GiB" runner.vram="17.7 GiB" runner.parallel=1 runner.pid=33316 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66
time=2025-05-22T18:14:51.845-07:00 level=INFO source=server.go:135 msg="system memory" total="63.3 GiB" free="45.6 GiB" free_swap="45.1 GiB"
time=2025-05-22T18:14:51.847-07:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=59 layers.split="" memory.available="[21.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="23.8 GiB" memory.required.partial="21.7 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[21.7 GiB]" memory.weights.total="18.1 GiB" memory.weights.repeating="17.5 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="853.3 MiB" memory.graph.partial="853.3 MiB" projector.weights="1.2 GiB" projector.graph="1.6 GiB"
time=2025-05-22T18:14:51.871-07:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66 --ctx-size 4096 --batch-size 512 --n-gpu-layers 59 --threads 8 --no-mmap --parallel 1 --port 58011"
time=2025-05-22T18:14:51.874-07:00 level=INFO source=sched.go:472 msg="loaded runners" count=1
time=2025-05-22T18:14:51.874-07:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"        
time=2025-05-22T18:14:51.876-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-05-22T18:14:51.907-07:00 level=INFO source=runner.go:836 msg="starting ollama engine"
time=2025-05-22T18:14:51.908-07:00 level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:58011"
time=2025-05-22T18:14:51.922-07:00 level=INFO source=ggml.go:73 msg="" architecture=qwen25vl file_type=Q4_K_M name="" description="" num_tensors=1290 num_key_values=36
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll        
time=2025-05-22T18:14:52.033-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-05-22T18:14:52.108-07:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="16.0 GiB"
time=2025-05-22T18:14:52.108-07:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="3.7 GiB"
time=2025-05-22T18:14:52.128-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
time=2025-05-22T18:14:54.998-07:00 level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="348.0 MiB"
time=2025-05-22T18:14:54.998-07:00 level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="340.0 MiB"
time=2025-05-22T18:14:55.135-07:00 level=INFO source=server.go:630 msg="llama runner started in 3.26 seconds"
[GIN] 2025/05/22 - 18:15:53 | 500 |          1m1s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-22T18:16:03.318-07:00 level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 library=cuda total="24.0 GiB" available="2.3 GiB"
time=2025-05-22T18:16:03.690-07:00 level=INFO source=server.go:135 msg="system memory" total="63.3 GiB" free="45.6 GiB" free_swap="45.1 GiB"
time=2025-05-22T18:16:03.691-07:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=62 layers.split="" memory.available="[21.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="23.4 GiB" memory.required.partial="20.8 GiB" memory.required.kv="944.0 MiB" memory.required.allocations="[20.8 GiB]" memory.weights.total="16.0 GiB" memory.weights.repeating="13.4 GiB" memory.weights.nonrepeating="2.6 GiB" memory.graph.full="522.5 MiB" memory.graph.partial="1.6 GiB" projector.weights="806.2 MiB" projector.graph="1.0 GiB"
time=2025-05-22T18:16:03.721-07:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 --ctx-size 4096 --batch-size 512 --n-gpu-layers 62 --threads 8 --no-mmap --parallel 1 --port 58124"
time=2025-05-22T18:16:03.724-07:00 level=INFO source=sched.go:472 msg="loaded runners" count=1
time=2025-05-22T18:16:03.724-07:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"        
time=2025-05-22T18:16:03.724-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-05-22T18:16:03.758-07:00 level=INFO source=runner.go:836 msg="starting ollama engine"
```
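
For reference, the first offload line above decodes as a capacity problem: the scheduler saw only 17.8 GiB available on the 24 GiB card, less than the 23.4 GiB a full offload needs, so it kept 49 of 63 layers on the GPU. A rough sketch of that arithmetic (values copied from the 17:59:04 entry, not the scheduler's actual code):

```python
# Values copied from the 17:59:04 "offload" log line above (gemma3:27b-it-qat).
layers_model = 63        # layers.model: total layers in the model
layers_offload = 49      # layers.offload: layers the scheduler put on the GPU
required_full = 23.4     # memory.required.full (GiB): whole model on GPU
available = 17.8         # memory.available (GiB): what the scheduler saw as free

# The split roughly tracks how much of the full requirement fits:
print(f"fits: {available / required_full:.0%}, "
      f"offloaded: {layers_offload / layers_model:.0%} of layers")
# -> fits: 76%, offloaded: 78% of layers
```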

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.7.0

(mmap = false) load_tensors: offloading 28 repeating layers to GPU load_tensors: offloading output layer to GPU load_tensors: offloaded 29/29 layers to GPU load_tensors: CUDA0 model buffer size = 1918.35 MiB load_tensors: CPU model buffer size = 308.23 MiB llama_context: constructing llama_context llama_context: n_seq_max = 2 llama_context: n_ctx = 8192 llama_context: n_ctx_per_seq = 4096 llama_context: n_batch = 1024 llama_context: n_ubatch = 512 llama_context: causal_attn = 1 llama_context: flash_attn = 0 llama_context: freq_base = 500000.0 llama_context: freq_scale = 1 llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized llama_context: CUDA_Host output buffer size = 1.00 MiB llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1, padding = 32 llama_kv_cache_unified: CUDA0 KV buffer size = 896.00 MiB llama_kv_cache_unified: KV self size = 896.00 MiB, K (f16): 448.00 MiB, V (f16): 448.00 MiB llama_context: CUDA0 compute buffer size = 424.00 MiB llama_context: CUDA_Host compute buffer size = 22.01 MiB llama_context: graph nodes = 958 llama_context: graph splits = 2 time=2025-05-22T18:05:54.799-07:00 level=INFO source=server.go:630 msg="llama runner started in 2.27 seconds" [GIN] 2025/05/22 - 18:05:54 | 200 | 2.7512694s | 127.0.0.1 | POST "/api/chat" [GIN] 2025/05/22 - 18:07:12 | 500 | 27.1341562s | 127.0.0.1 | POST "/api/chat" [GIN] 2025/05/22 - 18:07:39 | 500 | 19.4662615s | 127.0.0.1 | POST "/api/chat" [GIN] 2025/05/22 - 18:08:24 | 400 | 1.0006ms | 127.0.0.1 | POST "/api/chat" [GIN] 2025/05/22 - 18:08:54 | 400 | 0s | 127.0.0.1 | POST "/api/chat" [GIN] 2025/05/22 - 18:09:41 | 400 | 0s | 127.0.0.1 | POST "/api/chat" [GIN] 2025/05/22 - 18:11:51 | 200 | 0s | 127.0.0.1 | HEAD "/" [GIN] 2025/05/22 - 18:11:51 | 200 | 0s | 127.0.0.1 | GET "/api/ps" [GIN] 2025/05/22 - 18:11:59 | 200 | 0s | 127.0.0.1 | HEAD "/" [GIN] 2025/05/22 - 18:11:59 | 200 | 2.0667ms | 127.0.0.1 | POST "/api/generate" [GIN] 2025/05/22 - 18:12:09 | 200 | 0s | 127.0.0.1 | HEAD "/" [GIN] 2025/05/22 - 18:12:09 | 200 | 0s | 127.0.0.1 | GET "/api/ps" [GIN] 2025/05/22 - 18:12:13 | 200 | 0s | 127.0.0.1 | HEAD "/" [GIN] 2025/05/22 - 18:12:13 | 200 | 0s | 127.0.0.1 | GET "/api/ps" [GIN] 2025/05/22 - 18:13:07 | 200 | 2m55s | 127.0.0.1 | POST "/api/chat" [GIN] 2025/05/22 - 18:13:34 | 200 | 0s | 127.0.0.1 | HEAD "/" [GIN] 2025/05/22 - 18:13:34 | 200 | 2.075ms | 127.0.0.1 | POST "/api/generate" [GIN] 2025/05/22 - 18:13:37 | 500 | 1m30s | 127.0.0.1 | POST "/api/chat" [GIN] 2025/05/22 - 18:13:37 | 200 | 0s | 127.0.0.1 | HEAD "/" [GIN] 2025/05/22 - 18:13:37 | 200 | 514.6µs | 127.0.0.1 | GET "/api/ps" time=2025-05-22T18:13:42.496-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0173303 runner.size="23.8 GiB" runner.vram="17.7 GiB" runner.parallel=1 runner.pid=33316 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66 time=2025-05-22T18:13:42.746-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2671461 runner.size="23.8 GiB" runner.vram="17.7 GiB" runner.parallel=1 runner.pid=33316 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66 time=2025-05-22T18:13:42.996-07:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5171999 runner.size="23.8 GiB" runner.vram="17.7 GiB" 
runner.parallel=1 runner.pid=33316 runner.model=C:\Users\admin\.ollama\models\blobs\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66 time=2025-05-22T18:14:51.845-07:00 level=INFO source=server.go:135 msg="system memory" total="63.3 GiB" free="45.6 GiB" free_swap="45.1 GiB" time=2025-05-22T18:14:51.847-07:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=59 layers.split="" memory.available="[21.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="23.8 GiB" memory.required.partial="21.7 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[21.7 GiB]" memory.weights.total="18.1 GiB" memory.weights.repeating="17.5 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="853.3 MiB" memory.graph.partial="853.3 MiB" projector.weights="1.2 GiB" projector.graph="1.6 GiB" time=2025-05-22T18:14:51.871-07:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66 --ctx-size 4096 --batch-size 512 --n-gpu-layers 59 --threads 8 --no-mmap --parallel 1 --port 58011" time=2025-05-22T18:14:51.874-07:00 level=INFO source=sched.go:472 msg="loaded runners" count=1 time=2025-05-22T18:14:51.874-07:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding" time=2025-05-22T18:14:51.876-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error" time=2025-05-22T18:14:51.907-07:00 level=INFO source=runner.go:836 msg="starting ollama engine" time=2025-05-22T18:14:51.908-07:00 level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:58011" time=2025-05-22T18:14:51.922-07:00 level=INFO source=ggml.go:73 msg="" architecture=qwen25vl file_type=Q4_K_M name="" description="" num_tensors=1290 num_key_values=36 load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll time=2025-05-22T18:14:52.033-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang) time=2025-05-22T18:14:52.108-07:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="16.0 GiB" time=2025-05-22T18:14:52.108-07:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="3.7 GiB" time=2025-05-22T18:14:52.128-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model" time=2025-05-22T18:14:54.998-07:00 level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="348.0 MiB" time=2025-05-22T18:14:54.998-07:00 level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="340.0 MiB" time=2025-05-22T18:14:55.135-07:00 level=INFO source=server.go:630 msg="llama runner started in 3.26 seconds" [GIN] 2025/05/22 - 18:15:53 
| 500 | 1m1s | 127.0.0.1 | POST "/api/chat" time=2025-05-22T18:16:03.318-07:00 level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=GPU-abd6bb9b-1deb-493f-9b96-28ba55537561 library=cuda total="24.0 GiB" available="2.3 GiB" time=2025-05-22T18:16:03.690-07:00 level=INFO source=server.go:135 msg="system memory" total="63.3 GiB" free="45.6 GiB" free_swap="45.1 GiB" time=2025-05-22T18:16:03.691-07:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=62 layers.split="" memory.available="[21.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="23.4 GiB" memory.required.partial="20.8 GiB" memory.required.kv="944.0 MiB" memory.required.allocations="[20.8 GiB]" memory.weights.total="16.0 GiB" memory.weights.repeating="13.4 GiB" memory.weights.nonrepeating="2.6 GiB" memory.graph.full="522.5 MiB" memory.graph.partial="1.6 GiB" projector.weights="806.2 MiB" projector.graph="1.0 GiB" time=2025-05-22T18:16:03.721-07:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 --ctx-size 4096 --batch-size 512 --n-gpu-layers 62 --threads 8 --no-mmap --parallel 1 --port 58124" time=2025-05-22T18:16:03.724-07:00 level=INFO source=sched.go:472 msg="loaded runners" count=1 time=2025-05-22T18:16:03.724-07:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding" time=2025-05-22T18:16:03.724-07:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error" time=2025-05-22T18:16:03.758-07:00 level=INFO source=runner.go:836 msg="starting ollama engine" ``` ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.7.0
GiteaMirror added the bug label 2026-04-29 04:16:23 -05:00

@konn-submarine-bu commented on GitHub (May 23, 2025):

lol I have the same problem as you, and I just reported it.


@rick-github commented on GitHub (May 23, 2025):

At different times, there are different amounts of free VRAM.

time=2025-05-22T17:59:04.662-07:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1
 layers.model=63 layers.offload=49 layers.split="" memory.available="[17.8 GiB]" memory.gpu_overhead="0 B"
 memory.required.full="23.4 GiB" memory.required.partial="17.8 GiB" memory.required.kv="944.0 MiB"
 memory.required.allocations="[17.8 GiB]" memory.weights.total="16.0 GiB" memory.weights.repeating="13.4 GiB"
 memory.weights.nonrepeating="2.6 GiB" memory.graph.full="522.5 MiB" memory.graph.partial="1.6 GiB"
 projector.weights="806.2 MiB" projector.graph="1.0 GiB"
time=2025-05-22T17:59:04.692-07:00 level=INFO source=server.go:431 msg="starting llama server"
 cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model
 C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87
 --ctx-size 4096 --batch-size 512 --n-gpu-layers 49 --threads 8 --no-mmap --parallel 1 --port 56475"

17.8G free, 49 of 63 layers offloaded to VRAM.

time=2025-05-22T18:16:03.691-07:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1
 layers.model=63 layers.offload=62 layers.split="" memory.available="[21.7 GiB]" memory.gpu_overhead="0 B"
 memory.required.full="23.4 GiB" memory.required.partial="20.8 GiB" memory.required.kv="944.0 MiB"
 memory.required.allocations="[20.8 GiB]" memory.weights.total="16.0 GiB" memory.weights.repeating="13.4 GiB"
 memory.weights.nonrepeating="2.6 GiB" memory.graph.full="522.5 MiB" memory.graph.partial="1.6 GiB"
 projector.weights="806.2 MiB" projector.graph="1.0 GiB"
time=2025-05-22T18:16:03.721-07:00 level=INFO source=server.go:431 msg="starting llama server"
 cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model
 C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87
 --ctx-size 4096 --batch-size 512 --n-gpu-layers 62 --threads 8 --no-mmap --parallel 1 --port 58124"

21.7G free, 62 of 63 layers offloaded to VRAM.

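To confirm how a load came out without reading the server log, you can query the `/api/ps` endpoint (the same data `ollama ps` shows) right after a request; it reports, per loaded model, both its total size and how much of it sits in VRAM. A minimal Python sketch, assuming the default localhost port:

```python
# Minimal sketch: ask the Ollama server which models are loaded and how much
# of each actually resides in VRAM. If size_vram < size, the scheduler placed
# some layers in system RAM because less VRAM was free at load time.
import json
from urllib.request import urlopen

with urlopen("http://localhost:11434/api/ps") as resp:
    ps = json.load(resp)

for m in ps.get("models", []):
    size, vram = m["size"], m["size_vram"]
    frac = vram / size if size else 0.0
    print(f"{m['name']}: {vram / 2**30:.1f} GiB of {size / 2**30:.1f} GiB in VRAM ({frac:.0%})")
```

For the two loads above, that fraction would differ simply because the second load saw 21.7 GiB free instead of 17.8 GiB.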

@PowZone commented on GitHub (Sep 9, 2025):

Same problem here with version 0.11.10 on Windows (RTX 4080).
Via the UI the app uses the GPU; via the API it runs on the CPU :/

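One way to narrow down a UI-vs-API difference like this is to check how much VRAM is actually free at the moment each path triggers the load: anything still held by another app, or by a previous model that has not been unloaded yet (see the repeated "gpu VRAM usage didn't recover within timeout" warnings in the log above), shrinks what Ollama can offload. A small sketch, assuming `nvidia-smi` is on PATH:

```python
# Sketch: print free VRAM per GPU right before issuing the API request.
# If this number is much lower than when the UI loads the model, partial
# CPU offload on the API path is the expected scheduler behavior, not a bug.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=memory.free", "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
).stdout
for idx, line in enumerate(out.strip().splitlines()):
    print(f"GPU {idx}: {int(line)} MiB free")
```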

@rodrigojuarez commented on GitHub (Oct 24, 2025):

I have the same issue with 0.12.6 on Windows (RTX 4090). Did you find anything, @PowZone?
