[GH-ISSUE #10654] Significantly reduced disk I/O when loading models with GPU in Docker on WSL #7004

Open
opened 2026-04-12 18:53:47 -05:00 by GiteaMirror · 3 comments

Originally created by @B-X-Y on GitHub (May 11, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10654

What is the issue?

When running Ollama with GPU support inside Docker on WSL, model loading is limited by very low disk I/O throughput, consistently around 20–30 MB/s. In comparison:

- The CPU version of Ollama in Docker on WSL achieves 200–300 MB/s.
- The GPU version running directly on the Windows host (outside Docker/WSL) also achieves 200–300 MB/s.

All tests were performed on a system with an SSD, so the observed I/O performance is likely below the hardware's capability. Even the 200–300 MB/s I/O rate in the CPU version and on the host system seems lower than expected, potentially indicating broader inefficiencies or bottlenecks in how model data is streamed from disk.

The performance degradation in the GPU + Docker + WSL setup is particularly severe and significantly increases model loading times.
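
For reference, the throughput numbers above can be checked with something like the following, run from the WSL distro while a model is loading. This is only a rough sketch: the container name `ollama`, the model tag, and the blob path (taken from the log below) are assumptions, and `iostat` comes from the `sysstat` package.

```shell
# In one terminal: watch per-device throughput while the load is in progress.
iostat -xm 2

# In another terminal: drop the page cache so reads are cold, then time a
# plain sequential read of the model blob (path taken from the log below)
# inside the container, as a baseline for what the disk can deliver.
echo 3 | sudo tee /proc/sys/vm/drop_caches
docker exec ollama dd \
  if=/root/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e \
  of=/dev/null bs=1M status=progress

# Then trigger an actual load and compare against the dd numbers (model tag assumed).
docker exec -it ollama ollama run qwen3:14b "hello"
```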

Relevant log output

2025-05-10 16:26:42.425 | 2025/05/10 21:26:42 routes.go:1233: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2025-05-10 16:26:42.872 | time=2025-05-10T21:26:42.872Z level=INFO source=images.go:463 msg="total blobs: 77"
2025-05-10 16:26:43.108 | time=2025-05-10T21:26:43.108Z level=INFO source=images.go:470 msg="total unused blobs removed: 0"
2025-05-10 16:26:43.310 | time=2025-05-10T21:26:43.309Z level=INFO source=routes.go:1300 msg="Listening on \[::]:11434 (version 0.6.8)"
2025-05-10 16:26:43.310 | time=2025-05-10T21:26:43.310Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
2025-05-10 16:26:43.526 | time=2025-05-10T21:26:43.525Z level=INFO source=types.go:130 msg="inference compute" id=GPU-07af221e-9f32-2d81-d076-8cf8e9abfb8a library=cuda variant=v12 compute=8.9 driver=12.7 name="NVIDIA GeForce RTX 4090 Laptop GPU" total="16.0 GiB" available="14.7 GiB"
2025-05-10 16:26:59.182 | time=2025-05-10T21:26:59.181Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:26:59.347 | time=2025-05-10T21:26:59.347Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:26:59.396 | time=2025-05-10T21:26:59.396Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:26:59.397 | time=2025-05-10T21:26:59.397Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block\_count default=0
2025-05-10 16:26:59.398 | time=2025-05-10T21:26:59.397Z level=INFO source=sched.go:754 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e gpu=GPU-07af221e-9f32-2d81-d076-8cf8e9abfb8a parallel=2 available=15778971648 required="11.2 GiB"
2025-05-10 16:26:59.501 | time=2025-05-10T21:26:59.501Z level=INFO source=server.go:106 msg="system memory" total="31.2 GiB" free="28.9 GiB" free\_swap="8.0 GiB"
2025-05-10 16:26:59.501 | time=2025-05-10T21:26:59.501Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block\_count default=0
2025-05-10 16:26:59.501 | time=2025-05-10T21:26:59.501Z level=INFO source=server.go:139 msg=offload library=cuda layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="\[14.7 GiB]" memory.gpu\_overhead="0 B" memory.required.full="11.2 GiB" memory.required.partial="11.2 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="\[11.2 GiB]" memory.weights.total="8.2 GiB" memory.weights.repeating="7.6 GiB" memory.weights.nonrepeating="608.6 MiB" memory.graph.full="1.0 GiB" memory.graph.partial="1.0 GiB"
2025-05-10 16:27:01.443 | llama\_model\_loader: loaded meta data with 27 key-value pairs and 443 tensors from /root/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e (version GGUF V3 (latest))
2025-05-10 16:27:01.443 | llama\_model\_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv   0:                       general.architecture str              = qwen3
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv   1:                               general.type str              = model
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv   2:                               general.name str              = Qwen3 14B
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv   3:                           general.basename str              = Qwen3
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv   4:                         general.size\_label str              = 14B
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv   5:                          qwen3.block\_count u32              = 40
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv   6:                       qwen3.context\_length u32              = 40960
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv   7:                     qwen3.embedding\_length u32              = 5120
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv   8:                  qwen3.feed\_forward\_length u32              = 17408
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv   9:                 qwen3.attention.head\_count u32              = 40
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv  10:              qwen3.attention.head\_count\_kv u32              = 8
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv  11:                       qwen3.rope.freq\_base f32              = 1000000.000000
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv  12:     qwen3.attention.layer\_norm\_rms\_epsilon f32              = 0.000001
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv  13:                 qwen3.attention.key\_length u32              = 128
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv  14:               qwen3.attention.value\_length u32              = 128
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
2025-05-10 16:27:01.443 | llama\_model\_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
2025-05-10 16:27:01.458 | llama\_model\_loader: - kv  17:                      tokenizer.ggml.tokens arr\[str,151936]  = \["!", """, "#", "\$", "%", "&", "'", ...
2025-05-10 16:27:01.463 | llama\_model\_loader: - kv  18:                  tokenizer.ggml.token\_type arr\[i32,151936]  = \[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2025-05-10 16:27:01.481 | llama\_model\_loader: - kv  19:                      tokenizer.ggml.merges arr\[str,151387]  = \["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
2025-05-10 16:27:01.481 | llama\_model\_loader: - kv  20:                tokenizer.ggml.eos\_token\_id u32              = 151645
2025-05-10 16:27:01.481 | llama\_model\_loader: - kv  21:            tokenizer.ggml.padding\_token\_id u32              = 151643
2025-05-10 16:27:01.481 | llama\_model\_loader: - kv  22:                tokenizer.ggml.bos\_token\_id u32              = 151643
2025-05-10 16:27:01.481 | llama\_model\_loader: - kv  23:               tokenizer.ggml.add\_bos\_token bool             = false
2025-05-10 16:27:01.481 | llama\_model\_loader: - kv  24:                    tokenizer.chat\_template str              = {%- if tools %}\n    {{- '<|im\_start|>...
2025-05-10 16:27:01.481 | llama\_model\_loader: - kv  25:               general.quantization\_version u32              = 2
2025-05-10 16:27:01.481 | llama\_model\_loader: - kv  26:                          general.file\_type u32              = 15
2025-05-10 16:27:01.481 | llama\_model\_loader: - type  f32:  161 tensors
2025-05-10 16:27:01.481 | llama\_model\_loader: - type  f16:   40 tensors
2025-05-10 16:27:01.481 | llama\_model\_loader: - type q4\_K:  221 tensors
2025-05-10 16:27:01.481 | llama\_model\_loader: - type q6\_K:   21 tensors
2025-05-10 16:27:01.481 | print\_info: file format = GGUF V3 (latest)
2025-05-10 16:27:01.481 | print\_info: file type   = Q4\_K - Medium
2025-05-10 16:27:01.481 | print\_info: file size   = 8.63 GiB (5.02 BPW)
2025-05-10 16:27:01.570 | load: special tokens cache size = 26
2025-05-10 16:27:01.603 | load: token to piece cache size = 0.9311 MB
2025-05-10 16:27:01.603 | print\_info: arch             = qwen3
2025-05-10 16:27:01.603 | print\_info: vocab\_only       = 1
2025-05-10 16:27:01.603 | print\_info: model type       = ?B
2025-05-10 16:27:01.603 | print\_info: model params     = 14.77 B
2025-05-10 16:27:01.603 | print\_info: general.name     = Qwen3 14B
2025-05-10 16:27:01.603 | print\_info: vocab type       = BPE
2025-05-10 16:27:01.603 | print\_info: n\_vocab          = 151936
2025-05-10 16:27:01.603 | print\_info: n\_merges         = 151387
2025-05-10 16:27:01.603 | print\_info: BOS token        = 151643 '<|endoftext|>'
2025-05-10 16:27:01.603 | print\_info: EOS token        = 151645 '<|im\_end|>'
2025-05-10 16:27:01.603 | print\_info: EOT token        = 151645 '<|im\_end|>'
2025-05-10 16:27:01.603 | print\_info: PAD token        = 151643 '<|endoftext|>'
2025-05-10 16:27:01.603 | print\_info: LF token         = 198 'Ċ'
2025-05-10 16:27:01.603 | print\_info: FIM PRE token    = 151659 '<|fim\_prefix|>'
2025-05-10 16:27:01.603 | print\_info: FIM SUF token    = 151661 '<|fim\_suffix|>'
2025-05-10 16:27:01.603 | print\_info: FIM MID token    = 151660 '<|fim\_middle|>'
2025-05-10 16:27:01.603 | print\_info: FIM PAD token    = 151662 '<|fim\_pad|>'
2025-05-10 16:27:01.603 | print\_info: FIM REP token    = 151663 '<|repo\_name|>'
2025-05-10 16:27:01.603 | print\_info: FIM SEP token    = 151664 '<|file\_sep|>'
2025-05-10 16:27:01.603 | print\_info: EOG token        = 151643 '<|endoftext|>'
2025-05-10 16:27:01.603 | print\_info: EOG token        = 151645 '<|im\_end|>'
2025-05-10 16:27:01.603 | print\_info: EOG token        = 151662 '<|fim\_pad|>'
2025-05-10 16:27:01.603 | print\_info: EOG token        = 151663 '<|repo\_name|>'
2025-05-10 16:27:01.603 | print\_info: EOG token        = 151664 '<|file\_sep|>'
2025-05-10 16:27:01.603 | print\_info: max token length = 256
2025-05-10 16:27:01.603 | llama\_model\_load: vocab only - skipping tensors
2025-05-10 16:27:01.606 | time=2025-05-10T21:27:01.606Z level=INFO source=server.go:410 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e --ctx-size 8192 --batch-size 512 --n-gpu-layers 41 --threads 16 --parallel 2 --port 33421"
2025-05-10 16:27:01.606 | time=2025-05-10T21:27:01.606Z level=INFO source=sched.go:452 msg="loaded runners" count=1
2025-05-10 16:27:01.606 | time=2025-05-10T21:27:01.606Z level=INFO source=server.go:589 msg="waiting for llama runner to start responding"
2025-05-10 16:27:01.607 | time=2025-05-10T21:27:01.606Z level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding"
2025-05-10 16:27:01.618 | time=2025-05-10T21:27:01.618Z level=INFO source=runner.go:853 msg="starting go runner"
2025-05-10 16:27:01.621 | load\_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
2025-05-10 16:27:01.721 | ggml\_cuda\_init: GGML\_CUDA\_FORCE\_MMQ:    no
2025-05-10 16:27:01.721 | ggml\_cuda\_init: GGML\_CUDA\_FORCE\_CUBLAS: no
2025-05-10 16:27:01.721 | ggml\_cuda\_init: found 1 CUDA devices:
2025-05-10 16:27:01.721 |   Device 0: NVIDIA GeForce RTX 4090 Laptop GPU, compute capability 8.9, VMM: yes
2025-05-10 16:27:01.721 | load\_backend: loaded CUDA backend from /usr/lib/ollama/cuda\_v12/libggml-cuda.so
2025-05-10 16:27:01.721 | time=2025-05-10T21:27:01.721Z level=INFO source=ggml.go:103 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX\_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE\_GRAPHS=1 CUDA.0.PEER\_MAX\_BATCH\_SIZE=128 compiler=cgo(gcc)
2025-05-10 16:27:01.736 | time=2025-05-10T21:27:01.736Z level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:33421"
2025-05-10 16:27:01.841 | llama\_model\_load\_from\_file\_impl: using device CUDA0 (NVIDIA GeForce RTX 4090 Laptop GPU) - 15048 MiB free
2025-05-10 16:27:01.857 | time=2025-05-10T21:27:01.857Z level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model"
2025-05-10 16:27:03.673 | llama\_model\_loader: loaded meta data with 27 key-value pairs and 443 tensors from /root/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e (version GGUF V3 (latest))
2025-05-10 16:27:03.674 | llama\_model\_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv   0:                       general.architecture str              = qwen3
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv   1:                               general.type str              = model
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv   2:                               general.name str              = Qwen3 14B
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv   3:                           general.basename str              = Qwen3
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv   4:                         general.size\_label str              = 14B
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv   5:                          qwen3.block\_count u32              = 40
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv   6:                       qwen3.context\_length u32              = 40960
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv   7:                     qwen3.embedding\_length u32              = 5120
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv   8:                  qwen3.feed\_forward\_length u32              = 17408
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv   9:                 qwen3.attention.head\_count u32              = 40
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv  10:              qwen3.attention.head\_count\_kv u32              = 8
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv  11:                       qwen3.rope.freq\_base f32              = 1000000.000000
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv  12:     qwen3.attention.layer\_norm\_rms\_epsilon f32              = 0.000001
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv  13:                 qwen3.attention.key\_length u32              = 128
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv  14:               qwen3.attention.value\_length u32              = 128
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
2025-05-10 16:27:03.674 | llama\_model\_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
2025-05-10 16:27:03.688 | llama\_model\_loader: - kv  17:                      tokenizer.ggml.tokens arr\[str,151936]  = \["!", """, "#", "\$", "%", "&", "'", ...
2025-05-10 16:27:03.693 | llama\_model\_loader: - kv  18:                  tokenizer.ggml.token\_type arr\[i32,151936]  = \[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2025-05-10 16:27:03.707 | llama\_model\_loader: - kv  19:                      tokenizer.ggml.merges arr\[str,151387]  = \["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
2025-05-10 16:27:03.707 | llama\_model\_loader: - kv  20:                tokenizer.ggml.eos\_token\_id u32              = 151645
2025-05-10 16:27:03.707 | llama\_model\_loader: - kv  21:            tokenizer.ggml.padding\_token\_id u32              = 151643
2025-05-10 16:27:03.707 | llama\_model\_loader: - kv  22:                tokenizer.ggml.bos\_token\_id u32              = 151643
2025-05-10 16:27:03.707 | llama\_model\_loader: - kv  23:               tokenizer.ggml.add\_bos\_token bool             = false
2025-05-10 16:27:03.707 | llama\_model\_loader: - kv  24:                    tokenizer.chat\_template str              = {%- if tools %}\n    {{- '<|im\_start|>...
2025-05-10 16:27:03.707 | llama\_model\_loader: - kv  25:               general.quantization\_version u32              = 2
2025-05-10 16:27:03.707 | llama\_model\_loader: - kv  26:                          general.file\_type u32              = 15
2025-05-10 16:27:03.707 | llama\_model\_loader: - type  f32:  161 tensors
2025-05-10 16:27:03.707 | llama\_model\_loader: - type  f16:   40 tensors
2025-05-10 16:27:03.707 | llama\_model\_loader: - type q4\_K:  221 tensors
2025-05-10 16:27:03.707 | llama\_model\_loader: - type q6\_K:   21 tensors
2025-05-10 16:27:03.707 | print\_info: file format = GGUF V3 (latest)
2025-05-10 16:27:03.707 | print\_info: file type   = Q4\_K - Medium
2025-05-10 16:27:03.707 | print\_info: file size   = 8.63 GiB (5.02 BPW)
2025-05-10 16:27:03.793 | load: special tokens cache size = 26
2025-05-10 16:27:03.820 | load: token to piece cache size = 0.9311 MB
2025-05-10 16:27:03.820 | print\_info: arch             = qwen3
2025-05-10 16:27:03.820 | print\_info: vocab\_only       = 0
2025-05-10 16:27:03.820 | print\_info: n\_ctx\_train      = 40960
2025-05-10 16:27:03.820 | print\_info: n\_embd           = 5120
2025-05-10 16:27:03.820 | print\_info: n\_layer          = 40
2025-05-10 16:27:03.820 | print\_info: n\_head           = 40
2025-05-10 16:27:03.820 | print\_info: n\_head\_kv        = 8
2025-05-10 16:27:03.820 | print\_info: n\_rot            = 128
2025-05-10 16:27:03.820 | print\_info: n\_swa            = 0
2025-05-10 16:27:03.820 | print\_info: n\_swa\_pattern    = 1
2025-05-10 16:27:03.820 | print\_info: n\_embd\_head\_k    = 128
2025-05-10 16:27:03.820 | print\_info: n\_embd\_head\_v    = 128
2025-05-10 16:27:03.820 | print\_info: n\_gqa            = 5
2025-05-10 16:27:03.820 | print\_info: n\_embd\_k\_gqa     = 1024
2025-05-10 16:27:03.820 | print\_info: n\_embd\_v\_gqa     = 1024
2025-05-10 16:27:03.820 | print\_info: f\_norm\_eps       = 0.0e+00
2025-05-10 16:27:03.820 | print\_info: f\_norm\_rms\_eps   = 1.0e-06
2025-05-10 16:27:03.820 | print\_info: f\_clamp\_kqv      = 0.0e+00
2025-05-10 16:27:03.820 | print\_info: f\_max\_alibi\_bias = 0.0e+00
2025-05-10 16:27:03.820 | print\_info: f\_logit\_scale    = 0.0e+00
2025-05-10 16:27:03.820 | print\_info: f\_attn\_scale     = 0.0e+00
2025-05-10 16:27:03.820 | print\_info: n\_ff             = 17408
2025-05-10 16:27:03.820 | print\_info: n\_expert         = 0
2025-05-10 16:27:03.820 | print\_info: n\_expert\_used    = 0
2025-05-10 16:27:03.820 | print\_info: causal attn      = 1
2025-05-10 16:27:03.820 | print\_info: pooling type     = 0
2025-05-10 16:27:03.820 | print\_info: rope type        = 2
2025-05-10 16:27:03.820 | print\_info: rope scaling     = linear
2025-05-10 16:27:03.820 | print\_info: freq\_base\_train  = 1000000.0
2025-05-10 16:27:03.820 | print\_info: freq\_scale\_train = 1
2025-05-10 16:27:03.820 | print\_info: n\_ctx\_orig\_yarn  = 40960
2025-05-10 16:27:03.820 | print\_info: rope\_finetuned   = unknown
2025-05-10 16:27:03.820 | print\_info: ssm\_d\_conv       = 0
2025-05-10 16:27:03.820 | print\_info: ssm\_d\_inner      = 0
2025-05-10 16:27:03.820 | print\_info: ssm\_d\_state      = 0
2025-05-10 16:27:03.820 | print\_info: ssm\_dt\_rank      = 0
2025-05-10 16:27:03.820 | print\_info: ssm\_dt\_b\_c\_rms   = 0
2025-05-10 16:27:03.820 | print\_info: model type       = 14B
2025-05-10 16:27:03.820 | print\_info: model params     = 14.77 B
2025-05-10 16:27:03.820 | print\_info: general.name     = Qwen3 14B
2025-05-10 16:27:03.820 | print\_info: vocab type       = BPE
2025-05-10 16:27:03.820 | print\_info: n\_vocab          = 151936
2025-05-10 16:27:03.820 | print\_info: n\_merges         = 151387
2025-05-10 16:27:03.820 | print\_info: BOS token        = 151643 '<|endoftext|>'
2025-05-10 16:27:03.820 | print\_info: EOS token        = 151645 '<|im\_end|>'
2025-05-10 16:27:03.820 | print\_info: EOT token        = 151645 '<|im\_end|>'
2025-05-10 16:27:03.820 | print\_info: PAD token        = 151643 '<|endoftext|>'
2025-05-10 16:27:03.820 | print\_info: LF token         = 198 'Ċ'
2025-05-10 16:27:03.820 | print\_info: FIM PRE token    = 151659 '<|fim\_prefix|>'
2025-05-10 16:27:03.820 | print\_info: FIM SUF token    = 151661 '<|fim\_suffix|>'
2025-05-10 16:27:03.820 | print\_info: FIM MID token    = 151660 '<|fim\_middle|>'
2025-05-10 16:27:03.820 | print\_info: FIM PAD token    = 151662 '<|fim\_pad|>'
2025-05-10 16:27:03.820 | print\_info: FIM REP token    = 151663 '<|repo\_name|>'
2025-05-10 16:27:03.820 | print\_info: FIM SEP token    = 151664 '<|file\_sep|>'
2025-05-10 16:27:03.820 | print\_info: EOG token        = 151643 '<|endoftext|>'
2025-05-10 16:27:03.820 | print\_info: EOG token        = 151645 '<|im\_end|>'
2025-05-10 16:27:03.820 | print\_info: EOG token        = 151662 '<|fim\_pad|>'
2025-05-10 16:27:03.820 | print\_info: EOG token        = 151663 '<|repo\_name|>'
2025-05-10 16:27:03.820 | print\_info: EOG token        = 151664 '<|file\_sep|>'
2025-05-10 16:27:03.820 | print\_info: max token length = 256
2025-05-10 16:27:03.820 | load\_tensors: loading model tensors, this can take a while... (mmap = true)
2025-05-10 16:31:38.758 | load\_tensors: offloading 40 repeating layers to GPU
2025-05-10 16:31:38.758 | load\_tensors: offloading output layer to GPU
2025-05-10 16:31:38.758 | load\_tensors: offloaded 41/41 layers to GPU
2025-05-10 16:31:38.758 | load\_tensors:        CUDA0 model buffer size =  8423.47 MiB
2025-05-10 16:31:38.758 | load\_tensors:   CPU\_Mapped model buffer size =   417.30 MiB
2025-05-10 16:31:40.781 | llama\_context: constructing llama\_context
2025-05-10 16:31:40.781 | llama\_context: n\_seq\_max     = 2
2025-05-10 16:31:40.781 | llama\_context: n\_ctx         = 8192
2025-05-10 16:31:40.781 | llama\_context: n\_ctx\_per\_seq = 4096
2025-05-10 16:31:40.781 | llama\_context: n\_batch       = 1024
2025-05-10 16:31:40.781 | llama\_context: n\_ubatch      = 512
2025-05-10 16:31:40.781 | llama\_context: causal\_attn   = 1
2025-05-10 16:31:40.781 | llama\_context: flash\_attn    = 0
2025-05-10 16:31:40.781 | llama\_context: freq\_base     = 1000000.0
2025-05-10 16:31:40.781 | llama\_context: freq\_scale    = 1
2025-05-10 16:31:40.781 | llama\_context: n\_ctx\_per\_seq (4096) < n\_ctx\_train (40960) -- the full capacity of the model will not be utilized
2025-05-10 16:31:40.782 | llama\_context:  CUDA\_Host  output buffer size =     1.20 MiB
2025-05-10 16:31:40.784 | init: kv\_size = 8192, offload = 1, type\_k = 'f16', type\_v = 'f16', n\_layer = 40, can\_shift = 1
2025-05-10 16:31:40.817 | init:      CUDA0 KV buffer size =  1280.00 MiB
2025-05-10 16:31:40.817 | llama\_context: KV self size  = 1280.00 MiB, K (f16):  640.00 MiB, V (f16):  640.00 MiB
2025-05-10 16:31:40.835 | llama\_context:      CUDA0 compute buffer size =   696.00 MiB
2025-05-10 16:31:40.835 | llama\_context:  CUDA\_Host compute buffer size =    26.01 MiB
2025-05-10 16:31:40.835 | llama\_context: graph nodes  = 1526
2025-05-10 16:31:40.835 | llama\_context: graph splits = 2
2025-05-10 16:31:40.851 | time=2025-05-10T21:31:40.851Z level=INFO source=server.go:628 msg="llama runner started in 279.25 seconds"
2025-05-10 16:31:40.876 | time=2025-05-10T21:31:40.876Z level=WARN source=runner.go:131 msg="truncating input prompt" limit=4096 prompt=11596 keep=4 new=4096
2025-05-10 16:32:42.139 | \[GIN] 2025/05/10 - 21:32:42 | 200 |         5m43s |      172.17.0.1 | POST     "/api/generate"
2025-05-10 16:32:42.468 | time=2025-05-10T21:32:42.468Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:32:42.468 | time=2025-05-10T21:32:42.468Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:32:42.512 | time=2025-05-10T21:32:42.511Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:32:42.512 | time=2025-05-10T21:32:42.512Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:32:42.512 | \[GIN] 2025/05/10 - 21:32:42 | 200 |  364.051939ms |      172.17.0.1 | POST     "/api/show"
2025-05-10 16:32:42.514 | \[GIN] 2025/05/10 - 21:32:42 | 200 |  366.060431ms |      172.17.0.1 | POST     "/api/show"
2025-05-10 16:32:42.776 | time=2025-05-10T21:32:42.776Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:32:42.822 | time=2025-05-10T21:32:42.822Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:32:42.823 | \[GIN] 2025/05/10 - 21:32:42 | 200 |  309.032901ms |      172.17.0.1 | POST     "/api/show"
2025-05-10 16:32:43.023 | time=2025-05-10T21:32:43.023Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:32:43.056 | time=2025-05-10T21:32:43.056Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:32:43.057 | \[GIN] 2025/05/10 - 21:32:43 | 200 |  231.809042ms |      172.17.0.1 | POST     "/api/show"
2025-05-10 16:32:43.366 | time=2025-05-10T21:32:43.366Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:32:43.408 | time=2025-05-10T21:32:43.408Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
2025-05-10 16:32:43.408 | \[GIN] 2025/05/10 - 21:32:43 | 200 |  349.621308ms |      172.17.0.1 | POST     "/api/show"
2025-05-10 16:37:47.337 | time=2025-05-10T21:37:47.337Z level=WARN source=sched.go:655 msg="gpu VRAM usage didn't recover within timeout" seconds=5.211314928 model=/root/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e
2025-05-10 16:37:47.588 | time=2025-05-10T21:37:47.588Z level=WARN source=sched.go:655 msg="gpu VRAM usage didn't recover within timeout" seconds=5.462207696 model=/root/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e
2025-05-10 16:37:47.838 | time=2025-05-10T21:37:47.838Z level=WARN source=sched.go:655 msg="gpu VRAM usage didn't recover within timeout" seconds=5.711917423 model=/root/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e

OS

WSL2

GPU

Nvidia

CPU

Intel

Ollama version

0.6.8

GiteaMirror added the bug label 2026-04-12 18:53:47 -05:00

@rick-github commented on GitHub (May 11, 2025):

#6006


@B-X-Y commented on GitHub (May 11, 2025):

If the issue is attributed solely to WSL, it remains unclear why there is a significant disparity in disk throughput between the CPU and GPU versions of Ollama running in Docker within the same WSL environment.

To clarify:
- CPU version (Docker on WSL): ~200–300 MB/s
- GPU version (Docker on WSL): ~20–30 MB/s

Both configurations run under identical storage and system conditions, accessing an SSD via Docker in WSL. The discrepancy suggests the GPU version may be engaging a different I/O pathway, or that interactions with CUDA, device passthrough, or related mechanisms are introducing additional overhead that does not occur in the CPU version.

Additional detail on differences in I/O handling between the CPU and GPU builds would help determine whether this behavior is inherent to WSL or a result of how the GPU variant is structured or executed.
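
One way to narrow this down (a rough sketch, assuming the default OLLAMA_MODELS location and a container named `ollama`) is to check which filesystem actually backs the model store: a bind mount that ultimately resolves to a Windows path goes through the 9p/drvfs bridge, while a named volume lives on the ext4 VHD that backs the WSL distro and Docker's data root.

```shell
# What backs the model store inside the container?
docker exec ollama df -T /root/.ollama/models
docker inspect ollama --format '{{json .Mounts}}'

# From the WSL distro: paths under /mnt/* are bridged via 9p/drvfs and are
# typically far slower than the native ext4 VHD.
mount | grep -E '9p|drvfs'
```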


@mlaihk commented on GitHub (Jun 1, 2025):

I noticed the same. I moved all my models into a Docker persistent volume and pointed the Ollama container at that volume instead of bind-mounting the host directory. Model loading speeds up tremendously.
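
For anyone hitting the same thing, a minimal sketch of that setup (image name, port, and the old bind-mount path are assumptions; the `-v ollama:/root/.ollama` pattern matches Ollama's documented Docker run command):

```shell
# Use a named volume (stored under Docker's data root on the WSL ext4 VHD)
# instead of a bind mount to a host/Windows directory.
docker volume create ollama
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama

# One-time copy of previously pulled models from the old bind-mounted
# directory into the new volume (adjust the source path).
docker cp /path/to/old/.ollama/models ollama:/root/.ollama/
```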

Reference: github-starred/ollama#7004