requires much more VRAM to run same model after update #7117

Open
opened 2025-11-12 13:56:51 -06:00 by GiteaMirror · 14 comments
Owner

Originally created by @konn-submarine-bu on GitHub (May 23, 2025).

What is the issue?

I had been running Qwen3:32b on an A6000 GPU (48 GB VRAM) for several weeks, and it worked smoothly. From my observation, it normally occupied about 35 GB of VRAM while generating responses.
However, after I updated Ollama to version 0.7.0, it requires much more VRAM to run the same model and has to spill into system RAM, which makes generation much slower. I tried uninstalling Ollama and reinstalling version 0.6.7, but the problem persists.
After I run `ollama run qwen3:32b`, I can see 35 GB of VRAM occupied.

Image
But `ollama ps` reports the size as 45 GB.

Image
When I chat with this model through Ragflow, which retrieves long context (1000+ tokens) to generate answers, the reported VRAM requirement jumps sharply to 71 GB, while the system shows only 30 GB of VRAM actually in use.

Image
Before the update, the RAG system worked well and never required more VRAM than I have, no matter how long the context was.
I don't know whether there is a bug in how Ollama calculates VRAM usage, or whether this is a new strategy for serving LLMs. But why can't I get the old behavior back by installing the previous version? I would appreciate any help analyzing this problem. Here are some logs for reference.
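(Editor's note, not part of the original report: the 20.0 GiB `memory.required.kv` figure in the offload log below can be reproduced from the model metadata in the same logs, assuming the default f16 KV cache and the GQA widths reported by `print_info`. The 81920-token context is `num_ctx` 8192 multiplied by the runner's `--parallel 10`.)

```python
# Sanity check on the KV-cache size reported in the log (a sketch, not Ollama's code).
# Values taken from the log: 64 layers, n_embd_k_gqa = n_embd_v_gqa = 1024,
# f16 cache (2 bytes/element), --ctx-size 81920 = 8192 tokens x 10 parallel slots.
n_layer = 64
kv_width = 1024 + 1024      # K width + V width per layer (GQA)
bytes_per_elem = 2          # f16
ctx = 8192 * 10             # num_ctx x parallel slots

kv_bytes = ctx * n_layer * kv_width * bytes_per_elem
print(f"{kv_bytes / 2**30:.1f} GiB")  # 20.0 GiB, matching memory.required.kv
```

This suggests the large allocation is dominated by the KV cache for ten parallel 8K-token slots, on top of the 18.8 GiB of weights.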

Relevant log output

time=2025-05-22T10:12:15.917+08:00 level=INFO source=server.go:624 msg="llama runner started in 14.89 seconds"
[GIN] 2025/05/22 - 10:12:15 | 200 |   16.0031865s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/05/22 - 10:12:23 | 200 |        54.4µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/22 - 10:12:23 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
time=2025-05-22T10:14:28.324+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-22T10:14:33.347+08:00 level=WARN source=sched.go:649 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0178211 model=C:\Users\dru1szh\.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312
time=2025-05-22T10:14:33.403+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-22T10:14:33.445+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-22T10:14:33.458+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-22T10:14:33.460+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-22T10:14:33.474+08:00 level=INFO source=server.go:105 msg="system memory" total="127.7 GiB" free="78.0 GiB" free_swap="91.8 GiB"
time=2025-05-22T10:14:33.475+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-22T10:14:33.475+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=31 layers.split="" memory.available="[46.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="66.6 GiB" memory.required.partial="46.0 GiB" memory.required.kv="20.0 GiB" memory.required.allocations="[46.0 GiB]" memory.weights.total="18.4 GiB" memory.weights.repeating="17.8 GiB" memory.weights.nonrepeating="608.6 MiB" memory.graph.full="26.7 GiB" memory.graph.partial="26.7 GiB"
time=2025-05-22T10:14:33.597+08:00 level=WARN source=sched.go:649 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2676747 model=C:\Users\dru1szh\.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312
llama_model_loader: loaded meta data with 27 key-value pairs and 707 tensors from C:\Users\dru1szh\.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 32B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 32B
llama_model_loader: - kv   5:                          qwen3.block_count u32              = 64
llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 25600
llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 64
llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - kv  26:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  257 tensors
llama_model_loader: - type  f16:   64 tensors
llama_model_loader: - type q4_K:  353 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 18.81 GiB (4.93 BPW) 
load: special tokens cache size = 26
time=2025-05-22T10:14:33.846+08:00 level=WARN source=sched.go:649 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5173513 model=C:\Users\dru1szh\.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 32.76 B
print_info: general.name     = Qwen3 32B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-05-22T10:14:33.943+08:00 level=INFO source=server.go:409 msg="starting llama server" cmd="C:\\Users\\SHV4SZH\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\dru1szh\\.ollama\\models\\blobs\\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312 --ctx-size 81920 --batch-size 512 --n-gpu-layers 31 --threads 40 --no-mmap --parallel 10 --port 59842"
time=2025-05-22T10:14:34.550+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-05-22T10:14:34.550+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-05-22T10:14:34.552+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-05-22T10:14:34.808+08:00 level=INFO source=runner.go:853 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\SHV4SZH\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-skylakex.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA RTX A6000, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\SHV4SZH\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.6.7 and 0.7.0

GiteaMirror added the bug label 2025-11-12 13:56:51 -06:00
Author
Owner

@flappy5812 commented on GitHub (May 23, 2025):

Same GPU, same bug.

Author
Owner

@rick-github commented on GitHub (May 23, 2025):

#10773

Author
Owner

@konn-submarine-bu commented on GitHub (May 23, 2025):

So what should I do? Download the latest version?

#10773

Author
Owner

@konn-submarine-bu commented on GitHub (May 23, 2025):

Well, I just updated to v0.7.1-rc2, and this bug is still not fixed.

Author
Owner

@flappy5812 commented on GitHub (May 23, 2025):

Well, I just updated to v0.7.1-rc2, and this bug is still not fixed.

0.7.1-rc2 works for me. Maybe you should clean up the previous libraries.

Author
Owner

@rick-github commented on GitHub (May 23, 2025):

Well, I just updated to v0.7.1-rc2, and this bug is still not fixed.

Logs.

Author
Owner

@konn-submarine-bu commented on GitHub (May 23, 2025):

time=2025-05-23T17:27:10.247+08:00 level=INFO source=server.go:630 msg="llama runner started in 15.65 seconds"
[GIN] 2025/05/23 - 17:27:10 | 200 | 16.8512835s | 127.0.0.1 | POST "/api/generate"
time=2025-05-23T17:28:10.030+08:00 level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0137743 runner.size="42.6 GiB" runner.vram="42.6 GiB" runner.parallel=10 runner.pid=18228 runner.model=C:\Users\dru1szh.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312
time=2025-05-23T17:28:10.145+08:00 level=INFO source=server.go:135 msg="system memory" total="127.7 GiB" free="81.8 GiB" free_swap="95.0 GiB"
time=2025-05-23T17:28:10.146+08:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=31 layers.split="" memory.available="[46.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="66.1 GiB" memory.required.partial="46.0 GiB" memory.required.kv="20.0 GiB" memory.required.allocations="[46.0 GiB]" memory.weights.total="18.4 GiB" memory.weights.repeating="17.8 GiB" memory.weights.nonrepeating="608.6 MiB" memory.graph.full="26.7 GiB" memory.graph.partial="26.7 GiB"
llama_model_loader: loaded meta data with 27 key-value pairs and 707 tensors from C:\Users\dru1szh.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen3
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen3 32B
llama_model_loader: - kv 3: general.basename str = Qwen3
llama_model_loader: - kv 4: general.size_label str = 32B
llama_model_loader: - kv 5: qwen3.block_count u32 = 64
llama_model_loader: - kv 6: qwen3.context_length u32 = 40960
llama_model_loader: - kv 7: qwen3.embedding_length u32 = 5120
llama_model_loader: - kv 8: qwen3.feed_forward_length u32 = 25600
llama_model_loader: - kv 9: qwen3.attention.head_count u32 = 64
llama_model_loader: - kv 10: qwen3.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: qwen3.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: qwen3.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 13: qwen3.attention.key_length u32 = 128
llama_model_loader: - kv 14: qwen3.attention.value_length u32 = 128
llama_model_loader: - kv 15: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 16: tokenizer.ggml.pre str = qwen2
time=2025-05-23T17:28:10.280+08:00 level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2636014 runner.size="42.6 GiB" runner.vram="42.6 GiB" runner.parallel=10 runner.pid=18228 runner.model=C:\Users\dru1szh.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312
llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,151936] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - kv 26: general.file_type u32 = 15
llama_model_loader: - type f32: 257 tensors
llama_model_loader: - type f16: 64 tensors
llama_model_loader: - type q4_K: 353 tensors
llama_model_loader: - type q6_K: 33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 18.81 GiB (4.93 BPW)
load: special tokens cache size = 26
time=2025-05-23T17:28:10.530+08:00 level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5137655 runner.size="42.6 GiB" runner.vram="42.6 GiB" runner.parallel=10 runner.pid=18228 runner.model=C:\Users\dru1szh.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen3
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 32.76 B
print_info: general.name = Qwen3 32B
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-05-23T17:28:10.564+08:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\Users\SHV4SZH\AppData\Local\Programs\Ollama\ollama.exe runner --model C:\Users\dru1szh\.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312 --ctx-size 81920 --batch-size 512 --n-gpu-layers 31 --threads 40 --no-mmap --parallel 10 --port 64675"
time=2025-05-23T17:28:11.240+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-05-23T17:28:11.240+08:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-05-23T17:28:11.242+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-05-23T17:28:11.598+08:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\SHV4SZH\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-skylakex.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA RTX A6000, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\SHV4SZH\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-05-23T17:28:11.935+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-05-23T17:28:11.940+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:64675"
time=2025-05-23T17:28:12.002+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA RTX A6000) - 47545 MiB free
llama_model_loader: loaded meta data with 27 key-value pairs and 707 tensors from C:\Users\dru1szh.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen3
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen3 32B
llama_model_loader: - kv 3: general.basename str = Qwen3
llama_model_loader: - kv 4: general.size_label str = 32B
llama_model_loader: - kv 5: qwen3.block_count u32 = 64
llama_model_loader: - kv 6: qwen3.context_length u32 = 40960
llama_model_loader: - kv 7: qwen3.embedding_length u32 = 5120
llama_model_loader: - kv 8: qwen3.feed_forward_length u32 = 25600
llama_model_loader: - kv 9: qwen3.attention.head_count u32 = 64
llama_model_loader: - kv 10: qwen3.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: qwen3.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: qwen3.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 13: qwen3.attention.key_length u32 = 128
llama_model_loader: - kv 14: qwen3.attention.value_length u32 = 128
llama_model_loader: - kv 15: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 16: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,151936] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - kv 26: general.file_type u32 = 15
llama_model_loader: - type f32: 257 tensors
llama_model_loader: - type f16: 64 tensors
llama_model_loader: - type q4_K: 353 tensors
llama_model_loader: - type q6_K: 33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 18.81 GiB (4.93 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen3
print_info: vocab_only = 0
print_info: n_ctx_train = 40960
print_info: n_embd = 5120
print_info: n_layer = 64
print_info: n_head = 64
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 25600
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 40960
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 32B
print_info: model params = 32.76 B
print_info: general.name = Qwen3 32B
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 31 repeating layers to GPU
load_tensors: offloaded 31/65 layers to GPU
load_tensors: CUDA_Host model buffer size = 9994.29 MiB
load_tensors: CUDA0 model buffer size = 8848.12 MiB
load_tensors: CPU model buffer size = 417.30 MiB
llama_context: constructing llama_context
llama_context: n_seq_max = 10
llama_context: n_ctx = 81920
llama_context: n_ctx_per_seq = 8192
llama_context: n_batch = 5120
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 1000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (8192) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 5.99 MiB
llama_kv_cache_unified: kv_size = 81920, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1, padding = 32
llama_kv_cache_unified: CUDA0 KV buffer size = 9920.00 MiB
llama_kv_cache_unified: CPU KV buffer size = 10560.00 MiB
llama_kv_cache_unified: KV self size = 20480.00 MiB, K (f16): 10240.00 MiB, V (f16): 10240.00 MiB
llama_context: CUDA0 compute buffer size = 10916.00 MiB
llama_context: CUDA_Host compute buffer size = 170.01 MiB
llama_context: graph nodes = 2438
llama_context: graph splits = 433 (with bs=512), 69 (with bs=1)
time=2025-05-23T17:28:38.262+08:00 level=INFO source=server.go:630 msg="llama runner started in 27.02 seconds"
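As a sanity check on the log above, the "KV self size = 20480.00 MiB" line follows directly from the reported cache parameters (a rough back-of-the-envelope sketch, not Ollama's exact accounting):

```python
# Back-of-the-envelope check of "KV self size = 20480.00 MiB" using values
# printed in the log: n_layer=64, n_embd_k_gqa = n_embd_v_gqa = 1024,
# f16 cache (2 bytes/element), kv_size = 81920 tokens (num_ctx 8192 x 10 parallel slots).
def kv_cache_mib(num_ctx, num_parallel, n_layer=64, kv_dim=1024, bytes_per_elem=2):
    per_token = 2 * n_layer * kv_dim * bytes_per_elem  # K + V, all layers, per token
    return per_token * num_ctx * num_parallel / (1024 ** 2)

print(kv_cache_mib(8192, 10))  # -> 20480.0, matching the log
```

This is why a modest per-request context still reserves a large cache: the total kv_size is the per-sequence context multiplied by the number of parallel slots.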

@davidtheITguy commented on GitHub (May 24, 2025):

I'm seeing the exact same issue on 0.7.x, running llama3.1:8b on Windows with an NVIDIA RTX 2000 Ada.


@sempervictus commented on GitHub (May 25, 2025):

@rick-github - would it be possible to give users a detailed explainer, if not outright tuning knobs, for these calculations? The logs show what the scheduler wants but not really the "why".


@davidtheITguy commented on GitHub (May 25, 2025):

Additional information only: reverting to 0.6.8 immediately fixed this on my NVIDIA RTX 2000 Ada with 8 GB VRAM and default Ollama server settings.


@lowlyocean commented on GitHub (May 25, 2025):

The new ollama engine is awful. Q2 models that used to fit entirely in VRAM now end up loaded 100% on CPU. We can't even use Q2 quants anymore.


@sempervictus commented on GitHub (May 26, 2025):

@lowlyocean - AFAIK the new engine can be disabled with an environment variable at load time. That said, a delta this large probably merits a comprehensive analysis across hardware (consumer and professional).


@alexveli1 commented on GitHub (Jun 4, 2025):

Faced the same issue on Ollama 0.9.0; falling back to 0.6.8 didn't help.
On a machine with an RTX 4090 I ran gemma3:12b-it-q8_0 with num_ctx = 40000 and about 19 GB of VRAM occupied. On 2xA100, the same config occupied 65 GB. Ollama runs on bare metal on the RTX 4090 and in Docker on the 2xA100; the OS is Ubuntu 24.04 in both cases.
I decided to revise the environment variables for the container, starting with OLLAMA_NUM_PARALLEL. Changing my value from 10 to 2 dropped VRAM usage for gemma3:12b-it-q8_0 from 65 GB to 24 GB at a 40K context. My conclusion for my case: processing more parallel requests requires more reserved VRAM (what a discovery :)). On my RTX 4090 I use a value of 1 (since there is only one user) and on the 2xA100 a value of 10, since I want to serve several users.
My stats for OLLAMA_NUM_PARALLEL vs VRAM for gemma3:12b-it-q8_0 with a 40K context window on 2xA100:

  • 0 = 65 GB
  • 1 = 19 GB
  • 2 = 24 GB
  • 5 = 37 GB
  • 10 = 65 GB

Nevertheless, the difference in reported VRAM usage between nvidia-smi and ollama ps is still present (20 GB vs 24 GB for a value of 2 and 40K num_ctx).
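The scaling trend above is consistent with the KV cache growing linearly in OLLAMA_NUM_PARALLEL. A rough sketch, using the qwen3:32b dimensions from the log earlier in this thread (gemma3's internals differ, so take this as the trend rather than the absolute numbers):

```python
# Estimated f16 KV-cache size as OLLAMA_NUM_PARALLEL grows, holding the
# per-request context at 8192 tokens. Dimensions are the qwen3:32b values
# from the log (64 layers, 1024-dim K/V per GQA group), used for illustration.
def kv_gib(num_ctx, num_parallel, n_layer=64, kv_dim=1024):
    per_token = 2 * n_layer * kv_dim * 2  # K + V, 2 bytes each (f16)
    return per_token * num_ctx * num_parallel / (1024 ** 3)

for p in (1, 2, 5, 10):
    print(p, kv_gib(8192, p))  # 2.0, 4.0, 10.0, 20.0 GiB -> linear in parallel slots
```

The cache is only part of total VRAM (weights and compute buffers are roughly fixed), which is why the measured totals grow linearly but with a large constant offset.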

@davidtheITguy commented on GitHub (Jun 4, 2025):

Two things I've seen: 1) reverting to an older Ollama version may work, but take care to clean out everything, including old and possibly cached CUDA libs; 2) the recent engine changes appear to be mitigated by reducing n_ctx: with a smaller n_ctx, VRAM requirements drop and models fit. Not saying I like it, but that's what worked for me.

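For the n_ctx mitigation, the context window can also be lowered per request rather than server-wide, via the `num_ctx` option of Ollama's generate API. A minimal sketch of such a request body (nothing here contacts a server; the prompt is a placeholder):

```python
import json

# Hypothetical /api/generate payload capping the per-request context window;
# a smaller num_ctx means a smaller KV cache and a smaller VRAM reservation.
payload = {
    "model": "qwen3:32b",
    "prompt": "Summarize the retrieved passages.",
    "options": {"num_ctx": 8192},
}
body = json.dumps(payload)
print(body)
```

Sending `body` as the POST body to `/api/generate` keeps the server default untouched while limiting the cache for that request.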
Reference: github-starred/ollama-ollama#7117