[GH-ISSUE #10968] Qwen3 with context increased to 64k on four GPUs: why is the GPU share only 67%, with 33% on the CPU? #69286

Closed
opened 2026-05-04 17:39:51 -05:00 by GiteaMirror · 16 comments

Originally created by @Jin8999 on GitHub (Jun 4, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10968

What is the issue?

Or, to put it another way, why does the 35G model pulled from the official Ollama website grow to 148G after adding context?

Modified Modelfile:

PARAMETER num_ctx 65536

ollama ps:

![Image](https://github.com/user-attachments/assets/9faf5798-95ee-4563-bce9-06647b3c9d02)

nvidia-smi:

![Image](https://github.com/user-attachments/assets/983a2768-7723-4c1d-84af-c66f0ea076c0)

Relevant log output

llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 32B
llama_model_loader: - kv   5:                          qwen3.block_count u32              = 64
llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 25600
llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 64
llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - kv  26:                          general.file_type u32              = 7
llama_model_loader: - type  f32:  257 tensors
llama_model_loader: - type  f16:   64 tensors
llama_model_loader: - type q8_0:  386 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 32.71 GiB (8.58 BPW) 
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 32.76 B
print_info: general.name     = Qwen3 32B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-06-04T18:16:04.080+08:00 level=INFO source=server.go:431 msg="starting llama server" cmd="/home/ubuntu/qwen3-ollama/ollama/bin/ollama runner --model /home/ubuntu/qwen3-ollama/models/blobs/sha256-de447d788da3df6b4ea340408b13fc2c3a2043a2dfc19178b12d501a4bd96484 --ctx-size 65536 --batch-size 512 --n-gpu-layers 4 --threads 48 --parallel 1 --tensor-split 1,1,1,1 --port 44153"
time=2025-06-04T18:16:04.081+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-04T18:16:04.081+08:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-04T18:16:04.081+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-06-04T18:16:04.094+08:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from /home/ubuntu/qwen3-ollama/ollama/lib/ollama/libggml-cpu-haswell.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
  Device 0: NVIDIA ***, VMM: yes
  Device 1: NVIDIA ***, VMM: yes
  Device 2: NVIDIA ***, VMM: yes
  Device 3: NVIDIA ***, VMM: yes
load_backend: loaded CUDA backend from /home/ubuntu/qwen3-ollama/ollama/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-06-04T18:16:04.495+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 CUDA.3.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.3.USE_GRAPHS=1 CUDA.3.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-06-04T18:16:04.495+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:44153"
time=2025-06-04T18:16:04.583+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) - 23818 MiB free
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 4090) - 23818 MiB free
llama_model_load_from_file_impl: using device CUDA2 (NVIDIA GeForce RTX 4090) - 23818 MiB free
llama_model_load_from_file_impl: using device CUDA3 (NVIDIA GeForce RTX 4090) - 23818 MiB free
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 32B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 32B
llama_model_loader: - kv   5:                          qwen3.block_count u32              = 64
llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 25600
llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 64
llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - kv  26:                          general.file_type u32              = 7
llama_model_loader: - type  f32:  257 tensors
llama_model_loader: - type  f16:   64 tensors
llama_model_loader: - type q8_0:  386 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 32.71 GiB (8.58 BPW) 
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 40960
print_info: n_embd           = 5120
print_info: n_layer          = 64
print_info: n_head           = 64
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 25600
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 40960
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 32B
print_info: model params     = 32.76 B
print_info: general.name     = Qwen3 32B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 4 repeating layers to GPU
load_tensors: offloaded 4/65 layers to GPU
load_tensors:        CUDA0 model buffer size =   498.79 MiB
load_tensors:        CUDA1 model buffer size =   498.79 MiB
load_tensors:        CUDA2 model buffer size =   498.79 MiB
load_tensors:        CUDA3 model buffer size =   498.79 MiB
load_tensors:   CPU_Mapped model buffer size = 31503.91 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 65536
llama_context: n_ctx_per_seq = 65536
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (65536) > n_ctx_train (40960) -- possible training context overflow
llama_context:        CPU  output buffer size =     0.60 MiB
llama_kv_cache_unified: kv_size = 65536, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1, padding = 32
llama_kv_cache_unified:      CUDA0 KV buffer size =   256.00 MiB
llama_kv_cache_unified:      CUDA1 KV buffer size =   256.00 MiB
llama_kv_cache_unified:      CUDA2 KV buffer size =   256.00 MiB
llama_kv_cache_unified:      CUDA3 KV buffer size =   256.00 MiB
llama_kv_cache_unified:        CPU KV buffer size = 15360.00 MiB
llama_kv_cache_unified: KV self size  = 16384.00 MiB, K (f16): 8192.00 MiB, V (f16): 8192.00 MiB
llama_context:      CUDA0 compute buffer size =  8740.00 MiB
llama_context:      CUDA1 compute buffer size =  8372.00 MiB
llama_context:      CUDA2 compute buffer size =  8372.00 MiB
llama_context:      CUDA3 compute buffer size =  8372.00 MiB
llama_context:  CUDA_Host compute buffer size =   138.01 MiB
llama_context: graph nodes  = 2438
llama_context: graph splits = 787 (with bs=512), 126 (with bs=1)
time=2025-06-04T18:16:10.850+08:00 level=INFO source=server.go:630 msg="llama runner started in 6.77 seconds"
[GIN] 2025/06/04 - 18:16:10 | 200 |  8.181069274s |       127.0.0.1 | POST     "/api/generate"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

No response

GiteaMirror added the bug label 2026-05-04 17:39:51 -05:00

@rick-github commented on GitHub (Jun 4, 2025):

> why does the 35G model pulled from the official Ollama website grow to 148G after adding context?

Because you added context. 35G is the size of the model weights. Think of it as the size of a program that you want to run, e.g. `vi` or `nano`. When you run the program, it needs space to store what it's working on. For the editor, that's RAM. For a model, that's context. More context, more VRAM/RAM needed.

Some other issues from the log. You haven't included a full log so it's hard to say for sure, but it's possible that you've misunderstood the purpose of `num_gpu`. It's not the number of GPUs, but the number of layers to offload to the GPU. You normally don't have to set this; ollama will estimate it on its own.

Also, you've set the context to 65536, but the model only supports a context of 40960.
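
For a sense of scale, here is a quick back-of-the-envelope check of the KV cache alone (an illustrative sketch, not ollama's actual allocation code), using the values printed in the log above:

```python
# Rough KV-cache size check using values from the log above; illustrative only.
n_ctx         = 65536   # num_ctx set in the Modelfile
n_layer       = 64      # qwen3.block_count
n_embd_k_gqa  = 1024    # per-layer K width (print_info: n_embd_k_gqa)
n_embd_v_gqa  = 1024    # per-layer V width (print_info: n_embd_v_gqa)
bytes_per_f16 = 2       # KV cache is f16 by default

kv_bytes = n_ctx * n_layer * (n_embd_k_gqa + n_embd_v_gqa) * bytes_per_f16
print(f"KV cache: {kv_bytes / 2**20:.2f} MiB")  # 16384.00 MiB, matching the log
```

That 16 GiB of KV cache comes on top of the ~32.7 GiB of weights, and the compute buffers in this log add roughly another 33 GiB across the four cards, which is how a "35G" model grows substantially once a 64k context is requested.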


@wingraver commented on GitHub (Jun 4, 2025):

@rick-github hi there. You haven't actually answered the question from the OP. I don't fully understand the maths of adding context, but loading a 35G model and getting a size four times larger (148G) doesn't add up to me. There have been a number of tickets like this raised here... something's not right. How did Ollama decide to allocate 148G for the stated context? Is there some formula being used?


@rick-github commented on GitHub (Jun 4, 2025):

> You haven't actually answered the question from the OP.

As I mentioned, the log is incomplete. Better data, better analysis.

> is there some formula that is being used?

Yes, as shown here:
https://github.com/ollama/ollama/blob/09430011936652cf55925184aaed6f2cebf62a75/fs/ggml/ggml.go#L426

Broadly speaking,

$VRAM_{required} = size_{model\,weights} + size_{graph} \times size_{parallel}$

$size_{graph}$ depends on the architecture, but is roughly proportional to

$size_{graph} = size_{batch} \times size_{embedding} \times size_{heads} \times context_{length}$

$size_{graph}$ is also non-linear on $context_{length}$, depending on the `max` of various values.

If there is more than one device, the weights are partitioned over the available devices but the memory graph is duplicated per device. So you get the situation where splitting a model across multiple devices consumes more VRAM than hosting the model on a single device.

Because of the permutations of model architecture, device allocation, head count, embedding and vocab sizes, etc., the memory estimation is sometimes inaccurate. This can be exacerbated when additional memory modifiers like flash attention and KV cache quantization are thrown into the mix. This is why setting `num_gpu` is a common way of taking ollama's initial estimate and fine-tuning it to maximize VRAM usage. Recent changes to ollama's memory estimation logic compute the worst-case memory graph in an effort to reduce runner OOMs, which results in estimates fluctuating from version to version as the code is adjusted.
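
As a very rough illustration of the shape of that estimate (a sketch of the proportionality above, not the actual code in fs/ggml/ggml.go), with the graph buffer duplicated on every device:

```python
def rough_vram_estimate_gib(weights_gib, kv_gib, graph_gib_per_device,
                            n_devices, n_parallel=1):
    """Illustrative sketch only, not ollama's real estimator: weights and the
    KV cache are split across devices, but the compute-graph buffer is
    duplicated on each device (and scales with the parallel setting)."""
    return weights_gib + kv_gib + graph_gib_per_device * n_devices * n_parallel

# Plugging in the buffer sizes from the log above (4x RTX 4090, 64k context):
# ~32.7 GiB of weights, 16 GiB of KV cache, ~8.2-8.5 GiB of compute buffer per GPU.
print(round(rough_vram_estimate_gib(32.7, 16.0, 8.4, n_devices=4), 1))  # ~82 GiB
```

Those ~82 GiB are the buffers actually allocated in this run; the 148G reported by ollama ps is presumably the scheduler's more pessimistic worst-case estimate, which is why the two numbers differ.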


@dan-and commented on GitHub (Jun 4, 2025):

@rick-github thanks for explaining it. I never realized that the memory graph needs to be duplicated on every GPU. This explains why the patch that groups GPUs instead of spreading across all available devices (PR https://github.com/ollama/ollama/pull/10678) also changed the allocated memory.


@rick-github commented on GitHub (Jun 4, 2025):

Yes, I quite like the work you did in #10678; I would like to see it (or some version of it) merged.


@Jin8999 commented on GitHub (Jun 5, 2025):

Thank you for your reply. I don't think I set num_gpu; I only specified CUDA_VISIBLE_DEVICES at runtime. Additionally, according to the official Qwen3 website, the context can be expanded up to 128K. Regarding my original question: not only increasing the context, but also reducing it, still makes the model larger than before. @rick-github


@rick-github commented on GitHub (Jun 5, 2025):

A complete log would facilitate more in-depth analysis.


@ccebelenski commented on GitHub (Jun 5, 2025):

Yeah, the new-ish memory estimator is fairly pessimistic. I get that a mixed-card setup is going to be weird, but in a more homogeneous setup I think it could do better. Right now it pretty much forces me to create a parameterized version from a Modelfile to optimize it, because the guess ollama makes is so bad. I was tripped up by the fact that 4 cards with 8GB of context each are estimated at 32GB, forcing a lot of layers back onto the CPU even though the model would fit fine.
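
As a tiny illustration of the compounding being described (assumed round numbers, simply restating the arithmetic in the comment above):

```python
# Illustrative only: a per-card context/graph reservation compounds across a
# homogeneous multi-GPU box, as described in the comment above.
per_card_context_gib = 8
n_cards = 4
print(per_card_context_gib * n_cards)  # 32 GiB reserved before weights are
                                       # placed, pushing layers back to the CPU
                                       # even though the model would fit
```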


@rick-github commented on GitHub (Jun 5, 2025):

@ccebelenski If you can provide logs it may help in finetuning the estimation logic.


@ccebelenski commented on GitHub (Jun 5, 2025):

@rick-github Absolutely. Here's an example I think fits. There's still plenty of VRAM available, and I haven't forced the layers to load with `num_gpu` here (it will load if I force it, and it fits fine with that context size), yet it didn't offload all the layers to the GPU.

ollama ps reports:
NAME                                                      ID              SIZE     PROCESSOR          UNTIL
hf.co/bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF:Q8_0    86350117bdb0    74 GB    12%/88% CPU/GPU    59 minutes from now

which is kind of unintuitive from my perspective for estimation purposes, while probably being technically correct if you add everything up.

[ollama-memory.txt](https://github.com/user-attachments/files/20618748/ollama-memory.txt)


@rick-github commented on GitHub (Jun 5, 2025):

@ccebelenski Can you add the earlier part of the log where it shows estimations?


@ccebelenski commented on GitHub (Jun 5, 2025):

@rick-github yeah, sorry, I cut it off accidentally.

[ollama-memory2.txt](https://github.com/user-attachments/files/20618828/ollama-memory2.txt)


@dan-and commented on GitHub (Jun 5, 2025):

I don't want to hijack this issue, but your issue, @ccebelenski, is addressed by my merge request at #10678. It includes memory examples where you can see that spreading over 4 GPUs adds a lot of overhead, while grouping the GPUs helps reduce that overhead and adds speed (less PCIe communication).


@ccebelenski commented on GitHub (Jun 5, 2025):

Absolutely, I've read your merge request, so I hope this adds to the urgency of moving it along, @dan-and. I also hope we can revisit the memory estimation process (yet again); it's not in a good place right now, blowing up VRAM requirements to avoid an OOM. The loader should have everything it needs to get really close if it's smart enough, so we wouldn't need to add the padding. I'm not familiar with the code myself, or I would take a deeper look, and I know I'm probably being a bit naive or it would have been addressed better the first time. I suspect it might need some kind of "look-ahead" based on how I think it sets things up in memory. It seems multi-stage, which makes sense given the split between context space and model space, but perhaps it just needs to optimize a bit more, fine-tune the buffer space being added, or get smarter when the model is split. (As an aside, I've wondered if it's possible to prioritize offload, for example biasing weights to the GPU when the context would normally overflow?)

> I don't want to hijack this issue, but your issue, @ccebelenski, is addressed by my merge request at #10678. It includes memory examples where you can see that spreading over 4 GPUs adds a lot of overhead, while grouping the GPUs helps reduce that overhead and adds speed (less PCIe communication).
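
A toy sketch of the "prioritize offload" aside above (entirely hypothetical, with made-up numbers; not how ollama currently places memory):

```python
def place_with_weight_bias(layer_sizes_gib, context_gib, free_vram_gib):
    """Hypothetical sketch of the 'bias weights to GPU' idea; NOT ollama's
    actual scheduler. Fill VRAM with weight layers first, and only keep the
    context/KV buffer on the GPU if there is room left over."""
    budget = free_vram_gib
    layers_on_gpu = 0
    for size in layer_sizes_gib:
        if size > budget:
            break
        budget -= size
        layers_on_gpu += 1
    context_on_gpu = context_gib <= budget  # otherwise the KV cache stays in system RAM
    return layers_on_gpu, context_on_gpu

# Made-up example: 64 layers of ~0.5 GiB each, a 24 GiB card, an 8 GiB context.
print(place_with_weight_bias([0.5] * 64, 8.0, 24.0))  # (48, False)
```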


@Jin8999 commented on GitHub (Jul 4, 2025):

My problem is still not solved. Now that I have expanded the context from 40k to 50k, why does it still use the CPU when there is enough GPU?

1. This is the Modelfile where I extended the context to 50k:

![Image](https://github.com/user-attachments/assets/95786f52-a22c-4f45-acfd-ae88fdb13ddb)

2. ollama ps:

![Image](https://github.com/user-attachments/assets/400112ee-c332-4bc8-bed8-4de30bf8abc6)

@rick-github commented on GitHub (Jul 4, 2025):

A complete log would facilitate more in-depth analysis.

Reference: github-starred/ollama#69286