[GH-ISSUE #10832] unsloth/Qwen3-32B-128K-GGUF cannot use multiple GPUs #7113

Closed
opened 2026-04-12 19:06:10 -05:00 by GiteaMirror · 2 comments

Originally created by @adamhj on GitHub (May 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10832

What is the issue?

Today I tried to run unsloth/Qwen3-32B-128K-GGUF (Q6_K) on Ollama, on a machine with six 24 GB NVIDIA A10 GPUs, but after the model loaded I found that it was running 100% on CPU:

NAME                           ID              SIZE     PROCESSOR    UNTIL
unsloth/Qwen3:32B-128K-Q6_K    3ddf246c242f    60 GB    100% CPU     4 minutes from now

I can run the official Qwen3:32B-Q6_K model on the same machine without any problem.

I suspect the problem is that the 128K model uses RoPE scaling and for some reason cannot utilize multiple GPUs. To test this, I also ran unsloth/Qwen3-8B-128K-GGUF (Q6_K) on the same machine. It did run, but only the first GPU was utilized, with its VRAM nearly full while the others remained empty. Other models don't behave this way: their VRAM is distributed evenly across all six cards.
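A quick way to confirm the per-GPU placement (an editor's addition, not from the original report; both commands are standard) is to compare Ollama's view of the load with the driver's:

ollama ps
nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv

On an even split, all six A10s report similar memory.used; in the failure cases above, either GPU 0 is nearly full with the rest idle (the 8B model), or every GPU sits near zero (the 32B model, which runs 100% on CPU).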

Relevant log output

[root@126gpu-10 Qwen3-30B-A3B-GGUF]# docker logs --tail 10 -f ollama
[GIN] 2025/05/23 - 05:46:03 | 200 |         9m31s | 10.3.2.153 | POST     "/v1/chat/completions"
time=2025-05-23T05:51:09.589Z level=WARN source=sched.go:655 msg="gpu VRAM usage didn't recover within timeout" seconds=6.462163498 model=/root/.ollama/models/blobs/sha256-6b818b3bbaee2722c6201aa17fc4c0733da71bbb12b7f77abfd75e5c0cb98e08
time=2025-05-23T05:51:11.920Z level=WARN source=sched.go:655 msg="gpu VRAM usage didn't recover within timeout" seconds=8.793217106 model=/root/.ollama/models/blobs/sha256-6b818b3bbaee2722c6201aa17fc4c0733da71bbb12b7f77abfd75e5c0cb98e08
time=2025-05-23T05:51:14.104Z level=WARN source=sched.go:655 msg="gpu VRAM usage didn't recover within timeout" seconds=10.977573067 model=/root/.ollama/models/blobs/sha256-6b818b3bbaee2722c6201aa17fc4c0733da71bbb12b7f77abfd75e5c0cb98e08
[GIN] 2025/05/23 - 06:34:00 | 200 |      70.738µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/23 - 06:34:00 | 200 |      36.864µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/05/23 - 10:43:24 | 200 |      64.701µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/23 - 10:43:24 | 200 |      59.119µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/05/23 - 10:44:55 | 200 |      59.305µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/23 - 10:44:55 | 200 |      69.853µs |       127.0.0.1 | GET      "/api/ps"
time=2025-05-23T10:46:01.917Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-23T10:46:05.808Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-23T10:46:05.835Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-23T10:46:05.836Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.839Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.842Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.845Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.847Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.850Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.852Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.855Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.857Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.860Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.863Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.865Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.868Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:05.870Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:08.223Z level=INFO source=server.go:106 msg="system memory" total="377.1 GiB" free="366.6 GiB" free_swap="15.5 GiB"
time=2025-05-23T10:46:08.224Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-23T10:46:08.224Z level=INFO source=server.go:139 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=0 layers.split="" memory.available="[21.7 GiB 21.7 GiB 21.7 GiB 21.7 GiB 21.7 GiB 21.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="56.4 GiB" memory.required.partial="0 B" memory.required.kv="32.0 GiB" memory.required.allocations="[0 B 0 B 0 B 0 B 0 B 0 B]" memory.weights.total="24.4 GiB" memory.weights.repeating="23.8 GiB" memory.weights.nonrepeating="608.6 MiB" memory.graph.full="42.7 GiB" memory.graph.partial="42.7 GiB"
llama_model_loader: loaded meta data with 36 key-value pairs and 707 tensors from /root/.ollama/models/blobs/sha256-6b818b3bbaee2722c6201aa17fc4c0733da71bbb12b7f77abfd75e5c0cb98e08 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3-32B-128K
llama_model_loader: - kv   3:                           general.finetune str              = 128k
llama_model_loader: - kv   4:                           general.basename str              = Qwen3-32B-128K
llama_model_loader: - kv   5:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   6:                         general.size_label str              = 32B
llama_model_loader: - kv   7:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv   8:                          qwen3.block_count u32              = 64
llama_model_loader: - kv   9:                       qwen3.context_length u32              = 131072
llama_model_loader: - kv  10:                     qwen3.embedding_length u32              = 5120
llama_model_loader: - kv  11:                  qwen3.feed_forward_length u32              = 25600
llama_model_loader: - kv  12:                 qwen3.attention.head_count u32              = 64
llama_model_loader: - kv  13:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  15:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  16:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  17:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  18:                    qwen3.rope.scaling.type str              = yarn
llama_model_loader: - kv  19:                  qwen3.rope.scaling.factor f32              = 4.000000
llama_model_loader: - kv  20: qwen3.rope.scaling.original_context_length u32              = 32768
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 151654
llama_model_loader: - kv  28:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  29:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  30:               general.quantization_version u32              = 2
llama_model_loader: - kv  31:                          general.file_type u32              = 18
llama_model_loader: - kv  32:                      quantize.imatrix.file str              = Qwen3-32B-128K-GGUF/imatrix_unsloth.dat
llama_model_loader: - kv  33:                   quantize.imatrix.dataset str              = unsloth_calibration_Qwen3-32B-128K.txt
llama_model_loader: - kv  34:             quantize.imatrix.entries_count i32              = 448
llama_model_loader: - kv  35:              quantize.imatrix.chunks_count i32              = 685
llama_model_loader: - type  f32:  257 tensors
llama_model_loader: - type q6_K:  450 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q6_K
print_info: file size   = 25.03 GiB (6.56 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 32.76 B
print_info: general.name     = Qwen3-32B-128K
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 11 ','
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151654 '<|vision_pad|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-05-23T10:46:08.427Z level=INFO source=server.go:410 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-6b818b3bbaee2722c6201aa17fc4c0733da71bbb12b7f77abfd75e5c0cb98e08 --ctx-size 131072 --batch-size 4 --threads 64 --no-mmap --parallel 1 --port 34438"
time=2025-05-23T10:46:08.428Z level=INFO source=sched.go:452 msg="loaded runners" count=1
time=2025-05-23T10:46:08.428Z level=INFO source=server.go:589 msg="waiting for llama runner to start responding"
time=2025-05-23T10:46:08.428Z level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-23T10:46:08.470Z level=INFO source=runner.go:853 msg="starting go runner"
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2025-05-23T10:46:08.487Z level=INFO source=ggml.go:103 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-05-23T10:46:08.489Z level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:34438"
llama_model_loader: loaded meta data with 36 key-value pairs and 707 tensors from /root/.ollama/models/blobs/sha256-6b818b3bbaee2722c6201aa17fc4c0733da71bbb12b7f77abfd75e5c0cb98e08 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3-32B-128K
llama_model_loader: - kv   3:                           general.finetune str              = 128k
llama_model_loader: - kv   4:                           general.basename str              = Qwen3-32B-128K
llama_model_loader: - kv   5:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   6:                         general.size_label str              = 32B
llama_model_loader: - kv   7:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv   8:                          qwen3.block_count u32              = 64
llama_model_loader: - kv   9:                       qwen3.context_length u32              = 131072
llama_model_loader: - kv  10:                     qwen3.embedding_length u32              = 5120
llama_model_loader: - kv  11:                  qwen3.feed_forward_length u32              = 25600
llama_model_loader: - kv  12:                 qwen3.attention.head_count u32              = 64
llama_model_loader: - kv  13:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  15:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  16:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  17:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  18:                    qwen3.rope.scaling.type str              = yarn
llama_model_loader: - kv  19:                  qwen3.rope.scaling.factor f32              = 4.000000
llama_model_loader: - kv  20: qwen3.rope.scaling.original_context_length u32              = 32768
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 151654
llama_model_loader: - kv  28:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  29:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  30:               general.quantization_version u32              = 2
llama_model_loader: - kv  31:                          general.file_type u32              = 18
llama_model_loader: - kv  32:                      quantize.imatrix.file str              = Qwen3-32B-128K-GGUF/imatrix_unsloth.dat
llama_model_loader: - kv  33:                   quantize.imatrix.dataset str              = unsloth_calibration_Qwen3-32B-128K.txt
llama_model_loader: - kv  34:             quantize.imatrix.entries_count i32              = 448
llama_model_loader: - kv  35:              quantize.imatrix.chunks_count i32              = 685
llama_model_loader: - type  f32:  257 tensors
llama_model_loader: - type q6_K:  450 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q6_K
print_info: file size   = 25.03 GiB (6.56 BPW)
time=2025-05-23T10:46:08.680Z level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model"
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 5120
print_info: n_layer          = 64
print_info: n_head           = 64
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 25600
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = yarn
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 0.25
print_info: n_ctx_orig_yarn  = 32768
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 32B
print_info: model params     = 32.76 B
print_info: general.name     = Qwen3-32B-128K
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 11 ','
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151654 '<|vision_pad|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors:          CPU model buffer size = 25632.22 MiB
llama_context: constructing llama_context
llama_context: n_batch is less than GGML_KQ_MASK_PAD - increasing to 64
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 131072
llama_context: n_ctx_per_seq = 131072
llama_context: n_batch       = 64
llama_context: n_ubatch      = 64
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 0.25
llama_context:        CPU  output buffer size =     0.60 MiB
init: kv_size = 131072, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1
[GIN] 2025/05/23 - 10:46:31 | 200 |      58.685µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/23 - 10:46:31 | 200 |      75.827µs |       127.0.0.1 | GET      "/api/ps"
init:        CPU KV buffer size = 32768.00 MiB
llama_context: KV self size  = 32768.00 MiB, K (f16): 16384.00 MiB, V (f16): 16384.00 MiB
llama_context:        CPU compute buffer size =  2086.50 MiB
llama_context: graph nodes  = 2438
llama_context: graph splits = 1
time=2025-05-23T10:46:36.053Z level=INFO source=server.go:628 msg="llama runner started in 27.63 seconds"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.6.8

GiteaMirror added the bug label 2026-04-12 19:06:10 -05:00

@rick-github commented on GitHub (May 23, 2025):

time=2025-05-23T10:46:08.224Z level=INFO source=server.go:139 msg=offload library=cuda layers.requested=-1 layers.model=65
 layers.offload=0 layers.split="" memory.available="[21.7 GiB 21.7 GiB 21.7 GiB 21.7 GiB 21.7 GiB 21.7 GiB]"
 memory.gpu_overhead="0 B" memory.required.full="56.4 GiB" memory.required.partial="0 B" memory.required.kv="32.0 GiB"
 memory.required.allocations="[0 B 0 B 0 B 0 B 0 B 0 B]" memory.weights.total="24.4 GiB" memory.weights.repeating="23.8 GiB"
 memory.weights.nonrepeating="608.6 MiB" memory.graph.full="42.7 GiB" memory.graph.partial="42.7 GiB"

The context size of 131072 tokens pushes the memory graph to a size (42.7 GiB) that won't fit on any single GPU (21.7 GiB available each), so the model is loaded into system RAM.
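The KV figure checks out against the model metadata: with 64 layers, a grouped-KV width of 8 heads × 128 = 1024, an f16 cache (2 bytes per element), and kv_size = 131072, K and V each take 64 × 1024 × 131072 × 2 B = 16 GiB, i.e. 32 GiB total, matching the "KV self size = 32768.00 MiB" line in the log, and this cost scales linearly with context length. One workaround (a suggestion beyond what the original thread discussed) is to request a smaller context so the per-GPU allocation fits, for example via the standard num_ctx option on the generate API, assuming a 32K context is acceptable for the workload:

curl http://localhost:11434/api/generate -d '{
  "model": "unsloth/Qwen3:32B-128K-Q6_K",
  "prompt": "hello",
  "options": { "num_ctx": 32768 }
}'

At num_ctx = 32768 the KV cache drops to 8 GiB and the compute graph shrinks with it, which should allow the scheduler to offload and split the layers across the six cards.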


@adamhj commented on GitHub (May 30, 2025):

Got it, thanks for your reply.

Reference: github-starred/ollama#7113