[GH-ISSUE #10810] Running qwen3:235b inference with ollama, but the GPU is basically not in use #53612

Closed
opened 2026-04-29 04:14:37 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @oyww0792 on GitHub (May 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10810

What is the issue?

May 22 07:36:17 ubuntuserver ollama[1413041]: [GIN] 2025/05/22 - 07:36:17 | 200 | 26.275076ms | 127.0.0.1 | POST "/api/show"
May 22 07:36:20 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:20.047Z level=INFO source=server.go:135 msg="system memory" total="251.6 GiB" free="247.1 GiB" free_swap="2.8 GiB"
May 22 07:36:20 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:20.048Z level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=95 layers.offload=65 layers.split=13,13,13,13,13 memory.available="[23.2 GiB 23.2 GiB 23.2 GiB 23.2 GiB 23.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="152.6 GiB" memory.required.partial="110.7 GiB" memory.required.kv="752.0 MiB" memory.required.allocations="[22.0 GiB 22.2 GiB 22.2 GiB 22.2 GiB 22.2 GiB]" memory.weights.total="132.3 GiB" memory.weights.repeating="131.8 GiB" memory.weights.nonrepeating="486.9 MiB" memory.graph.full="2.0 GiB" memory.graph.partial="2.0 GiB"
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: loaded meta data with 33 key-value pairs and 1131 tensors from /home/ollama/.ollama/models/blobs/sha256-aeacdadecbed8a07e42026d1a1d3cd30715bb2994ebe4e4ca4009e1a4abe8d5d (version GGUF V3 (latest))
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 0: general.architecture str = qwen3moe
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 1: general.type str = model
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 2: general.name str = Qwen3 235B A22B
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 3: general.basename str = Qwen3
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 4: general.size_label str = 235B-A22B
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 5: general.license str = apache-2.0
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 6: general.license.link str = https://huggingface.co/Qwen/Qwen3-235...
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 7: general.tags arr[str,1] = ["text-generation"]
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 8: qwen3moe.block_count u32 = 94
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 9: qwen3moe.context_length u32 = 40960
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 10: qwen3moe.embedding_length u32 = 4096
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 11: qwen3moe.feed_forward_length u32 = 12288
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 12: qwen3moe.attention.head_count u32 = 64
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 13: qwen3moe.attention.head_count_kv u32 = 4
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 14: qwen3moe.rope.freq_base f32 = 1000000.000000
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 15: qwen3moe.attention.layer_norm_rms_epsilon f32 = 0.000001
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 16: qwen3moe.expert_used_count u32 = 8
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 17: qwen3moe.attention.key_length u32 = 128
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 18: qwen3moe.attention.value_length u32 = 128
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 19: qwen3moe.expert_count u32 = 128
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 20: qwen3moe.expert_feed_forward_length u32 = 1536
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 22: tokenizer.ggml.pre str = qwen2
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 151645
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 151643
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 28: tokenizer.ggml.bos_token_id u32 = 151643
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 29: tokenizer.ggml.add_bos_token bool = false
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 30: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 31: general.quantization_version u32 = 2
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - kv 32: general.file_type u32 = 15
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - type f32: 471 tensors
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - type f16: 94 tensors
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - type q4_K: 519 tensors
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_loader: - type q6_K: 47 tensors
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: file format = GGUF V3 (latest)
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: file type = Q4_K - Medium
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: file size = 132.63 GiB (4.85 BPW)
May 22 07:36:20 ubuntuserver ollama[1413041]: load: special tokens cache size = 26
May 22 07:36:20 ubuntuserver ollama[1413041]: load: token to piece cache size = 0.9311 MB
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: arch = qwen3moe
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: vocab_only = 1
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: model type = ?B
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: model params = 235.09 B
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: general.name = Qwen3 235B A22B
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: n_ff_exp = 0
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: vocab type = BPE
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: n_vocab = 151936
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: n_merges = 151387
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: BOS token = 151643 '<|endoftext|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: EOS token = 151645 '<|im_end|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: EOT token = 151645 '<|im_end|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: PAD token = 151643 '<|endoftext|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: LF token = 198 'Ċ'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: FIM PRE token = 151659 '<|fim_prefix|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: FIM SUF token = 151661 '<|fim_suffix|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: FIM MID token = 151660 '<|fim_middle|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: FIM PAD token = 151662 '<|fim_pad|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: FIM REP token = 151663 '<|repo_name|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: FIM SEP token = 151664 '<|file_sep|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: EOG token = 151643 '<|endoftext|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: EOG token = 151645 '<|im_end|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: EOG token = 151662 '<|fim_pad|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: EOG token = 151663 '<|repo_name|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: EOG token = 151664 '<|file_sep|>'
May 22 07:36:20 ubuntuserver ollama[1413041]: print_info: max token length = 256
May 22 07:36:20 ubuntuserver ollama[1413041]: llama_model_load: vocab only - skipping tensors
May 22 07:36:20 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:20.223Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/bin/ollama runner --model /home/ollama/.ollama/models/blobs/sha256-aeacdadecbed8a07e42026d1a1d3cd30715bb2994ebe4e4ca4009e1a4abe8d5d --ctx-size 4096 --batch-size 512 --n-gpu-layers 65 --threads 64 --parallel 1 --tensor-split 13,13,13,13,13 --port 37587"
May 22 07:36:20 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:20.223Z level=INFO source=sched.go:472 msg="loaded runners" count=1
May 22 07:36:20 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:20.223Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
May 22 07:36:20 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:20.223Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
May 22 07:36:20 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:20.234Z level=INFO source=runner.go:815 msg="starting go runner"
May 22 07:36:20 ubuntuserver ollama[1413041]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-x64.so
May 22 07:36:20 ubuntuserver ollama[1413041]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
May 22 07:36:20 ubuntuserver ollama[1413041]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
May 22 07:36:20 ubuntuserver ollama[1413041]: ggml_cuda_init: found 5 CUDA devices:
May 22 07:36:20 ubuntuserver ollama[1413041]: Device 0: NVIDIA GeForce RTX 4090 D, compute capability 8.9, VMM: yes
May 22 07:36:20 ubuntuserver ollama[1413041]: Device 1: NVIDIA GeForce RTX 4090 D, compute capability 8.9, VMM: yes
May 22 07:36:20 ubuntuserver ollama[1413041]: Device 2: NVIDIA GeForce RTX 4090 D, compute capability 8.9, VMM: yes
May 22 07:36:20 ubuntuserver ollama[1413041]: Device 3: NVIDIA GeForce RTX 4090 D, compute capability 8.9, VMM: yes
May 22 07:36:20 ubuntuserver ollama[1413041]: Device 4: NVIDIA GeForce RTX 4090 D, compute capability 8.9, VMM: yes
May 22 07:36:20 ubuntuserver ollama[1413041]: load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
May 22 07:36:20 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:20.911Z level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 CUDA.3.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.3.USE_GRAPHS=1 CUDA.3.PEER_MAX_BATCH_SIZE=128 CUDA.4.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.4.USE_GRAPHS=1 CUDA.4.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
May 22 07:36:20 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:20.912Z level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:37587"
May 22 07:36:20 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:20.976Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090 D) - 23736 MiB free
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 4090 D) - 23736 MiB free
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_load_from_file_impl: using device CUDA2 (NVIDIA GeForce RTX 4090 D) - 23736 MiB free
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_load_from_file_impl: using device CUDA3 (NVIDIA GeForce RTX 4090 D) - 23736 MiB free
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_load_from_file_impl: using device CUDA4 (NVIDIA GeForce RTX 4090 D) - 23736 MiB free
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: loaded meta data with 33 key-value pairs and 1131 tensors from /home/ollama/.ollama/models/blobs/sha256-aeacdadecbed8a07e42026d1a1d3cd30715bb2994ebe4e4ca4009e1a4abe8d5d (version GGUF V3 (latest))
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 0: general.architecture str = qwen3moe
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 1: general.type str = model
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 2: general.name str = Qwen3 235B A22B
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 3: general.basename str = Qwen3
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 4: general.size_label str = 235B-A22B
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 5: general.license str = apache-2.0
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 6: general.license.link str = https://huggingface.co/Qwen/Qwen3-235...
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 7: general.tags arr[str,1] = ["text-generation"]
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 8: qwen3moe.block_count u32 = 94
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 9: qwen3moe.context_length u32 = 40960
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 10: qwen3moe.embedding_length u32 = 4096
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 11: qwen3moe.feed_forward_length u32 = 12288
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 12: qwen3moe.attention.head_count u32 = 64
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 13: qwen3moe.attention.head_count_kv u32 = 4
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 14: qwen3moe.rope.freq_base f32 = 1000000.000000
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 15: qwen3moe.attention.layer_norm_rms_epsilon f32 = 0.000001
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 16: qwen3moe.expert_used_count u32 = 8
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 17: qwen3moe.attention.key_length u32 = 128
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 18: qwen3moe.attention.value_length u32 = 128
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 19: qwen3moe.expert_count u32 = 128
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 20: qwen3moe.expert_feed_forward_length u32 = 1536
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 22: tokenizer.ggml.pre str = qwen2
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 151645
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 151643
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 28: tokenizer.ggml.bos_token_id u32 = 151643
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 29: tokenizer.ggml.add_bos_token bool = false
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 30: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 31: general.quantization_version u32 = 2
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 32: general.file_type u32 = 15
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - type f32: 471 tensors
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - type f16: 94 tensors
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - type q4_K: 519 tensors
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - type q6_K: 47 tensors
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: file format = GGUF V3 (latest)
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: file type = Q4_K - Medium
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: file size = 132.63 GiB (4.85 BPW)
May 22 07:36:21 ubuntuserver ollama[1413041]: load: special tokens cache size = 26
May 22 07:36:21 ubuntuserver ollama[1413041]: load: token to piece cache size = 0.9311 MB
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: arch = qwen3moe
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: vocab_only = 0
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_ctx_train = 40960
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_embd = 4096
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_layer = 94
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_head = 64
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_head_kv = 4
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_rot = 128
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_swa = 0
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_swa_pattern = 1
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_embd_head_k = 128
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_embd_head_v = 128
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_gqa = 16
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_embd_k_gqa = 512
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_embd_v_gqa = 512
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: f_norm_eps = 0.0e+00
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: f_norm_rms_eps = 1.0e-06
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: f_clamp_kqv = 0.0e+00
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: f_max_alibi_bias = 0.0e+00
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: f_logit_scale = 0.0e+00
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: f_attn_scale = 0.0e+00
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_ff = 12288
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_expert = 128
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_expert_used = 8
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: causal attn = 1
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: pooling type = 0
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: rope type = 2
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: rope scaling = linear
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: freq_base_train = 1000000.0
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: freq_scale_train = 1
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_ctx_orig_yarn = 40960
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: rope_finetuned = unknown
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: ssm_d_conv = 0
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: ssm_d_inner = 0
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: ssm_d_state = 0
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: ssm_dt_rank = 0
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: ssm_dt_b_c_rms = 0
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: model type = 235B.A22B
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: model params = 235.09 B
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: general.name = Qwen3 235B A22B
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_ff_exp = 1536
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: vocab type = BPE
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_vocab = 151936
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_merges = 151387
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: BOS token = 151643 '<|endoftext|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOS token = 151645 '<|im_end|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOT token = 151645 '<|im_end|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: PAD token = 151643 '<|endoftext|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: LF token = 198 'Ċ'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: FIM PRE token = 151659 '<|fim_prefix|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: FIM SUF token = 151661 '<|fim_suffix|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: FIM MID token = 151660 '<|fim_middle|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: FIM PAD token = 151662 '<|fim_pad|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: FIM REP token = 151663 '<|repo_name|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: FIM SEP token = 151664 '<|file_sep|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOG token = 151643 '<|endoftext|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOG token = 151645 '<|im_end|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOG token = 151662 '<|fim_pad|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOG token = 151663 '<|repo_name|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOG token = 151664 '<|file_sep|>'
May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: max token length = 256
May 22 07:36:21 ubuntuserver ollama[1413041]: load_tensors: loading model tensors, this can take a while... (mmap = true)
May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: offloading 65 repeating layers to GPU
May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: offloaded 65/95 layers to GPU
May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: CUDA0 model buffer size = 18201.04 MiB
May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: CUDA1 model buffer size = 18201.04 MiB
May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: CUDA2 model buffer size = 18399.04 MiB
May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: CUDA3 model buffer size = 18201.04 MiB
May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: CUDA4 model buffer size = 19785.04 MiB
May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: CPU_Mapped model buffer size = 43022.27 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: constructing llama_context
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: n_seq_max = 1
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: n_ctx = 4096
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: n_ctx_per_seq = 4096
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: n_batch = 512
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: n_ubatch = 512
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: causal_attn = 1
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: flash_attn = 0
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: freq_base = 1000000.0
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: freq_scale = 1
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CPU output buffer size = 0.60 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: kv_size = 4096, type_k = 'f16', type_v = 'f16', n_layer = 94, can_shift = 1, padding = 32
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: CUDA0 KV buffer size = 104.00 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: CUDA1 KV buffer size = 104.00 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: CUDA2 KV buffer size = 104.00 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: CUDA3 KV buffer size = 104.00 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: CUDA4 KV buffer size = 104.00 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: CPU KV buffer size = 232.00 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: KV self size = 752.00 MiB, K (f16): 376.00 MiB, V (f16): 376.00 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CUDA0 compute buffer size = 1126.00 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CUDA1 compute buffer size = 568.00 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CUDA2 compute buffer size = 568.00 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CUDA3 compute buffer size = 568.00 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CUDA4 compute buffer size = 568.00 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CUDA_Host compute buffer size = 16.01 MiB
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: graph nodes = 6116
May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: graph splits = 414 (with bs=512), 65 (with bs=1)
May 22 07:36:34 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:34.757Z level=INFO source=server.go:630 msg="llama runner started in 14.53 seconds"
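[Editor's note] The offload line near the top of the log is consistent with the reported behavior: only 65 of the model's 95 layers fit across the five 24 GiB cards, so the remaining layers (the ~43 GiB `CPU_Mapped` buffer) stay in system RAM, and every generated token must pass through those CPU-resident layers. Decode speed is then bounded by CPU memory bandwidth, which can leave the GPUs mostly idle between their shares of the work. A quick sanity check on the figures copied from the log above (a rough sketch, not Ollama's internal accounting):

```python
# Values copied from the "offload" and "load_tensors" log lines above
# (qwen3:235b Q4_K_M on 5x RTX 4090 D).
gpu_layers = 65          # load_tensors: offloaded 65/95 layers to GPU
total_layers = 95
cpu_buffer_mib = 43022.27  # load_tensors: CPU_Mapped model buffer size

frac_on_gpu = gpu_layers / total_layers
cpu_weights_gib = cpu_buffer_mib / 1024

print(f"layers on GPU: {frac_on_gpu:.1%}")        # roughly 68%
print(f"weights in system RAM: {cpu_weights_gib:.1f} GiB")
```

Because layers execute sequentially, the ~32% of layers left on the CPU dominates per-token latency even though most of the weights are on the GPUs.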

[Image attachment from the original issue — not mirrored]

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.7.0-rc1

CUDA.3.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.3.USE_GRAPHS=1 CUDA.3.PEER_MAX_BATCH_SIZE=128 CUDA.4.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.4.USE_GRAPHS=1 CUDA.4.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) May 22 07:36:20 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:20.912Z level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:37587" May 22 07:36:20 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:20.976Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model" May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090 D) - 23736 MiB free May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 4090 D) - 23736 MiB free May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_load_from_file_impl: using device CUDA2 (NVIDIA GeForce RTX 4090 D) - 23736 MiB free May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_load_from_file_impl: using device CUDA3 (NVIDIA GeForce RTX 4090 D) - 23736 MiB free May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_load_from_file_impl: using device CUDA4 (NVIDIA GeForce RTX 4090 D) - 23736 MiB free May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: loaded meta data with 33 key-value pairs and 1131 tensors from /home/ollama/.ollama/models/blobs/sha256-aeacdadecbed8a07e42026d1a1d3cd30715bb2994ebe4e4ca4009e1a4abe8d5d (version GGUF V3 (latest)) May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 0: general.architecture str = qwen3moe May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 1: general.type str = model May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 2: general.name str = Qwen3 235B A22B May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 3: general.basename str = Qwen3 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 4: general.size_label str = 235B-A22B May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 5: general.license str = apache-2.0 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 6: general.license.link str = https://huggingface.co/Qwen/Qwen3-235... May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 7: general.tags arr[str,1] = ["text-generation"] May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 8: qwen3moe.block_count u32 = 94 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 9: qwen3moe.context_length u32 = 40960 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 10: qwen3moe.embedding_length u32 = 4096 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 11: qwen3moe.feed_forward_length u32 = 12288 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 12: qwen3moe.attention.head_count u32 = 64 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 13: qwen3moe.attention.head_count_kv u32 = 4 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 14: qwen3moe.rope.freq_base f32 = 1000000.000000 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 15: qwen3moe.attention.layer_norm_rms_epsilon f32 = 0.000001 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 16: qwen3moe.expert_used_count u32 = 8 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: 
- kv 17: qwen3moe.attention.key_length u32 = 128 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 18: qwen3moe.attention.value_length u32 = 128 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 19: qwen3moe.expert_count u32 = 128 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 20: qwen3moe.expert_feed_forward_length u32 = 1536 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 22: tokenizer.ggml.pre str = qwen2 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ... May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,151387] = ["? ?", "?? ??", "i n", "? t",... May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 151645 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 151643 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 28: tokenizer.ggml.bos_token_id u32 = 151643 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 29: tokenizer.ggml.add_bos_token bool = false May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 30: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... 
May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 31: general.quantization_version u32 = 2 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - kv 32: general.file_type u32 = 15 May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - type f32: 471 tensors May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - type f16: 94 tensors May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - type q4_K: 519 tensors May 22 07:36:21 ubuntuserver ollama[1413041]: llama_model_loader: - type q6_K: 47 tensors May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: file format = GGUF V3 (latest) May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: file type = Q4_K - Medium May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: file size = 132.63 GiB (4.85 BPW) May 22 07:36:21 ubuntuserver ollama[1413041]: load: special tokens cache size = 26 May 22 07:36:21 ubuntuserver ollama[1413041]: load: token to piece cache size = 0.9311 MB May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: arch = qwen3moe May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: vocab_only = 0 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_ctx_train = 40960 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_embd = 4096 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_layer = 94 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_head = 64 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_head_kv = 4 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_rot = 128 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_swa = 0 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_swa_pattern = 1 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_embd_head_k = 128 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_embd_head_v = 128 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_gqa = 16 May 22 07:36:21 ubuntuserver 
ollama[1413041]: print_info: n_embd_k_gqa = 512 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_embd_v_gqa = 512 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: f_norm_eps = 0.0e+00 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: f_norm_rms_eps = 1.0e-06 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: f_clamp_kqv = 0.0e+00 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: f_max_alibi_bias = 0.0e+00 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: f_logit_scale = 0.0e+00 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: f_attn_scale = 0.0e+00 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_ff = 12288 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_expert = 128 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_expert_used = 8 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: causal attn = 1 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: pooling type = 0 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: rope type = 2 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: rope scaling = linear May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: freq_base_train = 1000000.0 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: freq_scale_train = 1 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_ctx_orig_yarn = 40960 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: rope_finetuned = unknown May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: ssm_d_conv = 0 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: ssm_d_inner = 0 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: ssm_d_state = 0 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: ssm_dt_rank = 0 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: ssm_dt_b_c_rms = 0 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: model type = 235B.A22B May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: model params = 235.09 
B May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: general.name = Qwen3 235B A22B May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_ff_exp = 1536 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: vocab type = BPE May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_vocab = 151936 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: n_merges = 151387 May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: BOS token = 151643 '<|endoftext|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOS token = 151645 '<|im_end|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOT token = 151645 '<|im_end|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: PAD token = 151643 '<|endoftext|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: LF token = 198 '?' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: FIM PRE token = 151659 '<|fim_prefix|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: FIM SUF token = 151661 '<|fim_suffix|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: FIM MID token = 151660 '<|fim_middle|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: FIM PAD token = 151662 '<|fim_pad|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: FIM REP token = 151663 '<|repo_name|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: FIM SEP token = 151664 '<|file_sep|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOG token = 151643 '<|endoftext|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOG token = 151645 '<|im_end|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOG token = 151662 '<|fim_pad|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOG token = 151663 '<|repo_name|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: EOG token = 151664 '<|file_sep|>' May 22 07:36:21 ubuntuserver ollama[1413041]: print_info: max token length = 256 May 22 07:36:21 ubuntuserver 
ollama[1413041]: load_tensors: loading model tensors, this can take a while... (mmap = true) May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: offloading 65 repeating layers to GPU May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: offloaded 65/95 layers to GPU May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: CUDA0 model buffer size = 18201.04 MiB May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: CUDA1 model buffer size = 18201.04 MiB May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: CUDA2 model buffer size = 18399.04 MiB May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: CUDA3 model buffer size = 18201.04 MiB May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: CUDA4 model buffer size = 19785.04 MiB May 22 07:36:24 ubuntuserver ollama[1413041]: load_tensors: CPU_Mapped model buffer size = 43022.27 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: constructing llama_context May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: n_seq_max = 1 May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: n_ctx = 4096 May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: n_ctx_per_seq = 4096 May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: n_batch = 512 May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: n_ubatch = 512 May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: causal_attn = 1 May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: flash_attn = 0 May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: freq_base = 1000000.0 May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: freq_scale = 1 May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CPU output buffer size = 0.60 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: kv_size = 
4096, type_k = 'f16', type_v = 'f16', n_layer = 94, can_shift = 1, padding = 32 May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: CUDA0 KV buffer size = 104.00 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: CUDA1 KV buffer size = 104.00 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: CUDA2 KV buffer size = 104.00 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: CUDA3 KV buffer size = 104.00 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: CUDA4 KV buffer size = 104.00 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: CPU KV buffer size = 232.00 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_kv_cache_unified: KV self size = 752.00 MiB, K (f16): 376.00 MiB, V (f16): 376.00 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CUDA0 compute buffer size = 1126.00 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CUDA1 compute buffer size = 568.00 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CUDA2 compute buffer size = 568.00 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CUDA3 compute buffer size = 568.00 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CUDA4 compute buffer size = 568.00 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: CUDA_Host compute buffer size = 16.01 MiB May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: graph nodes = 6116 May 22 07:36:34 ubuntuserver ollama[1413041]: llama_context: graph splits = 414 (with bs=512), 65 (with bs=1) May 22 07:36:34 ubuntuserver ollama[1413041]: time=2025-05-22T07:36:34.757Z level=INFO source=server.go:630 msg="llama runner started in 14.53 seconds" ![Image](https://github.com/user-attachments/assets/0b22bb6d-a24c-438b-ac2f-47fbb19770dc) ### Relevant log output ```shell ``` ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.7.0-rc1
GiteaMirror added the bug label 2026-04-29 04:14:37 -05:00
@rick-github commented on GitHub (May 22, 2025):

The model is 100% loaded in the GPUs. The low utilization is due to the layered nature of models, see [here](https://github.com/ollama/ollama/issues/7648#issuecomment-2473561990). qwen3:235b is an MoE model: only 22B of the 235B parameters are active per token, which may further reduce utilization.
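The comment's two effects can be sketched with rough arithmetic from the numbers in the log above (a back-of-the-envelope model, not a profile: it ignores attention cost, CPU speed, and transfer overhead, and assumes layers execute strictly in sequence at batch size 1):

```python
# Why per-GPU utilization looks low for qwen3:235b split across 5 GPUs.
# All numbers are taken from the log output above; the duty-cycle model
# is a deliberate simplification.

total_params_b = 235.09   # print_info: model params = 235.09 B
active_params_b = 22.0    # A22B: ~22B of 235B params active per token (8 of 128 experts)

# MoE sparsity: fraction of the weights actually exercised per token.
active_fraction = active_params_b / total_params_b

# Layer split: 65 of 95 layers are on the GPUs (load_tensors: offloaded 65/95),
# spread over 5 cards. For one request, layers run one after another, so any
# single GPU is busy for at most its share of the layers.
n_gpus = 5
gpu_layer_share = 65 / 95
per_gpu_duty = gpu_layer_share / n_gpus

print(f"active weight fraction per token: {active_fraction:.1%}")   # roughly 9%
print(f"per-GPU busy-time upper bound:    {per_gpu_duty:.1%}")      # roughly 14%
```

So even before counting the 30 layers that run on the (much slower) CPU, no single GPU can show more than about 14% utilization on a single-stream workload, which matches the near-idle `nvidia-smi` readings reported in the issue.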
Reference: github-starred/ollama#53612