[GH-ISSUE #11619] Context size above 512K breaks flash attention and KV cache quantization #54186

Open
opened 2026-04-29 05:20:03 -05:00 by GiteaMirror · 7 comments

Originally created by @Expro on GitHub (Aug 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11619

What is the issue?

When flash attention and KV cache quantization are enabled and you specify a context size greater than 512K, even for a model that supports it (unsloth/Qwen3-Coder-30B-A3B-Instruct-1M-GGUF, for example), Ollama tries to run the model without flash attention or KV cache quantization and crashes.
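
A rough reproduction sketch (the hf.co pull syntax and the request shape are assumptions about how the model was loaded; any request asking for num_ctx above 512K should trigger the same fallback):

```shell
# Enable flash attention and KV cache quantization, matching the server
# config shown in the logs below.
export OLLAMA_FLASH_ATTENTION=true
export OLLAMA_KV_CACHE_TYPE=q8_0
ollama serve &

# Request the model's full 1M context; values above 512K trigger the fallback.
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-1M-GGUF",
  "prompt": "hello",
  "options": { "num_ctx": 1048576 }
}'
```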

Relevant log output

(none included in the report; full server logs are posted in the comments below)

OS

Docker

GPU

AMD

CPU

AMD

Ollama version

0.10.1

GiteaMirror added the bug label 2026-04-29 05:20:03 -05:00

@rick-github commented on GitHub (Aug 2, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in debugging.


@Expro commented on GitHub (Aug 2, 2025):

time=2025-08-02T11:31:31.269Z level=INFO source=routes.go:1238 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES:0 HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-08-02T11:31:31.330Z level=INFO source=images.go:476 msg="total blobs: 41"
time=2025-08-02T11:31:31.332Z level=INFO source=images.go:483 msg="total unused blobs removed: 0"
time=2025-08-02T11:31:31.332Z level=INFO source=routes.go:1291 msg="Listening on [::]:11434 (version 0.10.1)"
time=2025-08-02T11:31:31.332Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-02T11:31:31.334Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-08-02T11:31:31.335Z level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=GPU-1eaed82168db2231 gpu_type=gfx1100
time=2025-08-02T11:31:31.335Z level=INFO source=types.go:130 msg="inference compute" id=GPU-1eaed82168db2231 library=rocm variant="" compute=gfx1100 driver=0.0 name=1002:7448 total="45.0 GiB" available="45.0 GiB"
[GIN] 2025/08/02 - 11:32:10 | 200 | 44.744µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/08/02 - 11:32:10 | 200 | 17.255126ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/08/02 - 11:32:18 | 200 | 26.884µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/08/02 - 11:32:18 | 200 | 76.446789ms | 127.0.0.1 | POST "/api/show"
time=2025-08-02T11:32:18.164Z level=INFO source=server.go:135 msg="system memory" total="88.3 GiB" free="77.2 GiB" free_swap="0 B"
time=2025-08-02T11:32:18.165Z level=INFO source=server.go:175 msg=offload library=rocm layers.requested=-1 layers.model=49 layers.offload=0 layers.split="" memory.available="[45.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="64.0 GiB" memory.required.partial="0 B" memory.required.kv="48.0 GiB" memory.required.allocations="[0 B]" memory.weights.total="16.0 GiB" memory.weights.repeating="15.7 GiB" memory.weights.nonrepeating="243.4 MiB" memory.graph.full="64.0 GiB" memory.graph.partial="64.0 GiB"
time=2025-08-02T11:32:18.165Z level=WARN source=server.go:206 msg="flash attention enabled but not supported by gpu"
time=2025-08-02T11:32:18.165Z level=WARN source=server.go:229 msg="quantized kv cache requested but flash attention disabled" type=q8_0
llama_model_loader: loaded meta data with 45 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-09140eb6a13695c4543398f53bb634d5e11ed865d353b92310ede5bc8b4dbbf9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen3moe
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen3-Coder-30B-A3B-Instruct-1M
llama_model_loader: - kv 3: general.finetune str = Instruct-1m
llama_model_loader: - kv 4: general.basename str = Qwen3-Coder-30B-A3B-Instruct-1M
llama_model_loader: - kv 5: general.quantized_by str = Unsloth
llama_model_loader: - kv 6: general.size_label str = 30B-A3B
llama_model_loader: - kv 7: general.license str = apache-2.0
llama_model_loader: - kv 8: general.license.link str = https://huggingface.co/Qwen/Qwen3-Cod...
llama_model_loader: - kv 9: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 10: general.base_model.count u32 = 1
llama_model_loader: - kv 11: general.base_model.0.name str = Qwen3 Coder 30B A3B Instruct
llama_model_loader: - kv 12: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 13: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen3-Cod...
llama_model_loader: - kv 14: general.tags arr[str,2] = ["unsloth", "text-generation"]
llama_model_loader: - kv 15: qwen3moe.block_count u32 = 48
llama_model_loader: - kv 16: qwen3moe.context_length u32 = 1048576
llama_model_loader: - kv 17: qwen3moe.embedding_length u32 = 2048
llama_model_loader: - kv 18: qwen3moe.feed_forward_length u32 = 5472
llama_model_loader: - kv 19: qwen3moe.attention.head_count u32 = 32
llama_model_loader: - kv 20: qwen3moe.attention.head_count_kv u32 = 4
llama_model_loader: - kv 21: qwen3moe.rope.freq_base f32 = 10000000.000000
llama_model_loader: - kv 22: qwen3moe.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 23: qwen3moe.expert_used_count u32 = 8
llama_model_loader: - kv 24: qwen3moe.attention.key_length u32 = 128
llama_model_loader: - kv 25: qwen3moe.attention.value_length u32 = 128
llama_model_loader: - kv 26: qwen3moe.expert_count u32 = 128
llama_model_loader: - kv 27: qwen3moe.expert_feed_forward_length u32 = 768
llama_model_loader: - kv 28: qwen3moe.expert_shared_feed_forward_length u32 = 0
llama_model_loader: - kv 29: qwen3moe.rope.scaling.type str = yarn
llama_model_loader: - kv 30: qwen3moe.rope.scaling.factor f32 = 4.000000
llama_model_loader: - kv 31: qwen3moe.rope.scaling.original_context_length u32 = 262144
llama_model_loader: - kv 32: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 33: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 34: tokenizer.ggml.tokens arr[str,151936] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 35: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 36: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 37: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 38: tokenizer.ggml.padding_token_id u32 = 151654
llama_model_loader: - kv 39: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 40: tokenizer.chat_template str = {#- Copyright 2025-present the Unslot...
llama_model_loader: - kv 41: general.quantization_version u32 = 2
llama_model_loader: - kv 42: general.file_type u32 = 25
llama_model_loader: - kv 43: quantize.imatrix.file str = Qwen3-Coder-30B-A3B-Instruct-1M-GGUF/...
llama_model_loader: - kv 44: quantize.imatrix.entries_count u32 = 383
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q4_K: 2 tensors
llama_model_loader: - type q5_K: 48 tensors
llama_model_loader: - type q6_K: 1 tensors
llama_model_loader: - type iq4_nl: 287 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = IQ4_NL - 4.5 bpw
print_info: file size = 16.12 GiB (4.53 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen3moe
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 30.53 B
print_info: general.name = Qwen3-Coder-30B-A3B-Instruct-1M
print_info: n_ff_exp = 0
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 11 ','
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151654 '<|vision_pad|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-08-02T11:32:18.394Z level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-09140eb6a13695c4543398f53bb634d5e11ed865d353b92310ede5bc8b4dbbf9 --ctx-size 1048576 --batch-size 512 --threads 12 --no-mmap --parallel 1 --port 39285"
time=2025-08-02T11:32:18.394Z level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-08-02T11:32:18.394Z level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-08-02T11:32:18.395Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
time=2025-08-02T11:32:18.407Z level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
time=2025-08-02T11:32:18.412Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-08-02T11:32:18.413Z level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:39285"
llama_model_loader: loaded meta data with 45 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-09140eb6a13695c4543398f53bb634d5e11ed865d353b92310ede5bc8b4dbbf9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen3moe
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen3-Coder-30B-A3B-Instruct-1M
llama_model_loader: - kv 3: general.finetune str = Instruct-1m
llama_model_loader: - kv 4: general.basename str = Qwen3-Coder-30B-A3B-Instruct-1M
llama_model_loader: - kv 5: general.quantized_by str = Unsloth
llama_model_loader: - kv 6: general.size_label str = 30B-A3B
llama_model_loader: - kv 7: general.license str = apache-2.0
llama_model_loader: - kv 8: general.license.link str = https://huggingface.co/Qwen/Qwen3-Cod...
llama_model_loader: - kv 9: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 10: general.base_model.count u32 = 1
llama_model_loader: - kv 11: general.base_model.0.name str = Qwen3 Coder 30B A3B Instruct
llama_model_loader: - kv 12: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 13: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen3-Cod...
llama_model_loader: - kv 14: general.tags arr[str,2] = ["unsloth", "text-generation"]
llama_model_loader: - kv 15: qwen3moe.block_count u32 = 48
llama_model_loader: - kv 16: qwen3moe.context_length u32 = 1048576
llama_model_loader: - kv 17: qwen3moe.embedding_length u32 = 2048
llama_model_loader: - kv 18: qwen3moe.feed_forward_length u32 = 5472
llama_model_loader: - kv 19: qwen3moe.attention.head_count u32 = 32
llama_model_loader: - kv 20: qwen3moe.attention.head_count_kv u32 = 4
llama_model_loader: - kv 21: qwen3moe.rope.freq_base f32 = 10000000.000000
llama_model_loader: - kv 22: qwen3moe.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 23: qwen3moe.expert_used_count u32 = 8
llama_model_loader: - kv 24: qwen3moe.attention.key_length u32 = 128
llama_model_loader: - kv 25: qwen3moe.attention.value_length u32 = 128
llama_model_loader: - kv 26: qwen3moe.expert_count u32 = 128
llama_model_loader: - kv 27: qwen3moe.expert_feed_forward_length u32 = 768
llama_model_loader: - kv 28: qwen3moe.expert_shared_feed_forward_length u32 = 0
llama_model_loader: - kv 29: qwen3moe.rope.scaling.type str = yarn
llama_model_loader: - kv 30: qwen3moe.rope.scaling.factor f32 = 4.000000
llama_model_loader: - kv 31: qwen3moe.rope.scaling.original_context_length u32 = 262144
llama_model_loader: - kv 32: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 33: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 34: tokenizer.ggml.tokens arr[str,151936] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 35: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 36: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 37: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 38: tokenizer.ggml.padding_token_id u32 = 151654
llama_model_loader: - kv 39: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 40: tokenizer.chat_template str = {#- Copyright 2025-present the Unslot...
llama_model_loader: - kv 41: general.quantization_version u32 = 2
llama_model_loader: - kv 42: general.file_type u32 = 25
llama_model_loader: - kv 43: quantize.imatrix.file str = Qwen3-Coder-30B-A3B-Instruct-1M-GGUF/...
llama_model_loader: - kv 44: quantize.imatrix.entries_count u32 = 383
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q4_K: 2 tensors
llama_model_loader: - type q5_K: 48 tensors
llama_model_loader: - type q6_K: 1 tensors
llama_model_loader: - type iq4_nl: 287 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = IQ4_NL - 4.5 bpw
print_info: file size = 16.12 GiB (4.53 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen3moe
print_info: vocab_only = 0
print_info: n_ctx_train = 1048576
print_info: n_embd = 2048
print_info: n_layer = 48
print_info: n_head = 32
print_info: n_head_kv = 4
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 512
print_info: n_embd_v_gqa = 512
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 5472
print_info: n_expert = 128
print_info: n_expert_used = 8
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = yarn
print_info: freq_base_train = 10000000.0
print_info: freq_scale_train = 0.25
print_info: n_ctx_orig_yarn = 262144
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 30B.A3B
print_info: model params = 30.53 B
print_info: general.name = Qwen3-Coder-30B-A3B-Instruct-1M
print_info: n_ff_exp = 768
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 11 ','
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151654 '<|vision_pad|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: CPU model buffer size = 16503.15 MiB
time=2025-08-02T11:32:18.646Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 1048576
llama_context: n_ctx_per_seq = 1048576
llama_context: n_batch = 512
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 10000000.0
llama_context: freq_scale = 0.25
llama_context: CPU output buffer size = 0.59 MiB
llama_kv_cache_unified: kv_size = 1048576, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1, padding = 32
time=2025-08-02T11:33:23.338Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
time=2025-08-02T11:33:23.753Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"


@rick-github commented on GitHub (Aug 2, 2025):

time=2025-08-02T11:32:18.165Z level=INFO source=server.go:175 msg=offload library=rocm layers.requested=-1 layers.model=49 layers.offload=0 layers.split="" memory.available="[45.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="64.0 GiB" memory.required.partial="0 B" memory.required.kv="48.0 GiB" memory.required.allocations="[0 B]" memory.weights.total="16.0 GiB" memory.weights.repeating="15.7 GiB" memory.weights.nonrepeating="243.4 MiB" memory.graph.full="64.0 GiB" memory.graph.partial="64.0 GiB"

The size of the context results in no layers being loaded into the GPU. Flash attention is not supported on CPU.
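
For reference, a back-of-envelope check of that 48 GiB KV figure, using the model parameters from the log (n_layer=48, n_embd_k_gqa=n_embd_v_gqa=512, n_ctx=1048576) and assuming roughly 1 byte per element for q8_0 versus 2 bytes for f16:

```shell
# KV cache bytes ~= n_ctx * n_layer * (n_embd_k_gqa + n_embd_v_gqa) * bytes_per_element
echo $(( 1048576 * 48 * (512 + 512) * 1 ))  # q8_0, ~1 B/elem: 51539607552 B = 48 GiB
echo $(( 1048576 * 48 * (512 + 512) * 2 ))  # f16,   2 B/elem: ~96 GiB
```

Either way, the cache alone exceeds the 45.0 GiB of available VRAM reported in the log, so zero layers are offloaded.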


@Expro commented on GitHub (Aug 2, 2025):

Any way to control what lands on the GPU? I have 48 GB of VRAM; all layers of the model itself easily fit on the GPU.


@rick-github commented on GitHub (Aug 2, 2025):

Use a smaller context. The footprint of the cache can be reduced by enabling flash attention and setting [KV cache quantization](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-set-the-quantization-type-for-the-kv-cache).
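
For example (env var names as in the server config above; the 262144 context value is illustrative, chosen to match the model's pre-YaRN training context from the log):

```shell
# Flash attention plus a q8_0 KV cache, with a 256K context instead of 1M.
export OLLAMA_FLASH_ATTENTION=true
export OLLAMA_KV_CACHE_TYPE=q8_0
export OLLAMA_CONTEXT_LENGTH=262144
ollama serve
```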


@Expro commented on GitHub (Aug 2, 2025):

Both are already enabled (as evidenced by the env variables).


@rick-github commented on GitHub (Sep 1, 2025):

time=2025-08-02T11:32:18.165Z level=WARN source=server.go:206 msg="flash attention enabled but not supported by gpu"
time=2025-08-02T11:32:18.165Z level=WARN source=server.go:229 msg="quantized kv cache requested but flash attention disabled" type=q8_0

Sadly FA is not supported on the GPU. A smaller context would help. The new memory management in recent releases may also reduce the footprint.
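
To check whether a given setup hits the same fallback, the server log can be grepped for the two warnings quoted above (the container name is an assumption; this report runs Ollama in Docker):

```shell
# Grep the Docker container log for the flash-attention / KV-cache fallback warnings.
docker logs ollama 2>&1 | grep -E "flash attention enabled but not supported|quantized kv cache requested"
```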

Reference: github-starred/ollama#54186