[GH-ISSUE #15855] GLM4.7-flash (Unsloth Quants) does not have Flash Attention Support #72164

Closed
opened 2026-05-05 03:34:34 -05:00 by GiteaMirror · 7 comments

Originally created by @logxdx on GitHub (Apr 28, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15855

As the title says, GLM4.7-flash (Unsloth quants) does not have Flash Attention (FA) support. Because Ollama only honors a quantized OLLAMA_KV_CACHE_TYPE when flash attention is active, this also disables the quantized KV cache (see the server.go:209 and server.go:240 warnings in the logs below).
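The two warnings later in the log capture that interaction. Below is a minimal sketch of the fallback logic they describe, assuming a simple boolean gate; it is illustrative only, not Ollama's actual server.go code.

```go
package main

import "fmt"

// Illustrative sketch only (not Ollama's actual server.go logic): the log
// shows OLLAMA_FLASH_ATTENTION=true and OLLAMA_KV_CACHE_TYPE=q8_0, but the
// model reports no FA support, so the quantized cache type is dropped too.
func effectiveKVCacheType(faRequested, faSupported bool, kvCacheType string) string {
	if faRequested && !faSupported {
		fmt.Println("warn: flash attention enabled but not supported by model")
	}
	faEnabled := faRequested && faSupported
	if !faEnabled && kvCacheType != "" && kvCacheType != "f16" {
		fmt.Println("warn: OLLAMA_FLASH_ATTENTION must be enabled to use a quantized OLLAMA_KV_CACHE_TYPE")
		return "f16" // fall back to the unquantized default
	}
	return kvCacheType
}

func main() {
	fmt.Println("effective kv cache type:", effectiveKVCacheType(true, false, "q8_0"))
}
```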

Server Logs
time=2026-04-28T10:42:29.919+05:30 level=INFO source=routes.go:1752 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\logx\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2026-04-28T10:42:29.920+05:30 level=INFO source=routes.go:1754 msg="Ollama cloud disabled: false"
time=2026-04-28T10:42:29.934+05:30 level=INFO source=images.go:517 msg="total blobs: 60"
time=2026-04-28T10:42:29.938+05:30 level=INFO source=images.go:524 msg="total unused blobs removed: 0"
time=2026-04-28T10:42:29.940+05:30 level=INFO source=routes.go:1810 msg="Listening on 127.0.0.1:11434 (version 0.21.2)"
time=2026-04-28T10:42:29.942+05:30 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-28T10:42:29.950+05:30 level=WARN source=runner.go:502 msg="potentially incompatible library detected in PATH" location=C:\llamacpp\ggml-base.dll
time=2026-04-28T10:42:29.961+05:30 level=INFO source=server.go:444 msg="starting runner" cmd="C:\\Users\\logx\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 65380"
time=2026-04-28T10:42:30.671+05:30 level=INFO source=server.go:444 msg="starting runner" cmd="C:\\Users\\logx\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 65394"
time=2026-04-28T10:42:31.312+05:30 level=INFO source=server.go:444 msg="starting runner" cmd="C:\\Users\\logx\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 65410"
time=2026-04-28T10:42:31.587+05:30 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-04-28T10:42:31.589+05:30 level=INFO source=server.go:444 msg="starting runner" cmd="C:\\Users\\logx\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 65426"
time=2026-04-28T10:42:31.589+05:30 level=INFO source=server.go:444 msg="starting runner" cmd="C:\\Users\\logx\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 65427"
time=2026-04-28T10:42:31.895+05:30 level=INFO source=types.go:42 msg="inference compute" id=GPU-71ff890f-0c96-f116-24c3-62d930353a93 filter_id="" library=CUDA compute=8.9 name=CUDA0 description="NVIDIA GeForce RTX 4050 Laptop GPU" libdirs=ollama,cuda_v13 driver=13.2 pci_id=0000:01:00.0 type=discrete total="6.0 GiB" available="5.6 GiB"
time=2026-04-28T10:42:31.895+05:30 level=INFO source=routes.go:1860 msg="vram-based default context" total_vram="6.0 GiB" default_num_ctx=4096
time=2026-04-28T10:42:36.341+05:30 level=INFO source=server.go:444 msg="starting runner" cmd="C:\\Users\\logx\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 57092"
time=2026-04-28T10:42:36.562+05:30 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-04-28T10:42:36.562+05:30 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1
time=2026-04-28T10:42:36.562+05:30 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=4 threads=12
llama_model_loader: loaded meta data with 60 key-value pairs and 844 tensors from C:\Users\logx\.ollama\models\blobs\sha256-08a432581d3a797af07a021455ada33499185213ad250e85fb26daf3fe34c421 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                     general.sampling.top_p f32              = 0.950000
llama_model_loader: - kv   3:                      general.sampling.temp f32              = 1.000000
llama_model_loader: - kv   4:                               general.name str              = Glm-4.7-Flash
llama_model_loader: - kv   5:                           general.basename str              = Glm-4.7-Flash
llama_model_loader: - kv   6:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   7:                         general.size_label str              = 64x2.6B
llama_model_loader: - kv   8:                            general.license str              = mit
llama_model_loader: - kv   9:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv  10:                   general.base_model.count u32              = 1
llama_model_loader: - kv  11:                  general.base_model.0.name str              = GLM 4.7 Flash
llama_model_loader: - kv  12:          general.base_model.0.organization str              = Zai Org
llama_model_loader: - kv  13:              general.base_model.0.repo_url str              = https://huggingface.co/zai-org/GLM-4....
llama_model_loader: - kv  14:                               general.tags arr[str,2]       = ["unsloth", "text-generation"]
llama_model_loader: - kv  15:                          general.languages arr[str,2]       = ["en", "zh"]
llama_model_loader: - kv  16:                      deepseek2.block_count u32              = 47
llama_model_loader: - kv  17:                   deepseek2.context_length u32              = 202752
llama_model_loader: - kv  18:                 deepseek2.embedding_length u32              = 2048
llama_model_loader: - kv  19:              deepseek2.feed_forward_length u32              = 10240
llama_model_loader: - kv  20:             deepseek2.attention.head_count u32              = 20
llama_model_loader: - kv  21:          deepseek2.attention.head_count_kv u32              = 1
llama_model_loader: - kv  22:                   deepseek2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  23: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  24:                deepseek2.expert_used_count u32              = 4
llama_model_loader: - kv  25:               deepseek2.expert_group_count u32              = 1
llama_model_loader: - kv  26:          deepseek2.expert_group_used_count u32              = 1
llama_model_loader: - kv  27:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  28:        deepseek2.leading_dense_block_count u32              = 1
llama_model_loader: - kv  29:                       deepseek2.vocab_size u32              = 154880
llama_model_loader: - kv  30:            deepseek2.attention.q_lora_rank u32              = 768
llama_model_loader: - kv  31:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  32:             deepseek2.attention.key_length u32              = 576
llama_model_loader: - kv  33:           deepseek2.attention.value_length u32              = 512
llama_model_loader: - kv  34:         deepseek2.attention.key_length_mla u32              = 256
llama_model_loader: - kv  35:       deepseek2.attention.value_length_mla u32              = 256
llama_model_loader: - kv  36:       deepseek2.expert_feed_forward_length u32              = 1536
llama_model_loader: - kv  37:                     deepseek2.expert_count u32              = 64
llama_model_loader: - kv  38:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  39:             deepseek2.expert_weights_scale f32              = 1.800000
llama_model_loader: - kv  40:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  41:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  42:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  43:                         tokenizer.ggml.pre str              = glm4
llama_model_loader: - kv  44:                      tokenizer.ggml.tokens arr[str,154880]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  45:                  tokenizer.ggml.token_type arr[i32,154880]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  46:                      tokenizer.ggml.merges arr[str,321649]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  47:                tokenizer.ggml.eos_token_id u32              = 154820
llama_model_loader: - kv  48:            tokenizer.ggml.padding_token_id u32              = 154821
llama_model_loader: - kv  49:                tokenizer.ggml.bos_token_id u32              = 154822
llama_model_loader: - kv  50:                tokenizer.ggml.eot_token_id u32              = 154827
llama_model_loader: - kv  51:            tokenizer.ggml.unknown_token_id u32              = 154820
llama_model_loader: - kv  52:                tokenizer.ggml.eom_token_id u32              = 154829
llama_model_loader: - kv  53:                    tokenizer.chat_template str              = [gMASK]<sop>\n{%- if tools -%}\n<|syste...
llama_model_loader: - kv  54:               general.quantization_version u32              = 2
llama_model_loader: - kv  55:                          general.file_type u32              = 10
llama_model_loader: - kv  56:                      quantize.imatrix.file str              = GLM-4.7-Flash-GGUF/imatrix_unsloth.gguf
llama_model_loader: - kv  57:                   quantize.imatrix.dataset str              = unsloth_calibration_GLM-4.7-Flash.txt
llama_model_loader: - kv  58:             quantize.imatrix.entries_count u32              = 607
llama_model_loader: - kv  59:              quantize.imatrix.chunks_count u32              = 85
llama_model_loader: - type  f32:  281 tensors
llama_model_loader: - type  f16:    5 tensors
llama_model_loader: - type q8_0:  172 tensors
llama_model_loader: - type q2_K:   92 tensors
llama_model_loader: - type q3_K:   37 tensors
llama_model_loader: - type q4_K:  192 tensors
llama_model_loader: - type q5_K:   44 tensors
llama_model_loader: - type q6_K:   21 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q2_K - Medium
print_info: file size   = 11.06 GiB (3.17 BPW)
load: special_eot_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special_eom_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 154820 ('<|endoftext|>')
load:   - 154827 ('<|user|>')
load:   - 154829 ('<|observation|>')
load: special tokens cache size = 36
load: token to piece cache size = 0.9811 MB
print_info: arch             = deepseek2
print_info: vocab_only       = 1
print_info: no_alloc         = 0
print_info: model type       = ?B
print_info: model params     = 29.94 B
print_info: general.name     = Glm-4.7-Flash
print_info: n_layer_dense_lead   = 0
print_info: n_lora_q             = 0
print_info: n_lora_kv            = 0
print_info: n_embd_head_k_mla    = 0
print_info: n_embd_head_v_mla    = 0
print_info: n_ff_exp             = 0
print_info: n_expert_shared      = 0
print_info: expert_weights_scale = 0.0
print_info: expert_weights_norm  = 0
print_info: expert_gating_func   = unknown
print_info: vocab type       = BPE
print_info: n_vocab          = 154880
print_info: n_merges         = 321649
print_info: BOS token        = 154822 '[gMASK]'
print_info: EOS token        = 154820 '<|endoftext|>'
print_info: EOT token        = 154827 '<|user|>'
print_info: EOM token        = 154829 '<|observation|>'
print_info: UNK token        = 154820 '<|endoftext|>'
print_info: PAD token        = 154821 '[MASK]'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 154838 '<|code_prefix|>'
print_info: FIM SUF token    = 154840 '<|code_suffix|>'
print_info: FIM MID token    = 154839 '<|code_middle|>'
print_info: EOG token        = 154820 '<|endoftext|>'
print_info: EOG token        = 154827 '<|user|>'
print_info: EOG token        = 154829 '<|observation|>'
print_info: max token length = 1024
llama_model_load: vocab only - skipping tensors
time=2026-04-28T10:42:36.902+05:30 level=WARN source=server.go:209 msg="flash attention enabled but not supported by model"
time=2026-04-28T10:42:36.902+05:30 level=WARN source=server.go:240 msg="OLLAMA_FLASH_ATTENTION must be enabled to use a quantized OLLAMA_KV_CACHE_TYPE" type=q8_0
time=2026-04-28T10:42:36.903+05:30 level=INFO source=server.go:444 msg="starting runner" cmd="C:\\Users\\logx\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\logx\\.ollama\\models\\blobs\\sha256-08a432581d3a797af07a021455ada33499185213ad250e85fb26daf3fe34c421 --port 57104"
time=2026-04-28T10:42:36.910+05:30 level=INFO source=sched.go:484 msg="system memory" total="15.7 GiB" free="6.5 GiB" free_swap="18.9 GiB"
time=2026-04-28T10:42:36.910+05:30 level=INFO source=sched.go:491 msg="gpu memory" id=GPU-71ff890f-0c96-f116-24c3-62d930353a93 library=CUDA available="5.1 GiB" free="5.6 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-28T10:42:36.910+05:30 level=INFO source=server.go:511 msg="loading model" "model layers"=48 requested=-1
time=2026-04-28T10:42:36.911+05:30 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="3.7 GiB"
time=2026-04-28T10:42:36.911+05:30 level=INFO source=device.go:245 msg="model weights" device=CPU size="7.2 GiB"
time=2026-04-28T10:42:36.911+05:30 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="597.7 MiB"
time=2026-04-28T10:42:36.911+05:30 level=INFO source=device.go:256 msg="kv cache" device=CPU size="1.1 GiB"
time=2026-04-28T10:42:36.911+05:30 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="562.6 MiB"
time=2026-04-28T10:42:36.911+05:30 level=INFO source=device.go:272 msg="total memory" size="13.2 GiB"
time=2026-04-28T10:42:37.014+05:30 level=INFO source=runner.go:965 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\logx\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4050 Laptop GPU, compute capability 8.9, VMM: yes, ID: GPU-71ff890f-0c96-f116-24c3-62d930353a93
load_backend: loaded CUDA backend from C:\Users\logx\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2026-04-28T10:42:37.079+05:30 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-04-28T10:42:37.079+05:30 level=INFO source=runner.go:1001 msg="Server listening on 127.0.0.1:57104"
time=2026-04-28T10:42:37.088+05:30 level=INFO source=runner.go:895 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:18000 KvCacheType: NumThreads:4 GPULayers:16[ID:GPU-71ff890f-0c96-f116-24c3-62d930353a93 Layers:16(31..46)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-28T10:42:37.089+05:30 level=INFO source=server.go:1364 msg="waiting for llama runner to start responding"
time=2026-04-28T10:42:37.089+05:30 level=INFO source=server.go:1398 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_device_get_memory device GPU-71ff890f-0c96-f116-24c3-62d930353a93 utilizing NVML memory reporting free: 5979209728 total: 6439305216
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4050 Laptop GPU) (0000:01:00.0) - 5702 MiB free
llama_model_loader: loaded meta data with 60 key-value pairs and 844 tensors from C:\Users\logx\.ollama\models\blobs\sha256-08a432581d3a797af07a021455ada33499185213ad250e85fb26daf3fe34c421 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                     general.sampling.top_p f32              = 0.950000
llama_model_loader: - kv   3:                      general.sampling.temp f32              = 1.000000
llama_model_loader: - kv   4:                               general.name str              = Glm-4.7-Flash
llama_model_loader: - kv   5:                           general.basename str              = Glm-4.7-Flash
llama_model_loader: - kv   6:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   7:                         general.size_label str              = 64x2.6B
llama_model_loader: - kv   8:                            general.license str              = mit
llama_model_loader: - kv   9:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv  10:                   general.base_model.count u32              = 1
llama_model_loader: - kv  11:                  general.base_model.0.name str              = GLM 4.7 Flash
llama_model_loader: - kv  12:          general.base_model.0.organization str              = Zai Org
llama_model_loader: - kv  13:              general.base_model.0.repo_url str              = https://huggingface.co/zai-org/GLM-4....
llama_model_loader: - kv  14:                               general.tags arr[str,2]       = ["unsloth", "text-generation"]
llama_model_loader: - kv  15:                          general.languages arr[str,2]       = ["en", "zh"]
llama_model_loader: - kv  16:                      deepseek2.block_count u32              = 47
llama_model_loader: - kv  17:                   deepseek2.context_length u32              = 202752
llama_model_loader: - kv  18:                 deepseek2.embedding_length u32              = 2048
llama_model_loader: - kv  19:              deepseek2.feed_forward_length u32              = 10240
llama_model_loader: - kv  20:             deepseek2.attention.head_count u32              = 20
llama_model_loader: - kv  21:          deepseek2.attention.head_count_kv u32              = 1
llama_model_loader: - kv  22:                   deepseek2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  23: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  24:                deepseek2.expert_used_count u32              = 4
llama_model_loader: - kv  25:               deepseek2.expert_group_count u32              = 1
llama_model_loader: - kv  26:          deepseek2.expert_group_used_count u32              = 1
llama_model_loader: - kv  27:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  28:        deepseek2.leading_dense_block_count u32              = 1
llama_model_loader: - kv  29:                       deepseek2.vocab_size u32              = 154880
llama_model_loader: - kv  30:            deepseek2.attention.q_lora_rank u32              = 768
llama_model_loader: - kv  31:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  32:             deepseek2.attention.key_length u32              = 576
llama_model_loader: - kv  33:           deepseek2.attention.value_length u32              = 512
llama_model_loader: - kv  34:         deepseek2.attention.key_length_mla u32              = 256
llama_model_loader: - kv  35:       deepseek2.attention.value_length_mla u32              = 256
llama_model_loader: - kv  36:       deepseek2.expert_feed_forward_length u32              = 1536
llama_model_loader: - kv  37:                     deepseek2.expert_count u32              = 64
llama_model_loader: - kv  38:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  39:             deepseek2.expert_weights_scale f32              = 1.800000
llama_model_loader: - kv  40:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  41:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  42:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  43:                         tokenizer.ggml.pre str              = glm4
llama_model_loader: - kv  44:                      tokenizer.ggml.tokens arr[str,154880]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  45:                  tokenizer.ggml.token_type arr[i32,154880]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  46:                      tokenizer.ggml.merges arr[str,321649]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  47:                tokenizer.ggml.eos_token_id u32              = 154820
llama_model_loader: - kv  48:            tokenizer.ggml.padding_token_id u32              = 154821
llama_model_loader: - kv  49:                tokenizer.ggml.bos_token_id u32              = 154822
llama_model_loader: - kv  50:                tokenizer.ggml.eot_token_id u32              = 154827
llama_model_loader: - kv  51:            tokenizer.ggml.unknown_token_id u32              = 154820
llama_model_loader: - kv  52:                tokenizer.ggml.eom_token_id u32              = 154829
llama_model_loader: - kv  53:                    tokenizer.chat_template str              = [gMASK]<sop>\n{%- if tools -%}\n<|syste...
llama_model_loader: - kv  54:               general.quantization_version u32              = 2
llama_model_loader: - kv  55:                          general.file_type u32              = 10
llama_model_loader: - kv  56:                      quantize.imatrix.file str              = GLM-4.7-Flash-GGUF/imatrix_unsloth.gguf
llama_model_loader: - kv  57:                   quantize.imatrix.dataset str              = unsloth_calibration_GLM-4.7-Flash.txt
llama_model_loader: - kv  58:             quantize.imatrix.entries_count u32              = 607
llama_model_loader: - kv  59:              quantize.imatrix.chunks_count u32              = 85
llama_model_loader: - type  f32:  281 tensors
llama_model_loader: - type  f16:    5 tensors
llama_model_loader: - type q8_0:  172 tensors
llama_model_loader: - type q2_K:   92 tensors
llama_model_loader: - type q3_K:   37 tensors
llama_model_loader: - type q4_K:  192 tensors
llama_model_loader: - type q5_K:   44 tensors
llama_model_loader: - type q6_K:   21 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q2_K - Medium
print_info: file size   = 11.06 GiB (3.17 BPW)
load: special_eot_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special_eom_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 154820 ('<|endoftext|>')
load:   - 154827 ('<|user|>')
load:   - 154829 ('<|observation|>')
load: special tokens cache size = 36
load: token to piece cache size = 0.9811 MB
print_info: arch             = deepseek2
print_info: vocab_only       = 0
print_info: no_alloc         = 0
print_info: n_ctx_train      = 202752
print_info: n_embd           = 2048
print_info: n_embd_inp       = 2048
print_info: n_layer          = 47
print_info: n_head           = 20
print_info: n_head_kv        = 1
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 576
print_info: n_embd_head_v    = 512
print_info: n_gqa            = 20
print_info: n_embd_k_gqa     = 576
print_info: n_embd_v_gqa     = 512
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 10240
print_info: n_expert         = 64
print_info: n_expert_used    = 4
print_info: n_expert_groups  = 1
print_info: n_group_used     = 1
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 202752
print_info: rope_yarn_log_mul= 0.0000
print_info: rope_finetuned   = unknown
print_info: model type       = ?B
print_info: model params     = 29.94 B
print_info: general.name     = Glm-4.7-Flash
print_info: n_layer_dense_lead   = 1
print_info: n_lora_q             = 768
print_info: n_lora_kv            = 512
print_info: n_embd_head_k_mla    = 256
print_info: n_embd_head_v_mla    = 256
print_info: n_ff_exp             = 1536
print_info: n_expert_shared      = 1
print_info: expert_weights_scale = 1.8
print_info: expert_weights_norm  = 1
print_info: expert_gating_func   = sigmoid
print_info: vocab type       = BPE
print_info: n_vocab          = 154880
print_info: n_merges         = 321649
print_info: BOS token        = 154822 '[gMASK]'
print_info: EOS token        = 154820 '<|endoftext|>'
print_info: EOT token        = 154827 '<|user|>'
print_info: EOM token        = 154829 '<|observation|>'
print_info: UNK token        = 154820 '<|endoftext|>'
print_info: PAD token        = 154821 '[MASK]'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 154838 '<|code_prefix|>'
print_info: FIM SUF token    = 154840 '<|code_suffix|>'
print_info: FIM MID token    = 154839 '<|code_middle|>'
print_info: EOG token        = 154820 '<|endoftext|>'
print_info: EOG token        = 154827 '<|user|>'
print_info: EOG token        = 154829 '<|observation|>'
print_info: max token length = 1024
load_tensors: loading model tensors, this can take a while... (mmap = false)
ggml_cuda_host_malloc: failed to allocate 7327.33 MiB of pinned memory: out of memory
load_tensors: offloading 16 repeating layers to GPU
load_tensors: offloaded 16/48 layers to GPU
load_tensors:          CPU model buffer size =   170.16 MiB
load_tensors:        CUDA0 model buffer size =  3831.49 MiB
load_tensors:          CPU model buffer size =  7327.33 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 18176
llama_context: n_ctx_seq     = 18176
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = disabled
llama_context: kv_unified    = false
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_seq (18176) < n_ctx_train (202752) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     0.60 MiB
llama_kv_cache:        CPU KV buffer size =  1169.28 MiB
llama_kv_cache:      CUDA0 KV buffer size =   603.50 MiB
llama_kv_cache: size = 1772.78 MiB ( 18176 cells,  47 layers,  1/1 seqs), K (f16):  938.53 MiB, V (f16):  834.25 MiB
llama_context:      CUDA0 compute buffer size =   838.34 MiB
llama_context:  CUDA_Host compute buffer size =    41.51 MiB
llama_context: graph nodes  = 3504
llama_context: graph splits = 619 (with bs=512), 3 (with bs=1)
time=2026-04-28T10:42:52.162+05:30 level=INFO source=server.go:1402 msg="llama runner started in 15.25 seconds"
time=2026-04-28T10:42:52.173+05:30 level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-28T10:42:52.176+05:30 level=INFO source=server.go:1364 msg="waiting for llama runner to start responding"
time=2026-04-28T10:42:52.181+05:30 level=INFO source=server.go:1402 msg="llama runner started in 15.28 seconds"
time=2026-04-28T10:43:58.589+05:30 level=INFO source=server.go:444 msg="starting runner" cmd="C:\\Users\\logx\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 62091"
time=2026-04-28T10:43:59.572+05:30 level=INFO source=server.go:444 msg="starting runner" cmd="C:\\Users\\logx\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 62110"
GiteaMirror added the feature request label 2026-05-05 03:34:34 -05:00

@gotnochill815-web commented on GitHub (Apr 28, 2026):

Could you clarify whether this is due to:

  • Model-side limitation (GLM4.7-flash architecture)
  • Backend limitation (Ollama runtime not supporting FA for this model)
  • Or missing integration (e.g., FlashAttention v2 kernels not wired in)

If it's an integration gap, I can look into adding support for FA or at least enabling KV cache fallback.


@logxdx commented on GitHub (Apr 28, 2026):

I compared the model uploaded by Unsloth with the one on the Ollama library; the only difference I can see is the architecture name.
The Unsloth model uses `deepseek2` while the Ollama library model uses `glmmoelite`.
I checked the repo, and `glmmoelite` supports FA.

I'm currently trying to edit the arch name on the unsloth model to see if FA works. Will let you know here if that works!
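For verifying an edit like that: `general.architecture` is an ordinary string key at the very start of the GGUF metadata (kv 0 in the dumps above). Here is a minimal Go sketch that reads the header and the first key-value pair, assuming the GGUF v3 little-endian layout shown in the logs (error handling elided for brevity):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// readString reads a GGUF string: uint64 length followed by raw bytes.
func readString(f io.Reader) string {
	var n uint64
	binary.Read(f, binary.LittleEndian, &n)
	buf := make([]byte, n)
	io.ReadFull(f, buf)
	return string(buf)
}

func main() {
	f, err := os.Open(os.Args[1]) // path to the GGUF blob
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var hdr struct {
		Magic   [4]byte // "GGUF"
		Version uint32  // 3 for the files in this issue
		Tensors uint64
		KVs     uint64
	}
	binary.Read(f, binary.LittleEndian, &hdr)
	fmt.Printf("magic=%s version=%d tensors=%d kvs=%d\n",
		string(hdr.Magic[:]), hdr.Version, hdr.Tensors, hdr.KVs)

	key := readString(f) // kv 0; "general.architecture" in the dumps above
	var typ uint32
	binary.Read(f, binary.LittleEndian, &typ)
	if key == "general.architecture" && typ == 8 { // 8 = GGUF string type
		fmt.Printf("%s = %s\n", key, readString(f))
	} else {
		fmt.Printf("first key is %q (type %d)\n", key, typ)
	}
}
```

Note that overwriting the value in place only works if the new name has the same byte length ("glmmoelite" is one byte longer than "deepseek2"), since the string carries a length prefix and everything after it shifts otherwise; metadata editors generally rewrite the whole file.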


@gotnochill815-web commented on GitHub (Apr 28, 2026):

That’s a really useful observation.

If simply changing the architecture name from deepseek2 to glmmoelite enables FA, then it looks like FA support is being gated purely by the architecture string rather than actual model capability.

This suggests the issue is in how Ollama maps model architectures to attention backends.

Likely:

  • glmmoelite → routed to FA-enabled path
  • deepseek2 → routed to fallback attention (no FA, no quantized KV cache)

If that’s the case, would it make sense to:

  1. Extend FA support to deepseek2 in the same way as glmmoelite
  2. Or make the backend select FA based on capability instead of architecture name

Curious to see your results from renaming the arch — that would confirm this is purely a backend dispatch issue.
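If dispatch really is keyed on the architecture string, the gate reduces to a name lookup like the following. This is a hypothetical sketch, not Ollama's actual code; the map contents are illustrative assumptions.

```go
package main

import "fmt"

// Hypothetical allowlist: architectures for which the runner enables flash
// attention. The contents are assumptions for illustration, not Ollama's
// real list.
var flashAttnArchs = map[string]bool{
	"glmmoelite": true,
	// "deepseek2" absent: requesting FA would log
	// "flash attention enabled but not supported by model" and fall back.
}

func flashAttentionSupported(arch string) bool {
	return flashAttnArchs[arch]
}

func main() {
	for _, arch := range []string{"glmmoelite", "deepseek2"} {
		fmt.Printf("%-11s supported=%v\n", arch, flashAttentionSupported(arch))
	}
}
```

Capability-based selection, option 2 in the list above, would instead test the properties the FA kernels actually need (head sizes, cache layout) rather than the name.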


@logxdx commented on GitHub (Apr 28, 2026):

It didn't work; the Ollama version of the model has some metadata differences from the Unsloth version. The `deepseek2` arch does not support FA.


@gotnochill815-web commented on GitHub (Apr 28, 2026):

Got it, that helps.
Since renaming the arch didn't enable FA, it seems there's some metadata/config check blocking it beyond just the architecture.
Do you know what conditions Ollama uses to enable Flash Attention, e.g. attention type, head dims, KV cache layout, kernel compatibility?
I'm trying to understand whether `deepseek2` can be made FA-compatible or whether it fundamentally isn't supported.
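One concrete candidate is visible in the print_info lines above: n_embd_head_k = 576 but n_embd_head_v = 512. llama.cpp's fused flash-attention path has historically required equal K and V head sizes, so the asymmetric MLA head dims are a plausible blocker; this is an assumption, not a confirmed diagnosis. A trivial check under that assumption:

```go
package main

import "fmt"

func main() {
	// Values taken from the print_info lines in the server log above.
	nEmbdHeadK, nEmbdHeadV := 576, 512

	// Assumption: the fused FA kernel requires matching K/V head sizes,
	// as llama.cpp's flash-attention path historically did.
	if nEmbdHeadK != nEmbdHeadV {
		fmt.Printf("flash attention unavailable: n_embd_head_k (%d) != n_embd_head_v (%d)\n",
			nEmbdHeadK, nEmbdHeadV)
	}
}
```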


@logxdx commented on GitHub (Apr 29, 2026):

The metadata for the Unsloth and Ollama models differs for some keys in their GGUFs. That must be the cause. I've stopped looking further for now.


@gotnochill815-web commented on GitHub (Apr 29, 2026):

Got it, that makes sense: it looks like a GGUF metadata difference is the cause, not the architecture.
Do you know which keys differ between the Unsloth and Ollama models?
If we can identify the metadata fields controlling FA (like attention type or KV layout), maybe we can align them and enable FA for `deepseek2`. A sketch for dumping the keys follows below.
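To find the differing keys, it is enough to dump every metadata key (plus string values) from both blobs and diff the output; llama.cpp's gguf-py package also ships dump tooling. Here is a self-contained Go sketch, assuming the GGUF v3 little-endian layout shown in the dumps above:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// Scalar GGUF value sizes in bytes, keyed by type id
// (8 = string and 9 = array are variable-length and handled separately).
var scalarSize = map[uint32]int64{
	0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 1, 10: 8, 11: 8, 12: 8,
}

// readString reads a GGUF string: uint64 length followed by raw bytes.
func readString(f io.Reader) string {
	var n uint64
	binary.Read(f, binary.LittleEndian, &n)
	buf := make([]byte, n)
	io.ReadFull(f, buf)
	return string(buf)
}

// skipValue advances past one value of the given type without decoding it.
func skipValue(f *os.File, typ uint32) {
	switch typ {
	case 8: // string
		readString(f)
	case 9: // array: element type, count, then packed elements
		var elem uint32
		var count uint64
		binary.Read(f, binary.LittleEndian, &elem)
		binary.Read(f, binary.LittleEndian, &count)
		if sz, ok := scalarSize[elem]; ok {
			f.Seek(sz*int64(count), io.SeekCurrent)
			return
		}
		for i := uint64(0); i < count; i++ {
			skipValue(f, elem) // arrays of strings (or nested arrays)
		}
	default:
		sz, ok := scalarSize[typ]
		if !ok {
			panic(fmt.Sprintf("unknown GGUF value type %d", typ))
		}
		f.Seek(sz, io.SeekCurrent)
	}
}

func main() {
	f, err := os.Open(os.Args[1]) // path to a GGUF blob
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var hdr struct {
		Magic   [4]byte
		Version uint32
		Tensors uint64
		KVs     uint64
	}
	binary.Read(f, binary.LittleEndian, &hdr)

	for i := uint64(0); i < hdr.KVs; i++ {
		key := readString(f)
		var typ uint32
		binary.Read(f, binary.LittleEndian, &typ)
		if typ == 8 {
			fmt.Printf("%s = %s\n", key, readString(f))
		} else {
			fmt.Printf("%s (type %d)\n", key, typ)
			skipValue(f, typ)
		}
	}
}
```

Running it on the Unsloth and Ollama blobs and diffing the two listings should surface exactly which keys differ; the kv dumps above suggest, for example, the attention.*_mla and rope keys as candidates to compare first.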

Reference: github-starred/ollama#72164