[GH-ISSUE #10424] GLM-4-0414 32B returns GGGGGG[...] on generation with prompts >=64 tokens on ROCm backend #68909

Open
opened 2026-05-04 15:37:39 -05:00 by GiteaMirror · 0 comments

Originally created by @AdamNiederer on GitHub (Apr 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10424

What is the issue?

This occurs with either of these quants (both are the fixed versions created with the recent llama.cpp patches):

https://huggingface.co/bartowski/THUDM_GLM-4-32B-0414-GGUF/blob/main/THUDM_GLM-4-32B-0414-IQ4_XS.gguf
https://huggingface.co/bartowski/THUDM_GLM-4-32B-0414-GGUF/blob/main/THUDM_GLM-4-32B-0414-IQ3_M.gguf

The following two prompts are identical except that the latter has a single trailing space, bringing it to exactly 64 tokens. The same malformed output GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG is returned for any prompt >= 64 tokens.

>>> 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8
The sequence you've provided is a repetition of the numbers from 0 to 8, followed by 9, and then it starts over again from 0 up to 8. It looks like this:

0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8

This sequence could represent a simple counting pattern that cycles through the numbers 0 to 9, with the last cycle stopping at 8 instead of completing the full set up to 9. If you have 
any specific questions about this sequence or if there's something else I can assist you with, feel free to let me know!

total duration:       9.302270002s
load duration:        3.40446773s
prompt eval count:    63 token(s)
prompt eval duration: 42.921759ms
prompt eval rate:     1467.79 tokens/s
eval count:           171 token(s)
eval duration:        5.853662112s
eval rate:            29.21 tokens/s
>>> 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
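
For anyone reproducing this outside the interactive CLI, here is a minimal sketch against the local Ollama HTTP API. The endpoint and the /api/generate request fields are Ollama's documented defaults; the model tag is a placeholder for whichever quant you created locally, and jq is only used to trim the output:

$ # Send the two near-identical prompts; the second has the trailing space.
$ PROMPT='0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8'
$ for p in "$PROMPT" "$PROMPT "; do
    curl -s http://localhost:11434/api/generate \
      -d "{\"model\": \"glm-4-0414:32b-iq4_XS\", \"prompt\": \"$p\", \"stream\": false}" \
      | jq '{prompt_eval_count, response: .response[:40]}'
  done

On an affected setup the first request reports prompt_eval_count 63 and a normal reply, while the second crosses the 64-token boundary and returns only Gs.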

The issue does not occur on the 9B model, or when running on the CPU. It appears to be unaffected by the following (see the sketch after this list for how these can be toggled):

  • Flash attention
  • KV cache quantization
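
For reference, these can be toggled via the standard Ollama server environment variables; a minimal sketch, where q8_0 is just an illustrative cache type, and note that Ollama's KV cache quantization requires flash attention to be enabled:

$ # Flash attention enabled:
$ OLLAMA_FLASH_ATTENTION=1 ollama serve
$ # Flash attention plus a quantized KV cache:
$ OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve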

Other observations:

  • The statistics readout from /set verbose does not display when this happens

The same quant does work on llama.cpp; running llama-cli for comparison:

$ llama-cli -m glm-4-0414\:32b-iq4_XS.gguf -ngl 999
> 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 - Hello, who are you?  

I'm an AI language model. How can I assist you today?

Relevant log output

llama_model_load: vocab only - skipping tensors
time=2025-04-26T17:11:16.066-04:00 level=INFO source=server.go:405 msg="starting llama server" cmd="ollama runner --model /var/lib/ollama/blobs/sha256-a185eee0f1566f199f15da2fa37ae108440267d0358878452471fe0b6740df75 --ctx-size 8192 --batch-size 512 --n-gpu-layers 62 --threads 8 --parallel 2 --port 35069"
time=2025-04-26T17:11:16.066-04:00 level=INFO source=sched.go:463 msg="loaded runners" count=1
time=2025-04-26T17:11:16.066-04:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-26T17:11:16.066-04:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-26T17:11:16.071-04:00 level=INFO source=runner.go:853 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon RX 7900 XT, gfx1100 (0x1100), VMM: no, Wave Size: 32
load_backend: loaded ROCm backend from /ollama/build/lib/ollama/libggml-hip.so
load_backend: loaded CPU backend from /ollama/build/lib/ollama/libggml-cpu-icelake.so
time=2025-04-26T17:11:16.688-04:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 7900 XT) - 20346 MiB free
time=2025-04-26T17:11:16.688-04:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:35069"
llama_model_loader: loaded meta data with 37 key-value pairs and 613 tensors from /var/lib/ollama/blobs/sha256-a185eee0f1566f199f15da2fa37ae108440267d0358878452471fe0b6740df75 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = glm4
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = GLM 4 32B 0414
llama_model_loader: - kv   3:                            general.version str              = 0414
llama_model_loader: - kv   4:                           general.basename str              = GLM-4
llama_model_loader: - kv   5:                         general.size_label str              = 32B
llama_model_loader: - kv   6:                            general.license str              = mit
llama_model_loader: - kv   7:                               general.tags arr[str,1]       = ["text-generation"]
llama_model_loader: - kv   8:                          general.languages arr[str,2]       = ["zh", "en"]
llama_model_loader: - kv   9:                           glm4.block_count u32              = 61
llama_model_loader: - kv  10:                        glm4.context_length u32              = 32768
llama_model_loader: - kv  11:                      glm4.embedding_length u32              = 6144
llama_model_loader: - kv  12:                   glm4.feed_forward_length u32              = 23040
llama_model_loader: - kv  13:                  glm4.attention.head_count u32              = 48
llama_model_loader: - kv  14:               glm4.attention.head_count_kv u32              = 2
llama_model_loader: - kv  15:                        glm4.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  16:      glm4.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                  glm4.attention.key_length u32              = 128
llama_model_loader: - kv  18:                glm4.attention.value_length u32              = 128
llama_model_loader: - kv  19:                  glm4.rope.dimension_count u32              = 64
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = glm4
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,151552]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,151552]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,318088]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.eos_token_id u32              = 151336
llama_model_loader: - kv  26:            tokenizer.ggml.padding_token_id u32              = 151329
llama_model_loader: - kv  27:                tokenizer.ggml.eot_token_id u32              = 151336
llama_model_loader: - kv  28:            tokenizer.ggml.unknown_token_id u32              = 151329
llama_model_loader: - kv  29:                tokenizer.ggml.bos_token_id u32              = 151331
llama_model_loader: - kv  30:                    tokenizer.chat_template str              = [gMASK]<sop>{%- if tools -%}<|system|...
llama_model_loader: - kv  31:               general.quantization_version u32              = 2
llama_model_loader: - kv  32:                          general.file_type u32              = 30
llama_model_loader: - kv  33:                      quantize.imatrix.file str              = /models_out/GLM-4-32B-0414-GGUF/THUDM...
llama_model_loader: - kv  34:                   quantize.imatrix.dataset str              = /training_dir/calibration_datav3.txt
llama_model_loader: - kv  35:             quantize.imatrix.entries_count i32              = 366
llama_model_loader: - kv  36:              quantize.imatrix.chunks_count i32              = 125
llama_model_loader: - type  f32:  245 tensors
llama_model_loader: - type q5_K:   61 tensors
llama_model_loader: - type q6_K:    1 tensors
llama_model_loader: - type iq4_xs:  306 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = IQ4_XS - 4.25 bpw
print_info: file size   = 16.38 GiB (4.32 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 14
load: token to piece cache size = 0.9710 MB
print_info: arch             = glm4
print_info: vocab_only       = 0
print_info: n_ctx_train      = 32768
print_info: n_embd           = 6144
print_info: n_layer          = 61
print_info: n_head           = 48
print_info: n_head_kv        = 2
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 24
print_info: n_embd_k_gqa     = 256
print_info: n_embd_v_gqa     = 256
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 23040
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 32768
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 32B
print_info: model params     = 32.57 B
print_info: general.name     = GLM 4 32B 0414
print_info: vocab type       = BPE
print_info: n_vocab          = 151552
print_info: n_merges         = 318088
print_info: BOS token        = 151331 '[gMASK]'
print_info: EOS token        = 151336 '<|user|>'
print_info: EOT token        = 151336 '<|user|>'
print_info: UNK token        = 151329 '<|endoftext|>'
print_info: PAD token        = 151329 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 151329 '<|endoftext|>'
print_info: EOG token        = 151336 '<|user|>'
print_info: max token length = 1024
load_tensors: loading model tensors, this can take a while... (mmap = true)
time=2025-04-26T17:11:16.819-04:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
load_tensors: offloading 61 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 62/62 layers to GPU
load_tensors:        ROCm0 model buffer size = 16303.48 MiB
load_tensors:   CPU_Mapped model buffer size =   471.75 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 2
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 1024
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
llama_context:  ROCm_Host  output buffer size =     1.20 MiB
init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 61, can_shift = 1
init:      ROCm0 KV buffer size =   488.00 MiB
llama_context: KV self size  =  488.00 MiB, K (f16):  244.00 MiB, V (f16):  244.00 MiB
llama_context:      ROCm0 compute buffer size =   832.00 MiB
llama_context:  ROCm_Host compute buffer size =    28.01 MiB
llama_context: graph nodes  = 2507
llama_context: graph splits = 2
time=2025-04-26T17:11:18.574-04:00 level=INFO source=server.go:619 msg="llama runner started in 2.51 seconds"
[GIN] 2025/04/26 - 17:11:20 | 200 |  4.773881695s |       127.0.0.1 | POST     "/api/chat

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

5cfc1c39f3d5822b0c0906f863f6df45c141c33b

GiteaMirror added the bug label 2026-05-04 15:37:39 -05:00