[GH-ISSUE #7433] ollama ROCm with multiple GPUs: segfault when running a model larger than one GPU's memory capacity #51237

Closed
opened 2026-04-28 18:58:04 -05:00 by GiteaMirror · 1 comment

Originally created by @rhudock on GitHub (Oct 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7433

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Segmentation fault when trying to run a model that must be split across multiple ROCm GPUs

Command

ollama run llama3.1:70b

Error

Error: llama runner process has terminated: signal: segmentation fault (core dumped)

Dmesg

[ 2857.607412] ollama_llama_se[18031]: segfault at 18 ip 00007e8f7e127b66 sp 00007ffd70563640 error 4 in libamdhip64.so.6.1.60102[7e8f7de21000+371000] likely on CPU 13 (core 17, socket 0)
[ 2857.607438] Code: 0d 4c 89 ef e8 3b 80 00 00 e9 bf fd ff ff 83 87 98 01 00 00 01 e9 b3 fd ff ff 48 89 c5 e9 e6 4d d3 ff 66 90 53 48 89 fb 89 d7 <4c> 8b 43 18 4d 85 c0 74 41 4c 8b 4b 20 31 c9 31 c0 eb 12 0f 1f 80
robert@robert-mercury:~$
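
As a diagnostic sketch (not part of the original report): enabling verbose logging can help narrow down where in libamdhip64 the crash occurs. OLLAMA_DEBUG is ollama's documented debug switch; the serve invocation is commented out because it is environment-specific.

```shell
# Hedged sketch: turn on ollama's verbose debug logging before reproducing.
export OLLAMA_DEBUG=1
# ollama serve            # reproduce the segfault with debug logs enabled
echo "$OLLAMA_DEBUG"      # prints: 1
```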

Logs from Ollama

Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: loaded meta data with 29 key-value pairs and 724 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 (version GGUF V3 (latest))
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 0: general.architecture str = llama
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 1: general.type str = model
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 70B Instruct
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 3: general.finetune str = Instruct
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 5: general.size_label str = 70B
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 6: general.license str = llama3.1
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 9: llama.block_count u32 = 80
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 10: llama.context_length u32 = 131072
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 11: llama.embedding_length u32 = 8192
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 12: llama.feed_forward_length u32 = 28672
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 13: llama.attention.head_count u32 = 64
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 17: general.file_type u32 = 2
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 18: llama.vocab_size u32 = 128256
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - kv 28: general.quantization_version u32 = 2
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - type f32: 162 tensors
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - type q4_0: 561 tensors
Oct 30 19:29:30 robert-mercury ollama[15528]: llama_model_loader: - type q6_K: 1 tensors
Oct 30 19:29:31 robert-mercury ollama[15528]: time=2024-10-30T19:29:31.057-04:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_vocab: special tokens cache size = 256
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_vocab: token to piece cache size = 0.7999 MB
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: format = GGUF V3 (latest)
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: arch = llama
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: vocab type = BPE
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_vocab = 128256
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_merges = 280147
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: vocab_only = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_ctx_train = 131072
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_embd = 8192
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_layer = 80
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_head = 64
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_head_kv = 8
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_rot = 128
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_swa = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_embd_head_k = 128
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_embd_head_v = 128
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_gqa = 8
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_embd_k_gqa = 1024
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_embd_v_gqa = 1024
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: f_norm_eps = 0.0e+00
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: f_logit_scale = 0.0e+00
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_ff = 28672
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_expert = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_expert_used = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: causal attn = 1
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: pooling type = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: rope type = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: rope scaling = linear
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: freq_base_train = 500000.0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: freq_scale_train = 1
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: n_ctx_orig_yarn = 131072
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: rope_finetuned = unknown
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: ssm_d_conv = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: ssm_d_inner = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: ssm_d_state = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: ssm_dt_rank = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: model type = 70B
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: model ftype = Q4_0
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: model params = 70.55 B
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: model size = 37.22 GiB (4.53 BPW)
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: general.name = Meta Llama 3.1 70B Instruct
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: LF token = 128 'Ä'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
Oct 30 19:29:31 robert-mercury ollama[15528]: llm_load_print_meta: max token length = 256
Oct 30 19:29:35 robert-mercury ollama[15528]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Oct 30 19:29:35 robert-mercury ollama[15528]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Oct 30 19:29:35 robert-mercury ollama[15528]: ggml_cuda_init: found 6 ROCm devices:
Oct 30 19:29:35 robert-mercury ollama[15528]: Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Oct 30 19:29:35 robert-mercury ollama[15528]: Device 1: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Oct 30 19:29:35 robert-mercury ollama[15528]: Device 2: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Oct 30 19:29:35 robert-mercury ollama[15528]: Device 3: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Oct 30 19:29:35 robert-mercury ollama[15528]: Device 4: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Oct 30 19:29:35 robert-mercury ollama[15528]: Device 5: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Oct 30 19:29:35 robert-mercury ollama[15528]: llm_load_tensors: ggml ctx size = 2.37 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: time=2024-10-30T19:29:37.525-04:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
Oct 30 19:29:37 robert-mercury ollama[15528]: time=2024-10-30T19:29:37.911-04:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: offloading 80 repeating layers to GPU
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: offloading non-repeating layers to GPU
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: offloaded 81/81 layers to GPU
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: ROCm0 buffer size = 6426.88 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: ROCm1 buffer size = 6426.88 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: ROCm2 buffer size = 6426.88 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: ROCm3 buffer size = 5967.82 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: ROCm4 buffer size = 5967.82 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: ROCm5 buffer size = 6330.74 MiB
Oct 30 19:29:37 robert-mercury ollama[15528]: llm_load_tensors: CPU buffer size = 563.62 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: n_ctx = 8192
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: n_batch = 512
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: n_ubatch = 512
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: flash_attn = 0
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: freq_base = 500000.0
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: freq_scale = 1
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_kv_cache_init: ROCm0 KV buffer size = 448.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_kv_cache_init: ROCm1 KV buffer size = 448.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_kv_cache_init: ROCm2 KV buffer size = 448.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_kv_cache_init: ROCm3 KV buffer size = 416.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_kv_cache_init: ROCm4 KV buffer size = 416.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_kv_cache_init: ROCm5 KV buffer size = 384.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm_Host output buffer size = 2.08 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm0 compute buffer size = 1216.01 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm1 compute buffer size = 1216.01 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm2 compute buffer size = 1216.01 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm3 compute buffer size = 1216.01 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm4 compute buffer size = 1216.01 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm5 compute buffer size = 1216.02 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: ROCm_Host compute buffer size = 80.02 MiB
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: graph nodes = 2566
Oct 30 19:29:51 robert-mercury ollama[15528]: llama_new_context_with_model: graph splits = 7
Oct 30 19:29:51 robert-mercury ollama[15528]: time=2024-10-30T19:29:51.816-04:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
Oct 30 19:29:52 robert-mercury ollama[15528]: time=2024-10-30T19:29:52.067-04:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: signal: segmentation fault (core dumped)"
Oct 30 19:29:52 robert-mercury ollama[15528]: [GIN] 2024/10/30 - 19:29:52 | 500 | 21.309799963s | 127.0.0.1 | POST "/api/generate"
Oct 30 19:29:57 robert-mercury ollama[15528]: time=2024-10-30T19:29:57.068-04:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.000938426 model=/usr/share/ollama/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574
Oct 30 19:29:57 robert-mercury ollama[15528]: time=2024-10-30T19:29:57.317-04:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.250407502 model=/usr/share/ollama/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574
Oct 30 19:29:57 robert-mercury ollama[15528]: time=2024-10-30T19:29:57.567-04:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.500792457 model=/usr/share/ollama/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574
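
A hypothetical mitigation to try in multi-GPU ROCm setups (an assumption here, not something tested in this issue) is to shrink the set of devices visible to the runner so the model is split across fewer cards. ROCR_VISIBLE_DEVICES is a standard ROCm runtime environment variable; HIP_VISIBLE_DEVICES behaves similarly at the HIP level.

```shell
# Hypothetical workaround sketch (untested for this crash): restrict which
# ROCm GPUs the llama runner can see, so fewer devices join the model split.
export ROCR_VISIBLE_DEVICES=0,1,2    # expose only three of the six 7900 XTXs
# ollama run llama3.1:70b            # retry the model with fewer devices
echo "$ROCR_VISIBLE_DEVICES"         # prints: 0,1,2
```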

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.3.14

GiteaMirror added the needs more info, bug, amd labels 2026-04-28 18:58:05 -05:00

@dhiltgen commented on GitHub (Feb 25, 2025):

Is this still a problem with the latest versions? I'm trying to determine if #7378 is still useful.


Reference: github-starred/ollama#51237