[GH-ISSUE #5629] Crashing or gibberish output on 3x Radeon GPUs #65549

Open
opened 2026-05-03 21:39:32 -05:00 by GiteaMirror · 23 comments
Owner

Originally created by @darwinvelez58 on GitHub (Jul 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5629

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

While running mixtral:8x7b-instruct-v0.1-q4_K_M on my physical machine with 3x Radeon RX 7900 XTX GPUs, I got this error:

[root@5dc6ecf27031 /]# ollama run mixtral:8x7b-instruct-v0.1-q4_K_M
Error: llama runner process has terminated: signal: segmentation fault (core dumped) 
[root@5dc6ecf27031 /]# 
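For reference, the same failure can be triggered over the HTTP API, which makes it easy to script repeated repro attempts. A minimal sketch (the prompt text is illustrative; the host port comes from the docker command further down):

curl http://localhost:11442/api/chat -d '{
  "model": "mixtral:8x7b-instruct-v0.1-q4_K_M",
  "messages": [{"role": "user", "content": "hello"}],
  "stream": false
}'

Once the runner segfaults this should come back as a 500 with the same "llama runner process has terminated" message, matching the POST "/api/chat" | 500 entry in the logs below.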

Logs:

[GIN] 2024/07/11 - 13:22:44 | 200 |       16.23µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/11 - 13:22:44 | 200 |    7.724554ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-11T13:22:44.297Z level=INFO source=sched.go:754 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-3a17f7cde150070bbc815645693fb93c311cc42e7deaf198364acadcf08458f8 library=rocm parallel=4 required="33.2 GiB"
time=2024-07-11T13:22:44.298Z level=INFO source=memory.go:309 msg="offload to rocm" layers.requested=-1 layers.model=33 layers.offload=33 layers.split=11,11,11 memory.available="[24.0 GiB 24.0 GiB 24.0 GiB]" memory.required.full="33.2 GiB" memory.required.partial="33.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[11.3 GiB 11.3 GiB 10.6 GiB]" memory.weights.total="25.5 GiB" memory.weights.repeating="25.4 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="1.3 GiB" memory.graph.partial="1.3 GiB"
time=2024-07-11T13:22:44.299Z level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama1419561683/runners/rocm_v60101/ollama_llama_server --model /root/.ollama/models/blobs/sha256-3a17f7cde150070bbc815645693fb93c311cc42e7deaf198364acadcf08458f8 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --tensor-split 11,11,11 --tensor-split 11,11,11 --port 41695"
time=2024-07-11T13:22:44.299Z level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-11T13:22:44.299Z level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-11T13:22:44.299Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="140134008951616" timestamp=1720704164
INFO [main] system info | n_threads=16 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="140134008951616" timestamp=1720704164 total_threads=32
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="41695" tid="140134008951616" timestamp=1720704164
llama_model_loader: loaded meta data with 26 key-value pairs and 995 tensors from /root/.ollama/models/blobs/sha256-3a17f7cde150070bbc815645693fb93c311cc42e7deaf198364acadcf08458f8 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = mistralai
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q4_K:  833 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 259
llm_load_vocab: token to piece cache size = 0.1637 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8x7B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 24.62 GiB (4.53 BPW) 
llm_load_print_meta: general.name     = mistralai
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: max token length = 48
time=2024-07-11T13:22:44.549Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 ROCm devices:
  Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
  Device 1: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
  Device 2: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llm_load_tensors: ggml ctx size =    1.53 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size =  8608.53 MiB
llm_load_tensors:      ROCm1 buffer size =  8608.53 MiB
llm_load_tensors:      ROCm2 buffer size =  7928.49 MiB
llm_load_tensors:  ROCm_Host buffer size =    70.31 MiB
time=2024-07-11T13:23:03.566Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server not responding"
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
time=2024-07-11T13:23:04.460Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
llama_kv_cache_init:      ROCm0 KV buffer size =   352.00 MiB
llama_kv_cache_init:      ROCm1 KV buffer size =   352.00 MiB
llama_kv_cache_init:      ROCm2 KV buffer size =   320.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:  ROCm_Host  output buffer size =     0.55 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model:      ROCm0 compute buffer size =   640.01 MiB
llama_new_context_with_model:      ROCm1 compute buffer size =   640.01 MiB
llama_new_context_with_model:      ROCm2 compute buffer size =   640.02 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =    72.02 MiB
llama_new_context_with_model: graph nodes  = 1510
llama_new_context_with_model: graph splits = 4
time=2024-07-11T13:23:06.864Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
[GIN] 2024/07/11 - 13:23:07 | 500 | 22.834580361s |       127.0.0.1 | POST     "/api/chat"
time=2024-07-11T13:23:07.115Z level=ERROR source=sched.go:480 msg="error loading llama server" error="llama runner process has terminated: signal: segmentation fault (core dumped) "
time=2024-07-11T13:23:12.116Z level=WARN source=sched.go:671 msg="gpu VRAM usage didn't recover within timeout" seconds=5.001085328 model=/root/.ollama/models/blobs/sha256-3a17f7cde150070bbc815645693fb93c311cc42e7deaf198364acadcf08458f8
time=2024-07-11T13:23:12.366Z level=WARN source=sched.go:671 msg="gpu VRAM usage didn't recover within timeout" seconds=5.251122065 model=/root/.ollama/models/blobs/sha256-3a17f7cde150070bbc815645693fb93c311cc42e7deaf198364acadcf08458f8
time=2024-07-11T13:23:12.616Z level=WARN source=sched.go:671 msg="gpu VRAM usage didn't recover within timeout" seconds=5.500799906 model=/root/.ollama/models/blobs/sha256-3a17f7cde150070bbc815645693fb93c311cc42e7deaf198364acadcf08458f8

I am running Ollama in Docker with this command:

docker run -d --restart unless-stopped --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11442:11434 --name dvz3 ollama/ollama:0.2.1-rocm
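A quick isolation test that may help triage: relaunch the container pinned to a single GPU so the model cannot be tensor-split across cards. This is a sketch assuming the ROCR_VISIBLE_DEVICES selector described in Ollama's AMD GPU docs (device index 0 is illustrative), plus the optional OLLAMA_DEBUG flag for more verbose runner logs:

# remove the existing container, then relaunch with only one ROCm device visible
docker rm -f dvz3
docker run -d --restart unless-stopped --device /dev/kfd --device /dev/dri \
  -e ROCR_VISIBLE_DEVICES=0 \
  -e OLLAMA_DEBUG=1 \
  -v ollama:/root/.ollama -p 11442:11434 --name dvz3 ollama/ollama:0.2.1-rocm

With only one 24 GiB card visible, the q4_K_M model should partially offload to CPU instead of splitting across GPUs, which helps show whether the segfault is specific to the multi-GPU path.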

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.2.1-rocm

GiteaMirror added the bug, amd, and needs more info labels 2026-05-03 21:39:32 -05:00
Author
Owner

@darwinvelez58 commented on GitHub (Jul 11, 2024):

For mixtral 8x7b, quants at q3 and below work perfectly, but for q4 and above the error is always Error: llama runner process has terminated: signal: segmentation fault (core dumped). This is odd, since I have 72 GB of memory across the GPUs:

MIXTRAL Q3 LOGS

[GIN] 2024/07/11 - 13:44:03 | 200 |    4.663769ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-11T13:44:03.964Z level=INFO source=sched.go:738 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-61ac039c672160e7e289d8e0559d72f5f54e2c53b0e65ea57f012ea130d200ed gpu=0 parallel=4 available=25725169664 required="21.5 GiB"
time=2024-07-11T13:44:03.965Z level=INFO source=memory.go:309 msg="offload to rocm" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[24.0 GiB]" memory.required.full="21.5 GiB" memory.required.partial="21.5 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[21.5 GiB]" memory.weights.total="19.7 GiB" memory.weights.repeating="19.6 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="580.0 MiB" memory.graph.partial="1.3 GiB"
time=2024-07-11T13:44:03.965Z level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama1419561683/runners/rocm_v60101/ollama_llama_server --model /root/.ollama/models/blobs/sha256-61ac039c672160e7e289d8e0559d72f5f54e2c53b0e65ea57f012ea130d200ed --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 44087"
time=2024-07-11T13:44:03.965Z level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-11T13:44:03.965Z level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-11T13:44:03.965Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="126411530634048" timestamp=1720705443
INFO [main] system info | n_threads=16 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="126411530634048" timestamp=1720705443 total_threads=32
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="44087" tid="126411530634048" timestamp=1720705443
llama_model_loader: loaded meta data with 26 key-value pairs and 995 tensors from /root/.ollama/models/blobs/sha256-61ac039c672160e7e289d8e0559d72f5f54e2c53b0e65ea57f012ea130d200ed (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = mistralai
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 11
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q3_K:  833 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 259
llm_load_vocab: token to piece cache size = 0.1637 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8x7B
llm_load_print_meta: model ftype      = Q3_K - Small
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 18.90 GiB (3.48 BPW) 
llm_load_print_meta: general.name     = mistralai
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: max token length = 48
time=2024-07-11T13:44:04.217Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llm_load_tensors: ggml ctx size =    0.77 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size = 19297.55 MiB
llm_load_tensors:  ROCm_Host buffer size =    53.71 MiB
time=2024-07-11T13:44:11.943Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server not responding"
time=2024-07-11T13:44:12.233Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:  ROCm_Host  output buffer size =     0.55 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   560.00 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =    24.01 MiB
llama_new_context_with_model: graph nodes  = 1510
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="126411530634048" timestamp=1720705453
time=2024-07-11T13:44:13.876Z level=INFO source=server.go:609 msg="llama runner started in 9.91 seconds"
[GIN] 2024/07/11 - 13:44:13 | 200 |  9.923181499s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/11 - 13:44:24 | 200 |      19.226µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/11 - 13:44:24 | 200 |      648.49µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/07/11 - 13:44:31 | 200 |      16.952µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/11 - 13:44:31 | 200 |    4.878514ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/07/11 - 13:44:31 | 200 |    5.133763ms |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/11 - 13:45:37 | 200 |  908.416664ms |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/11 - 13:45:57 | 200 |  1.096975199s |       127.0.0.1 | POST     "/api/chat"

Q4 LOGS:

time=2024-07-11T13:46:42.969Z level=INFO source=sched.go:532 msg="updated VRAM based on existing loaded models" gpu=0 library=rocm total="24.0 GiB" available="2.5 GiB"
time=2024-07-11T13:46:42.969Z level=INFO source=sched.go:532 msg="updated VRAM based on existing loaded models" gpu=1 library=rocm total="24.0 GiB" available="24.0 GiB"
time=2024-07-11T13:46:42.969Z level=INFO source=sched.go:532 msg="updated VRAM based on existing loaded models" gpu=2 library=rocm total="24.0 GiB" available="24.0 GiB"
time=2024-07-11T13:46:42.975Z level=INFO source=sched.go:754 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-728969cf2d06e54ae8e8bec04eccb52c3db919587800c563917e2729b7172215 library=rocm parallel=4 required="30.6 GiB"
time=2024-07-11T13:46:42.977Z level=INFO source=memory.go:309 msg="offload to rocm" layers.requested=-1 layers.model=33 layers.offload=33 layers.split=17,16,0 memory.available="[24.0 GiB 24.0 GiB 2.5 GiB]" memory.required.full="30.6 GiB" memory.required.partial="30.6 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[15.4 GiB 15.3 GiB 0 B]" memory.weights.total="25.5 GiB" memory.weights.repeating="25.4 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="1.3 GiB" memory.graph.partial="1.3 GiB"
time=2024-07-11T13:46:42.977Z level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama1419561683/runners/rocm_v60101/ollama_llama_server --model /root/.ollama/models/blobs/sha256-728969cf2d06e54ae8e8bec04eccb52c3db919587800c563917e2729b7172215 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --tensor-split 17,16,0 --tensor-split 17,16,0 --port 42121"
time=2024-07-11T13:46:42.977Z level=INFO source=sched.go:474 msg="loaded runners" count=2
time=2024-07-11T13:46:42.977Z level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-11T13:46:42.977Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="138680769692480" timestamp=1720705603
INFO [main] system info | n_threads=16 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="138680769692480" timestamp=1720705603 total_threads=32
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="42121" tid="138680769692480" timestamp=1720705603
llama_model_loader: loaded meta data with 26 key-value pairs and 995 tensors from /root/.ollama/models/blobs/sha256-728969cf2d06e54ae8e8bec04eccb52c3db919587800c563917e2729b7172215 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = mistralai
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 14
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q4_K:  833 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 259
llm_load_vocab: token to piece cache size = 0.1637 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8x7B
llm_load_print_meta: model ftype      = Q4_K - Small
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 24.62 GiB (4.53 BPW) 
llm_load_print_meta: general.name     = mistralai
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: max token length = 48
time=2024-07-11T13:46:43.229Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 ROCm devices:
  Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
  Device 1: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
  Device 2: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llm_load_tensors: ggml ctx size =    1.15 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size = 13304.09 MiB
llm_load_tensors:      ROCm1 buffer size = 11841.46 MiB
llm_load_tensors:  ROCm_Host buffer size =    70.31 MiB
time=2024-07-11T13:46:54.722Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server not responding"
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   544.00 MiB
llama_kv_cache_init:      ROCm1 KV buffer size =   480.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:  ROCm_Host  output buffer size =     0.55 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
time=2024-07-11T13:46:57.228Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model:      ROCm0 compute buffer size =   640.01 MiB
llama_new_context_with_model:      ROCm1 compute buffer size =   640.02 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =    72.02 MiB
llama_new_context_with_model: graph nodes  = 1510
llama_new_context_with_model: graph splits = 3
time=2024-07-11T13:46:57.541Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
time=2024-07-11T13:46:57.792Z level=ERROR source=sched.go:480 msg="error loading llama server" error="llama runner process has terminated: signal: segmentation fault (core dumped) "
[GIN] 2024/07/11 - 13:46:57 | 500 | 14.831961081s |       127.0.0.1 | POST     "/api/chat"
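For reference, a rough breakdown pulled from the buffer sizes reported in the logs above (approximate; the "required" figures include the parallel=4 context overhead):

q3_K_S : 18.9 GiB weights + 1.0 GiB KV + compute buffers ≈ 21.5 GiB required -> fits one 24 GiB card, no --tensor-split, loads fine
q4_K_M : 24.6 GiB weights + 1.0 GiB KV + graph buffers   ≈ 33.2 GiB required -> split 11,11,11 across 3 GPUs, segfault
q4_K_S : 24.6 GiB weights + 1.0 GiB KV + graph buffers   ≈ 30.6 GiB required -> split 17,16,0 across 2 GPUs, segfault

Every configuration fits comfortably inside the combined 72 GiB, so the crash appears to correlate with the multi-GPU tensor-split / pipeline-parallel load path rather than with total VRAM.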

@darwinvelez58 commented on GitHub (Jul 11, 2024):

Is the error because it tries to fit the model on one GPU only?


@konrad0101 commented on GitHub (Jul 11, 2024):

I get a similar out-of-memory error now too on AMD GPUs (2 x 16GB 7800 XTs) on ollama 0.2.1. The loaded models use a mix of the dual GPUs and CPU RAM (64GB of RAM is available and only a small fraction is used). The system is Ubuntu 22.04.

```
Jul 11 10:32:11 ubuntu ollama[1669]: time=2024-07-11T10:32:11.579-04:00 level=INFO source=server.go:609 msg="llama runner started in 6.37 seconds"
Jul 11 10:32:41 ubuntu ollama[1669]: CUDA error: out of memory
Jul 11 10:32:41 ubuntu ollama[1669]:   current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:290
Jul 11 10:32:41 ubuntu ollama[1669]:   ggml_cuda_device_malloc(&ptr, look_ahead_size, device)
Jul 11 10:32:41 ubuntu ollama[1669]: GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:100: !"CUDA error"
Jul 11 10:32:41 ubuntu ollama[1669]: Could not attach to process.  If your uid matches the uid of the target
Jul 11 10:32:41 ubuntu ollama[1669]: process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
Jul 11 10:32:41 ubuntu ollama[1669]: again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
Jul 11 10:32:41 ubuntu ollama[1669]: ptrace: Inappropriate ioctl for device.
Jul 11 10:32:41 ubuntu ollama[1669]: No stack.
Jul 11 10:32:41 ubuntu ollama[1669]: The program is not being run.
Jul 11 10:32:41 ubuntu ollama[1669]: [GIN] 2024/07/11 - 10:32:41 | 500 | 41.129093901s |       127.0.0.1 | POST     "/api/chat"
Jul 11 10:32:41 ubuntu ollama[1669]: [GIN] 2024/07/11 - 10:32:41 | 500 | 41.129738041s |       127.0.0.1 | POST     "/api/chat"
Jul 11 10:32:41 ubuntu ollama[1669]: time=2024-07-11T10:32:41.268-04:00 level=ERROR source=prompt.go:82 msg="failed to encode prompt" err="health resp: Get \"http://127.0.0.1:42279/health\": dial tcp 127.0.0.1:42279: connect: connection refused"
Jul 11 10:32:41 ubuntu ollama[1669]: time=2024-07-11T10:32:41.268-04:00 level=WARN source=server.go:482 msg="llama runner process no longer running" sys=134 string="signal: aborted (core dumped)"
Jul 11 10:32:41 ubuntu ollama[1669]: [GIN] 2024/07/11 - 10:32:41 | 400 | 41.129738291s |       127.0.0.1 | POST     "/api/chat"
Jul 11 10:32:41 ubuntu ollama[1669]: [GIN] 2024/07/11 - 10:32:41 | 500 |  41.12937174s |       127.0.0.1 | POST     "/api/chat"
Jul 11 10:37:46 ubuntu ollama[1669]: time=2024-07-11T10:37:46.270-04:00 level=WARN source=sched.go:671 msg="gpu VRAM usage didn't recover within timeout" seconds=5.000533915 model=/usr/share/ollama/.ollama/models/blobs/sha256-8a9611e7bca168be635d39d21927d2b8e7e8ea0b5d0998b7d5980daf1f8d4205
```

@darwinvelez58 commented on GitHub (Jul 13, 2024):

Any update on this?


@dhiltgen commented on GitHub (Jul 23, 2024):

I haven't been able to repro, but I don't have a 3x Radeon setup - my dual Radeon test box seems to behave correctly.

This might be a ROCm regression, or a regression in llama.cpp between tags b3051 and b3171.


@OliverStutz commented on GitHub (Jul 23, 2024):

@dhiltgen we can get you access to a 3 card setup if that helps, let me know.


@rasodu commented on GitHub (Jul 27, 2024):

After formatting my system, I'm experiencing issues with my dual AMD MI100 GPUs. When I run my model on both GPUs, they produce garbled output. However, if the entire model loads onto one GPU alone, it runs correctly. The problem only occurs when the model is split across both GPUs.

My versions are as follows:
Ubuntu: 22.04.4 LTS
ROCm installed via: sudo amdgpu-install -y --accept-eula --usecase=dkms
Ollama Docker image: ollama/ollama:0.3.0-rocm

Let me know if this isn't related to this issue and I should open a different ticket.

```
server03$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.4 LTS
Release:        22.04
Codename:       jammy

server03$ apt list -a amdgpu-dkms
Listing... Done
amdgpu-dkms/jammy,jammy,now 1:6.7.0.60102-1781449.22.04 all [installed]
```


I'd like to add some context: I was previously running Ollama version 0.1.48 on my dual MI100 setup without any issues. After formatting my system, I tried reverting back to the same version of Ollama (0.1.48), but unfortunately I'm still experiencing the problem.

@rasodu commented on GitHub (Jul 27, 2024):

After some trial and error, I successfully resolved my issue by installing a previous version of the DKMS package from the AMD repository. If others want to try this solution, here's what worked for me...

  1. Check OS version:
    server03$ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 22.04.4 LTS
    Release:        22.04
    Codename:       jammy
    
  2. Check kernel version
    Server03$ uname -r
    6.5.0-45-generic
    
  3. Uninstall the current installation, depending on how you installed it:
    • AMD GPU installer (If you installed via AMD installer)
      sudo amdgpu-install --uninstall
      
    • Package manager (If you installed via package manager)
      sudo apt-get remove rocm amdgpu-dkms
      
  4. Edit the ROCm repository files:
    • /etc/apt/sources.list.d/amdgpu.list
      deb https://repo.radeon.com/amdgpu/6.1/ubuntu jammy main
      #deb-src https://repo.radeon.com/amdgpu/6.1/ubuntu jammy main
      
    • /etc/apt/sources.list.d/rocm.list (skip if this file doesn't exist)
      deb [arch=amd64] https://repo.radeon.com/rocm/apt/6.1 jammy main
      
  5. Install DKMS: sudo apt install amdgpu-dkms
  6. Optional: install ROCm (not needed if using the Docker image): sudo apt install rocm
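
For convenience, here is the same procedure as a single shell sketch (an editor's consolidation of the steps above, assuming Ubuntu 22.04 "jammy" and the ROCm/amdgpu 6.1 repositories; review and adjust the repo lines for your release before running):

```
# 1-2. Confirm OS and kernel versions
lsb_release -a
uname -r

# 3. Remove the current driver stack (pick the variant matching how it was installed)
sudo amdgpu-install --uninstall          # AMD installer
# sudo apt-get remove rocm amdgpu-dkms   # package manager

# 4. Point APT at the 6.1 repositories
echo 'deb https://repo.radeon.com/amdgpu/6.1/ubuntu jammy main' | sudo tee /etc/apt/sources.list.d/amdgpu.list
echo 'deb [arch=amd64] https://repo.radeon.com/rocm/apt/6.1 jammy main' | sudo tee /etc/apt/sources.list.d/rocm.list  # only if this file exists

# 5. Install the DKMS module
sudo apt update
sudo apt install amdgpu-dkms

# 6. Optional: install ROCm (not needed when using the Docker image)
# sudo apt install rocm
```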

@Speedway1 commented on GitHub (Aug 17, 2024):

@rasodu

Thank you for the above. However, for me (same version of Ubuntu and kernel as you), when I made those changes I got this:
root@TH-AI2:~# apt install amdgpu-dkms
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package amdgpu-dkms

I am continuing to investigate but AMD may have changed their repo.


@Speedway1 commented on GitHub (Aug 17, 2024):

OK, in addition to the steps from @rasodu above, you need to:
apt update && apt upgrade -y
amdgpu-install --usecase=graphics,opencl --no-32 --no-dkms

Not sure about the last two flags just yet.


@Speedway1 commented on GitHub (Aug 18, 2024):

OK, I needed to completely reinstall the ROCm drivers, but resetting to version 6.1 per @rasodu's suggestion worked. I am now properly running inference across multiple Radeon 7900 XTX GPUs.


@Speedway1 commented on GitHub (Aug 18, 2024):

Given that we have a solution (resetting to an earlier version of the AMD drivers), this issue can probably be marked as resolved?


@OliverStutz commented on GitHub (Aug 19, 2024):

@Speedway1 if you think putting the drivers back to an earlier level is a solution, do that, but for real operations that is not a solution; at best it is a workaround.


@Speedway1 commented on GitHub (Aug 19, 2024):

> @Speedway1 if you think putting the drivers back to an earlier level is a solution, do that, but for real operations that is not a solution; at best it is a workaround.

Agreed. Very annoying and only a workaround. However, this seems to be a bug with AMD; they broke something. And unlike the open-source project here, it's nearly impossible to report it and get it fixed, but that won't stop us from trying!


@OliverStutz commented on GitHub (Aug 19, 2024):

If we had a proper trace of why this happens, we could open a bug with AMD. I think this works with normal tensor operations in Python, though, so I'm not convinced yet that this is a driver issue.


@Speedway1 commented on GitHub (Aug 19, 2024):

> If we had a proper trace of why this happens, we could open a bug with AMD. I think this works with normal tensor operations in Python, though, so I'm not convinced yet that this is a driver issue.

OK. Well, we will try to replicate it here, but we needed that primary box back in service, so unfortunately we can't reset it. But agreed on all fronts.


@rasodu commented on GitHub (Aug 20, 2024):

> If we had a proper trace of why this happens, we could open a bug with AMD. I think this works with normal tensor operations in Python, though, so I'm not convinced yet that this is a driver issue.

After performing a system upgrade and updating ROCm/DKMS to version 6.2, I'm now seeing the correct output. This suggests to me that the issue may be related to compatibility between specific kernel and ROCm versions, and that certain combinations work correctly together.

```
$uname -r
6.8.0-40-generic

$lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.4 LTS
Release:        22.04
Codename:       jammy
```

@OliverStutz commented on GitHub (Aug 20, 2024):

@rasodu good input, I will test downgrading the kernel.


@dhiltgen commented on GitHub (Oct 24, 2024):

Another thing to try is disabling P2P copy. For a while we had a flag set in the llama.cpp build to disable P2P copy, but that workaround seemed to cause more problems for people with small amounts of system memory, so we've recently reverted it. For multi-GPU setups, the P2P copy might be the source of the gibberish. Direct GPU <--> GPU copy only works under certain conditions: both GPUs

  • should be under the same PCI root port
  • need Large BAR enabled
  • need the IOMMU disabled

I believe setting NCCL_P2P_DISABLE=1 will disable this in an underlying library within ROCm.
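
A rough sketch of how one might try the NCCL_P2P_DISABLE=1 workaround and inspect the preconditions above on Linux (assumes a systemd-managed Ollama install or the ROCm Docker image; whether the variable is actually honored depends on the underlying ROCm libraries):

```
# Systemd-managed install: add the variable to the service environment
sudo systemctl edit ollama.service
#   [Service]
#   Environment="NCCL_P2P_DISABLE=1"
sudo systemctl restart ollama

# Docker (ROCm image): pass the variable at container start
docker run -d --device /dev/kfd --device /dev/dri \
  -e NCCL_P2P_DISABLE=1 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm

# Rough checks for the P2P preconditions
lspci -tv                                # are both GPUs under the same PCI root port?
sudo lspci -vvv | grep -i resizable      # Resizable/Large BAR support on the GPUs
sudo dmesg | grep -iE 'iommu|amd-vi'     # IOMMU state
```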


@dhiltgen commented on GitHub (Oct 26, 2024):

I found a system that reproduces the failure, and I've confirmed that putting -DGGML_CUDA_NO_PEER_COPY=1 back during the build resolves it, but this will break other systems with low system memory compared to VRAM. In my setup, setting NCCL_P2P_DISABLE=1 didn't work. If we can't find another workaround, I'll look into patching llama.cpp so that we can change the behavior at runtime instead of having to choose at compile time.
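
For anyone who wants to experiment locally, a rough sketch of a llama.cpp test build with that define (an assumption about the upstream llama.cpp build, not the Ollama build procedure; the GGML_HIPBLAS option name and the llama-cli binary path depend on the llama.cpp revision):

```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# GGML_CUDA_NO_PEER_COPY is the define mentioned above; GGML_HIPBLAS selects
# the ROCm/HIP backend on mid-2024 revisions (the option name may differ on newer trees).
cmake -B build -DGGML_HIPBLAS=ON -DGGML_CUDA_NO_PEER_COPY=1
cmake --build build --config Release -j
# Quick multi-GPU smoke test to see whether the gibberish/segfault goes away
./build/bin/llama-cli -m /path/to/mixtral-8x7b-q4_K_M.gguf -ngl 99 -p "Hello"
```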


@dhiltgen commented on GitHub (Nov 1, 2024):

Clarification to my point above: the system where I see this behavior only reproduces it on Windows. Linux works correctly without requiring the P2P workaround setting.


@joe2gaan commented on GitHub (Nov 27, 2024):

https://github.com/ollama/ollama/issues/7575


@dhiltgen commented on GitHub (Feb 25, 2025):

Is this still a problem with the latest versions? I'm trying to determine if #7378 is still useful.


Reference: github-starred/ollama#65549