[GH-ISSUE #4820] Issue with Llama3 Model on Multiple AMD GPU #65083

Closed
opened 2026-05-03 19:43:03 -05:00 by GiteaMirror · 7 comments

Originally created by @rasodu on GitHub (Jun 4, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4820

What is the issue?

I'm experiencing an issue running the llama3 model (specifically, the 70b-instruct-q6 tag) on multiple AMD GPUs. It works correctly on ollama/ollama:0.1.34-rocm, but produces junk output on ollama/ollama:0.1.35-rocm and ollama/ollama:0.1.41-rocm.

Interestingly, I've noticed that the junk output only occurs when the entire model fits within the GPU memory. If part of the model is stored in CPU memory, the output is generated correctly.
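
Since output is only corrupted when every layer is resident on the GPUs, one stopgap on the affected versions (not part of the original report; a sketch using Ollama's documented `num_gpu` request option) is to cap the number of offloaded layers so part of the model stays in system RAM:

```
# Hedged workaround sketch: keep a few layers on the CPU by capping GPU
# offload. llama3 70b has 81 offloadable layers, so 70 is an illustrative
# value, not a tuned one; the full q6_K model tag is an assumption.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3:70b-instruct-q6_K",
  "prompt": "Hello, how are you?",
  "options": { "num_gpu": 70 }
}'
```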

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.1.41

GiteaMirror added the gpu, bug, amd labels 2026-05-03 19:43:04 -05:00

@ccbadd commented on GitHub (Jun 10, 2024):

I'm having the same problem using two W6800s under Ubuntu 22.04 and ROCm 6.1.2. It works fine under Windows with ROCm 5.7, I think.
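
For anyone comparing setups like this, the installed ROCm release can be read directly (a generic check, not from the comment; the version file path is standard for ROCm packages but can vary by distro):

```
# Print the installed ROCm version (path may differ per distribution)
cat /opt/rocm/.info/version
# Or list installed ROCm packages on Debian/Ubuntu
dpkg -l | grep -i rocm | head
```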


@Speedway1 commented on GitHub (Jun 22, 2024):

Same here. It works perfectly when only one GPU is needed, but as soon as it needs more than one GPU, it fails badly. However, llama.cpp runs across two GPUs without blinking.

Here is the syslog output from loading Llama3:70b.

```
Jun 23 00:26:09 TH-AI2 ollama[414970]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="42637" tid="128082296689472" timestamp=1719098769
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: loaded meta data with 22 key-value pairs and 723 tensors from /home/ollama/.ollama/models/blobs/sha256-0bd51f8f0c975ce910ed067dcb962a9af05b77bafcdc595ef02178387f10e51d (version GGUF V3 (latest))
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv   0:                       general.architecture str              = llama
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-70B-Instruct
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv   2:                          llama.block_count u32              = 80
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv   4:                     llama.embedding_length u32              = 8192
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 28672
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 64
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv  10:                          general.file_type u32              = 2
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = llama-bpe
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128009
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - kv  21:               general.quantization_version u32              = 2
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - type  f32:  161 tensors
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - type q4_0:  561 tensors
Jun 23 00:26:09 TH-AI2 ollama[414874]: llama_model_loader: - type q6_K:    1 tensors
Jun 23 00:26:10 TH-AI2 ollama[414874]: time=2024-06-23T00:26:10.034+01:00 level=INFO source=server.go:585 msg="waiting for server to become available" status="llm server loading model"
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_vocab: special tokens cache size = 256
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_vocab: token to piece cache size = 0.8000 MB
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: format           = GGUF V3 (latest)
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: arch             = llama
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: vocab type       = BPE
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_vocab          = 128256
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_merges         = 280147
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_ctx_train      = 8192
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_embd           = 8192
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_head           = 64
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_head_kv        = 8
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_layer          = 80
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_rot            = 128
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_embd_head_k    = 128
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_embd_head_v    = 128
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_gqa            = 8
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_embd_k_gqa     = 1024
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_embd_v_gqa     = 1024
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_ff             = 28672
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_expert         = 0
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_expert_used    = 0
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: causal attn      = 1
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: pooling type     = 0
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: rope type        = 0
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: rope scaling     = linear
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: freq_base_train  = 500000.0
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: freq_scale_train = 1
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: n_ctx_orig_yarn  = 8192
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: rope_finetuned   = unknown
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: ssm_d_conv       = 0
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: ssm_d_inner      = 0
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: ssm_d_state      = 0
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: ssm_dt_rank      = 0
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: model type       = 70B
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: model ftype      = Q4_0
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: model params     = 70.55 B
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: model size       = 37.22 GiB (4.53 BPW)
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: general.name     = Meta-Llama-3-70B-Instruct
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: LF token         = 128 'Ä'
Jun 23 00:26:10 TH-AI2 ollama[414874]: llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
Jun 23 00:26:11 TH-AI2 ollama[414874]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
Jun 23 00:26:11 TH-AI2 ollama[414874]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
Jun 23 00:26:11 TH-AI2 ollama[414874]: ggml_cuda_init: found 2 ROCm devices:
Jun 23 00:26:11 TH-AI2 ollama[414874]:   Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Jun 23 00:26:11 TH-AI2 ollama[414874]:   Device 1: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Jun 23 00:26:11 TH-AI2 ollama[414874]: llm_load_tensors: ggml ctx size =    1.10 MiB
Jun 23 00:26:15 TH-AI2 ollama[414874]: time=2024-06-23T00:26:15.252+01:00 level=INFO source=server.go:585 msg="waiting for server to become available" status="llm server not responding"
Jun 23 00:26:15 TH-AI2 ollama[414874]: llm_load_tensors: offloading 80 repeating layers to GPU
Jun 23 00:26:15 TH-AI2 ollama[414874]: llm_load_tensors: offloading non-repeating layers to GPU
Jun 23 00:26:15 TH-AI2 ollama[414874]: llm_load_tensors: offloaded 81/81 layers to GPU
Jun 23 00:26:15 TH-AI2 ollama[414874]: llm_load_tensors:      ROCm0 buffer size = 18821.56 MiB
Jun 23 00:26:15 TH-AI2 ollama[414874]: llm_load_tensors:      ROCm1 buffer size = 18725.42 MiB
Jun 23 00:26:15 TH-AI2 ollama[414874]: llm_load_tensors:        CPU buffer size =   563.62 MiB
Jun 23 00:26:15 TH-AI2 ollama[414874]: time=2024-06-23T00:26:15.955+01:00 level=INFO source=server.go:585 msg="waiting for server to become available" status="llm server loading model"
Jun 23 00:26:24 TH-AI2 ollama[414874]: time=2024-06-23T00:26:24.956+01:00 level=INFO source=server.go:585 msg="waiting for server to become available" status="llm server not responding"
Jun 23 00:26:25 TH-AI2 ollama[414874]: time=2024-06-23T00:26:25.210+01:00 level=INFO source=server.go:585 msg="waiting for server to become available" status="llm server loading model"
Jun 23 00:26:25 TH-AI2 ollama[414874]: llama_new_context_with_model: n_ctx      = 8192
Jun 23 00:26:25 TH-AI2 ollama[414874]: llama_new_context_with_model: n_batch    = 512
Jun 23 00:26:25 TH-AI2 ollama[414874]: llama_new_context_with_model: n_ubatch   = 512
Jun 23 00:26:25 TH-AI2 ollama[414874]: llama_new_context_with_model: flash_attn = 0
Jun 23 00:26:25 TH-AI2 ollama[414874]: llama_new_context_with_model: freq_base  = 500000.0
Jun 23 00:26:25 TH-AI2 ollama[414874]: llama_new_context_with_model: freq_scale = 1
Jun 23 00:26:27 TH-AI2 ollama[414874]: llama_kv_cache_init:      ROCm0 KV buffer size =  1312.00 MiB
Jun 23 00:26:27 TH-AI2 ollama[414874]: llama_kv_cache_init:      ROCm1 KV buffer size =  1248.00 MiB
Jun 23 00:26:27 TH-AI2 ollama[414874]: llama_new_context_with_model: KV self size  = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
Jun 23 00:26:27 TH-AI2 ollama[414874]: llama_new_context_with_model:  ROCm_Host  output buffer size =     2.08 MiB
Jun 23 00:26:27 TH-AI2 ollama[414874]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
Jun 23 00:26:28 TH-AI2 ollama[414874]: llama_new_context_with_model:      ROCm0 compute buffer size =  1216.01 MiB
Jun 23 00:26:28 TH-AI2 ollama[414874]: llama_new_context_with_model:      ROCm1 compute buffer size =  1216.02 MiB
Jun 23 00:26:28 TH-AI2 ollama[414874]: llama_new_context_with_model:  ROCm_Host compute buffer size =    80.02 MiB
Jun 23 00:26:28 TH-AI2 ollama[414874]: llama_new_context_with_model: graph nodes  = 2566
Jun 23 00:26:28 TH-AI2 ollama[414874]: llama_new_context_with_model: graph splits = 3
Jun 23 00:26:28 TH-AI2 kernel: [349399.669937] amd_iommu_report_page_fault: 798 callbacks suppressed
Jun 23 00:26:28 TH-AI2 kernel: [349399.669941] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c0000100 flags=0x0020]
Jun 23 00:26:28 TH-AI2 kernel: [349399.669953] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c0001000 flags=0x0020]
Jun 23 00:26:28 TH-AI2 kernel: [349399.669961] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c0000200 flags=0x0020]
Jun 23 00:26:28 TH-AI2 kernel: [349399.669968] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c0002a00 flags=0x0020]
Jun 23 00:26:28 TH-AI2 kernel: [349399.669975] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c0001e00 flags=0x0020]
Jun 23 00:26:28 TH-AI2 kernel: [349399.669982] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c0003100 flags=0x0020]
Jun 23 00:26:28 TH-AI2 kernel: [349399.669989] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c0002800 flags=0x0020]
Jun 23 00:26:28 TH-AI2 kernel: [349399.669996] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c0004100 flags=0x0020]
Jun 23 00:26:28 TH-AI2 kernel: [349399.670003] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c0005300 flags=0x0020]
Jun 23 00:26:28 TH-AI2 kernel: [349399.670010] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c0003700 flags=0x0020]
Jun 23 00:26:29 TH-AI2 ollama[414970]: INFO [main] model loaded | tid="128082296689472" timestamp=1719098789
Jun 23 00:26:30 TH-AI2 ollama[414874]: time=2024-06-23T00:26:30.229+01:00 level=INFO source=server.go:590 msg="llama runner started in 20.45 seconds"
Jun 23 00:26:30 TH-AI2 ollama[414874]: [GIN] 2024/06/23 - 00:26:30 | 200 | 22.214323687s |   192.168.0.140 | POST     "/api/chat"
Jun 23 00:26:57 TH-AI2 kernel: [349428.322612] amd_iommu_report_page_fault: 34 callbacks suppressed
Jun 23 00:26:57 TH-AI2 kernel: [349428.322616] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c1000000 flags=0x0020]
Jun 23 00:26:57 TH-AI2 kernel: [349428.322630] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c1001000 flags=0x0020]
Jun 23 00:26:57 TH-AI2 kernel: [349428.322639] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c1000800 flags=0x0020]
Jun 23 00:26:57 TH-AI2 kernel: [349428.322647] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c1002900 flags=0x0020]
Jun 23 00:26:57 TH-AI2 kernel: [349428.322655] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c1003000 flags=0x0020]
Jun 23 00:26:57 TH-AI2 kernel: [349428.322662] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c1000e00 flags=0x0020]
Jun 23 00:26:57 TH-AI2 kernel: [349428.322670] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c1002300 flags=0x0020]
Jun 23 00:26:57 TH-AI2 kernel: [349428.322678] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c1003b00 flags=0x0020]
Jun 23 00:26:57 TH-AI2 kernel: [349428.322686] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c1004000 flags=0x0020]
Jun 23 00:26:57 TH-AI2 kernel: [349428.322693] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe0c1005200 flags=0x0020]
Jun 23 00:26:59 TH-AI2 ollama[414874]: [GIN] 2024/06/23 - 00:26:59 | 200 |  2.617670909s |   192.168.0.140 | POST     "/api/chat"
Jun 23 00:27:15 TH-AI2 kernel: [349446.199584] [UFW BLOCK] IN=eno1 OUT= MAC=01:00:5e:00:00:01:10:27:f5:e4:67:28:08:00 SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=0 DF PROTO=2
```
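
One way to confirm that the cross-GPU split is the trigger (not tried in the thread; a sketch using ROCm's standard device-visibility variable, which Ollama honors) is to pin the server to a single GPU and retest:

```
# Hypothetical isolation test: expose only device 0 to the server.
ROCR_VISIBLE_DEVICES=0 ollama serve
# In another shell: if output is clean here but garbled when both GPUs
# are visible, the multi-GPU path is at fault.
ollama run llama3:70b "Hello, how are you?"
```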


@Speedway1 commented on GitHub (Jun 23, 2024):

After testing and comparing with llama.cpp, which runs the same-sized models without any problems, it seems the issue here is the page faulting:
```
Jun 23 01:02:54 TH-AI2 kernel: [351586.158012] amd_iommu_report_page_fault: 486 callbacks suppressed
Jun 23 01:02:54 TH-AI2 kernel: [351586.158016] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe581e00000 flags=0x0020]
Jun 23 01:02:54 TH-AI2 kernel: [351586.158029] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe581e01000 flags=0x0020]
Jun 23 01:02:54 TH-AI2 kernel: [351586.158038] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe581e00c00 flags=0x0020]
Jun 23 01:02:54 TH-AI2 kernel: [351586.158046] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe581e02900 flags=0x0020]
Jun 23 01:02:54 TH-AI2 kernel: [351586.158054] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe581e01300 flags=0x0020]
Jun 23 01:02:54 TH-AI2 kernel: [351586.158062] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe581e03100 flags=0x0020]
Jun 23 01:02:54 TH-AI2 kernel: [351586.158070] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe581e02100 flags=0x0020]
Jun 23 01:02:54 TH-AI2 kernel: [351586.158078] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe581e04900 flags=0x0020]
Jun 23 01:02:54 TH-AI2 kernel: [351586.158085] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe581e05500 flags=0x0020]
Jun 23 01:02:54 TH-AI2 kernel: [351586.158093] amdgpu 0000:03:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe581e03b00 flags=0x0020]
```
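
The repeated `IO_PAGE_FAULT` events implicate the IOMMU remapping layer. A commonly reported mitigation for amdgpu peer-to-peer traffic, offered here as an untested assumption rather than a confirmed fix from this thread, is to switch the IOMMU to passthrough via a kernel parameter:

```
# Sketch for Ubuntu/GRUB (adjust for your bootloader): add iommu=pt to
# the kernel command line, then rebuild the config and reboot.
# amd_iommu=off is the heavier-handed alternative if passthrough alone
# does not stop the faults.
sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/&iommu=pt /' /etc/default/grub
sudo update-grub
sudo reboot
```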


@rasodu commented on GitHub (Jun 23, 2024):

I've successfully tested this and can now run the entire model in GPU memory across multiple GPUs with ollama/ollama:0.1.46-rocm. As a result, I'm closing this ticket.
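
For anyone wanting to verify the fix, the standard ROCm container invocation from Ollama's AMD docs, pinned to the image tag named above, looks like this (the full q6_K model tag is an assumption based on the original report):

```
# Run the fixed ROCm build, passing through the AMD device nodes.
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:0.1.46-rocm
docker exec -it ollama ollama run llama3:70b-instruct-q6_K "Hello, how are you?"
```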


@rasodu commented on GitHub (Jun 23, 2024):

@Speedway1, do you continue to experience the problem?


@Speedway1 commented on GitHub (Jul 3, 2024):

```
ollama@TH-AI2:~$ ollama list
NAME                               ID            SIZE    MODIFIED
deepseek-coder-v2:latest           8577f96d693e  8.9 GB  10 days ago
codestral:latest                   fcc0019dcee9  12 GB   11 days ago
qwen2:latest                       e0d4e1163c58  4.4 GB  11 days ago
command-r:latest                   b8cdfff0263c  20 GB   11 days ago
mxbai-embed-large:latest           468836162de7  669 MB  11 days ago
llama3:70b                         786f3184aec0  39 GB   11 days ago
phi3:14b-medium-128k-instruct-f16  e89861c3ba63  27 GB   11 days ago
ollama@TH-AI2:~$ ollama run command-r:latest
>>> Hello how are you?
???????????????????????????????
>>> /bye
```


@rasodu commented on GitHub (Jul 28, 2024):

@Speedway1, please look at my comments on https://github.com/ollama/ollama/issues/5629. I was getting a similar error after I redid my setup and was able to resolve it.
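
Independent of that linked issue, the usual first checks after redoing a ROCm setup (generic diagnostics, not steps quoted from the comment) are whether both GPUs are visible to the runtime and whether the service user can reach the device nodes:

```
# Confirm both GPUs are enumerated by the ROCm runtime and driver.
rocminfo | grep -i 'marketing name'
rocm-smi --showproductname
# Verify permissions on the device nodes Ollama needs.
ls -l /dev/kfd /dev/dri
groups | tr ' ' '\n' | grep -E 'render|video'
```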

Reference: github-starred/ollama#65083