[GH-ISSUE #11265] llama_model_load: error loading model: unable to allocate ROCm0 buffer #33186

Closed
opened 2026-04-22 15:37:14 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @cli-ish on GitHub (Jul 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11265

What is the issue?

All models suddenly stopped working.

GPU: NITRO+ AMD Radeon™ RX 7900 XTX Vapor-X 24GB
ROCm version: 6.4.1-1

Relevant log output

[GIN] 2025/07/02 - 10:05:08 | 200 |   17.397072ms |       127.0.0.1 | POST     "/api/show"
time=2025-07-02T10:05:08.353+02:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/home/vincent/.ollama/models/blobs/sha256-1eee6953530837b2b17d61a4e6f71a5aa31c9714cfcf3cb141aa5c1972b5116b gpu=GPU-3b299e8b1d06b48f parallel=2 available=24569131008 required="6.5 GiB"
time=2025-07-02T10:05:08.354+02:00 level=INFO source=server.go:135 msg="system memory" total="60.5 GiB" free="55.3 GiB" free_swap="63.4 GiB"
time=2025-07-02T10:05:08.354+02:00 level=INFO source=server.go:168 msg=offload library=rocm layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[22.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.5 GiB" memory.required.partial="6.5 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.5 GiB]" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
llama_model_loader: loaded meta data with 77 key-value pairs and 292 tensors from /home/vincent/.ollama/models/blobs/sha256-1eee6953530837b2b17d61a4e6f71a5aa31c9714cfcf3cb141aa5c1972b5116b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Dolphin 3.0 Llama 3.1 8B
llama_model_loader: - kv   3:                       general.organization str              = Cognitivecomputations
llama_model_loader: - kv   4:                           general.basename str              = Dolphin-3.0-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                   general.base_model.count u32              = 1
llama_model_loader: - kv   8:                  general.base_model.0.name str              = Llama 3.1 8B
llama_model_loader: - kv   9:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  10:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv  11:                      general.dataset.count u32              = 13
llama_model_loader: - kv  12:                     general.dataset.0.name str              = Opc Sft Stage1
llama_model_loader: - kv  13:             general.dataset.0.organization str              = OpenCoder LLM
llama_model_loader: - kv  14:                 general.dataset.0.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  15:                     general.dataset.1.name str              = Opc Sft Stage2
llama_model_loader: - kv  16:             general.dataset.1.organization str              = OpenCoder LLM
llama_model_loader: - kv  17:                 general.dataset.1.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  18:                     general.dataset.2.name str              = Orca Agentinstruct 1M v1
llama_model_loader: - kv  19:                  general.dataset.2.version str              = v1
llama_model_loader: - kv  20:             general.dataset.2.organization str              = Microsoft
llama_model_loader: - kv  21:                 general.dataset.2.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  22:                     general.dataset.3.name str              = Orca Math Word Problems 200k
llama_model_loader: - kv  23:             general.dataset.3.organization str              = Microsoft
llama_model_loader: - kv  24:                 general.dataset.3.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  25:                     general.dataset.4.name str              = Hermes Function Calling v1
llama_model_loader: - kv  26:                  general.dataset.4.version str              = v1
llama_model_loader: - kv  27:             general.dataset.4.organization str              = NousResearch
llama_model_loader: - kv  28:                 general.dataset.4.repo_url str              = https://huggingface.co/NousResearch/h...
llama_model_loader: - kv  29:                     general.dataset.5.name str              = NuminaMath CoT
llama_model_loader: - kv  30:             general.dataset.5.organization str              = AI MO
llama_model_loader: - kv  31:                 general.dataset.5.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  32:                     general.dataset.6.name str              = NuminaMath TIR
llama_model_loader: - kv  33:             general.dataset.6.organization str              = AI MO
llama_model_loader: - kv  34:                 general.dataset.6.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  35:                     general.dataset.7.name str              = Tulu 3 Sft Mixture
llama_model_loader: - kv  36:             general.dataset.7.organization str              = Allenai
llama_model_loader: - kv  37:                 general.dataset.7.repo_url str              = https://huggingface.co/allenai/tulu-3...
llama_model_loader: - kv  38:                     general.dataset.8.name str              = Dolphin Coder
llama_model_loader: - kv  39:             general.dataset.8.organization str              = Cognitivecomputations
llama_model_loader: - kv  40:                 general.dataset.8.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  41:                     general.dataset.9.name str              = Smoltalk
llama_model_loader: - kv  42:             general.dataset.9.organization str              = HuggingFaceTB
llama_model_loader: - kv  43:                 general.dataset.9.repo_url str              = https://huggingface.co/HuggingFaceTB/...
llama_model_loader: - kv  44:                    general.dataset.10.name str              = Samantha Data
llama_model_loader: - kv  45:            general.dataset.10.organization str              = Cognitivecomputations
llama_model_loader: - kv  46:                general.dataset.10.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  47:                    general.dataset.11.name str              = CodeFeedback Filtered Instruction
llama_model_loader: - kv  48:            general.dataset.11.organization str              = M A P
llama_model_loader: - kv  49:                general.dataset.11.repo_url str              = https://huggingface.co/m-a-p/CodeFeed...
llama_model_loader: - kv  50:                    general.dataset.12.name str              = Code Feedback
llama_model_loader: - kv  51:            general.dataset.12.organization str              = M A P
llama_model_loader: - kv  52:                general.dataset.12.repo_url str              = https://huggingface.co/m-a-p/Code-Fee...
llama_model_loader: - kv  53:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  54:                          llama.block_count u32              = 32
llama_model_loader: - kv  55:                       llama.context_length u32              = 131072
llama_model_loader: - kv  56:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  57:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  58:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  59:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  60:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  61:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  62:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  63:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  64:                          general.file_type u32              = 15
llama_model_loader: - kv  65:                           llama.vocab_size u32              = 128258
llama_model_loader: - kv  66:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  67:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  68:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  69:                      tokenizer.ggml.tokens arr[str,128258]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  70:                  tokenizer.ggml.token_type arr[i32,128258]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  71:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  72:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  73:                tokenizer.ggml.eos_token_id u32              = 128256
llama_model_loader: - kv  74:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  75:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  76:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.58 GiB (4.89 BPW) 
load: special tokens cache size = 258
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.03 B
print_info: general.name     = Dolphin 3.0 Llama 3.1 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128258
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128256 '<|im_end|>'
print_info: EOT token        = 128256 '<|im_end|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end_of_text|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: EOG token        = 128256 '<|im_end|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-07-02T10:05:08.468+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/bin/ollama runner --model /home/vincent/.ollama/models/blobs/sha256-1eee6953530837b2b17d61a4e6f71a5aa31c9714cfcf3cb141aa5c1972b5116b --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --threads 8 --parallel 2 --port 41115"
time=2025-07-02T10:05:08.469+02:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-07-02T10:05:08.469+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-07-02T10:05:08.469+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-07-02T10:05:08.474+02:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon RX 7900 XTX, gfx1100 (0x1100), VMM: no, Wave Size: 32
load_backend: loaded ROCm backend from /usr/lib/ollama/rocm/libggml-hip.so
time=2025-07-02T10:05:08.926+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-07-02T10:05:08.927+02:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:41115"
llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 7900 XTX) - 24506 MiB free
llama_model_loader: loaded meta data with 77 key-value pairs and 292 tensors from /home/vincent/.ollama/models/blobs/sha256-1eee6953530837b2b17d61a4e6f71a5aa31c9714cfcf3cb141aa5c1972b5116b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Dolphin 3.0 Llama 3.1 8B
llama_model_loader: - kv   3:                       general.organization str              = Cognitivecomputations
llama_model_loader: - kv   4:                           general.basename str              = Dolphin-3.0-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                   general.base_model.count u32              = 1
llama_model_loader: - kv   8:                  general.base_model.0.name str              = Llama 3.1 8B
llama_model_loader: - kv   9:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  10:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv  11:                      general.dataset.count u32              = 13
llama_model_loader: - kv  12:                     general.dataset.0.name str              = Opc Sft Stage1
llama_model_loader: - kv  13:             general.dataset.0.organization str              = OpenCoder LLM
llama_model_loader: - kv  14:                 general.dataset.0.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  15:                     general.dataset.1.name str              = Opc Sft Stage2
llama_model_loader: - kv  16:             general.dataset.1.organization str              = OpenCoder LLM
llama_model_loader: - kv  17:                 general.dataset.1.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  18:                     general.dataset.2.name str              = Orca Agentinstruct 1M v1
llama_model_loader: - kv  19:                  general.dataset.2.version str              = v1
llama_model_loader: - kv  20:             general.dataset.2.organization str              = Microsoft
llama_model_loader: - kv  21:                 general.dataset.2.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  22:                     general.dataset.3.name str              = Orca Math Word Problems 200k
llama_model_loader: - kv  23:             general.dataset.3.organization str              = Microsoft
llama_model_loader: - kv  24:                 general.dataset.3.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  25:                     general.dataset.4.name str              = Hermes Function Calling v1
llama_model_loader: - kv  26:                  general.dataset.4.version str              = v1
llama_model_loader: - kv  27:             general.dataset.4.organization str              = NousResearch
llama_model_loader: - kv  28:                 general.dataset.4.repo_url str              = https://huggingface.co/NousResearch/h...
llama_model_loader: - kv  29:                     general.dataset.5.name str              = NuminaMath CoT
llama_model_loader: - kv  30:             general.dataset.5.organization str              = AI MO
llama_model_loader: - kv  31:                 general.dataset.5.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  32:                     general.dataset.6.name str              = NuminaMath TIR
llama_model_loader: - kv  33:             general.dataset.6.organization str              = AI MO
llama_model_loader: - kv  34:                 general.dataset.6.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  35:                     general.dataset.7.name str              = Tulu 3 Sft Mixture
llama_model_loader: - kv  36:             general.dataset.7.organization str              = Allenai
llama_model_loader: - kv  37:                 general.dataset.7.repo_url str              = https://huggingface.co/allenai/tulu-3...
llama_model_loader: - kv  38:                     general.dataset.8.name str              = Dolphin Coder
llama_model_loader: - kv  39:             general.dataset.8.organization str              = Cognitivecomputations
llama_model_loader: - kv  40:                 general.dataset.8.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  41:                     general.dataset.9.name str              = Smoltalk
llama_model_loader: - kv  42:             general.dataset.9.organization str              = HuggingFaceTB
llama_model_loader: - kv  43:                 general.dataset.9.repo_url str              = https://huggingface.co/HuggingFaceTB/...
llama_model_loader: - kv  44:                    general.dataset.10.name str              = Samantha Data
llama_model_loader: - kv  45:            general.dataset.10.organization str              = Cognitivecomputations
llama_model_loader: - kv  46:                general.dataset.10.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  47:                    general.dataset.11.name str              = CodeFeedback Filtered Instruction
llama_model_loader: - kv  48:            general.dataset.11.organization str              = M A P
llama_model_loader: - kv  49:                general.dataset.11.repo_url str              = https://huggingface.co/m-a-p/CodeFeed...
llama_model_loader: - kv  50:                    general.dataset.12.name str              = Code Feedback
llama_model_loader: - kv  51:            general.dataset.12.organization str              = M A P
llama_model_loader: - kv  52:                general.dataset.12.repo_url str              = https://huggingface.co/m-a-p/Code-Fee...
llama_model_loader: - kv  53:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  54:                          llama.block_count u32              = 32
llama_model_loader: - kv  55:                       llama.context_length u32              = 131072
llama_model_loader: - kv  56:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  57:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  58:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  59:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  60:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  61:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  62:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  63:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  64:                          general.file_type u32              = 15
llama_model_loader: - kv  65:                           llama.vocab_size u32              = 128258
llama_model_loader: - kv  66:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  67:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  68:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  69:                      tokenizer.ggml.tokens arr[str,128258]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  70:                  tokenizer.ggml.token_type arr[i32,128258]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  71:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  72:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  73:                tokenizer.ggml.eos_token_id u32              = 128256
llama_model_loader: - kv  74:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  75:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  76:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.58 GiB (4.89 BPW) 
time=2025-07-02T10:05:08.970+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
load: special tokens cache size = 258
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 14336
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 8B
print_info: model params     = 8.03 B
print_info: general.name     = Dolphin 3.0 Llama 3.1 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128258
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128256 '<|im_end|>'
print_info: EOT token        = 128256 '<|im_end|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end_of_text|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: EOG token        = 128256 '<|im_end|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
alloc_tensor_range: failed to initialize tensor output.weight
llama_model_load: error loading model: unable to allocate ROCm0 buffer
llama_model_load_from_file_impl: failed to load model
panic: unable to load model: /home/vincent/.ollama/models/blobs/sha256-1eee6953530837b2b17d61a4e6f71a5aa31c9714cfcf3cb141aa5c1972b5116b

goroutine 16 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0004f2360, {0x21, 0x0, 0x1, {0x0, 0x0, 0x0}, 0xc000599b00, 0x0}, {0x7ffdc9c08673, ...}, ...)
        /build/ollama/src/ollama/runner/llamarunner/runner.go:751 +0x395
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
        /build/ollama/src/ollama/runner/llamarunner/runner.go:848 +0xb57
time=2025-07-02T10:05:09.897+02:00 level=ERROR source=server.go:457 msg="llama runner terminated" error="exit status 2"
time=2025-07-02T10:05:09.973+02:00 level=ERROR source=sched.go:489 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate ROCm0 buffer\nllama_model_load_from_file_impl: failed to load model"
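For reference, the numbers in the log are internally consistent, and the tensor that failed to allocate is small relative to the reported free VRAM, which suggests the failure is not a genuine out-of-memory condition. A back-of-envelope sketch of the arithmetic (assuming `output.weight` is stored as q6_K, which packs roughly 6.5625 bits per weight):

```python
# Sanity-check the figures reported in the log above.
GIB = 1024 ** 3
MIB = 1024 ** 2

# Scheduler-reported free VRAM in bytes (from the sched.go log line).
available_bytes = 24_569_131_008
print(f"free VRAM: {available_bytes / GIB:.1f} GiB")  # matches "22.9 GiB" in the log

# Rough size of the tensor that failed to allocate: vocab_size x n_embd,
# assuming q6_K quantization at ~6.5625 bits per weight.
vocab_size, n_embd = 128_258, 4_096
bits_per_weight = 6.5625
output_weight_bytes = vocab_size * n_embd * bits_per_weight / 8
print(f"output.weight: {output_weight_bytes / MIB:.1f} MiB")  # matches "411.0 MiB"
```

So a ~411 MiB allocation failed while ~22.9 GiB was reportedly free, pointing at a ROCm runtime/driver issue rather than actual VRAM exhaustion.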

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.9.2

llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Dolphin 3.0 Llama 3.1 8B llama_model_loader: - kv 3: general.organization str = Cognitivecomputations llama_model_loader: - kv 4: general.basename str = Dolphin-3.0-Llama-3.1 llama_model_loader: - kv 5: general.size_label str = 8B llama_model_loader: - kv 6: general.license str = llama3.1 llama_model_loader: - kv 7: general.base_model.count u32 = 1 llama_model_loader: - kv 8: general.base_model.0.name str = Llama 3.1 8B llama_model_loader: - kv 9: general.base_model.0.organization str = Meta Llama llama_model_loader: - kv 10: general.base_model.0.repo_url str = https://huggingface.co/meta-llama/Lla... llama_model_loader: - kv 11: general.dataset.count u32 = 13 llama_model_loader: - kv 12: general.dataset.0.name str = Opc Sft Stage1 llama_model_loader: - kv 13: general.dataset.0.organization str = OpenCoder LLM llama_model_loader: - kv 14: general.dataset.0.repo_url str = https://huggingface.co/OpenCoder-LLM/... llama_model_loader: - kv 15: general.dataset.1.name str = Opc Sft Stage2 llama_model_loader: - kv 16: general.dataset.1.organization str = OpenCoder LLM llama_model_loader: - kv 17: general.dataset.1.repo_url str = https://huggingface.co/OpenCoder-LLM/... llama_model_loader: - kv 18: general.dataset.2.name str = Orca Agentinstruct 1M v1 llama_model_loader: - kv 19: general.dataset.2.version str = v1 llama_model_loader: - kv 20: general.dataset.2.organization str = Microsoft llama_model_loader: - kv 21: general.dataset.2.repo_url str = https://huggingface.co/microsoft/orca... llama_model_loader: - kv 22: general.dataset.3.name str = Orca Math Word Problems 200k llama_model_loader: - kv 23: general.dataset.3.organization str = Microsoft llama_model_loader: - kv 24: general.dataset.3.repo_url str = https://huggingface.co/microsoft/orca... 
llama_model_loader: - kv 25: general.dataset.4.name str = Hermes Function Calling v1 llama_model_loader: - kv 26: general.dataset.4.version str = v1 llama_model_loader: - kv 27: general.dataset.4.organization str = NousResearch llama_model_loader: - kv 28: general.dataset.4.repo_url str = https://huggingface.co/NousResearch/h... llama_model_loader: - kv 29: general.dataset.5.name str = NuminaMath CoT llama_model_loader: - kv 30: general.dataset.5.organization str = AI MO llama_model_loader: - kv 31: general.dataset.5.repo_url str = https://huggingface.co/AI-MO/NuminaMa... llama_model_loader: - kv 32: general.dataset.6.name str = NuminaMath TIR llama_model_loader: - kv 33: general.dataset.6.organization str = AI MO llama_model_loader: - kv 34: general.dataset.6.repo_url str = https://huggingface.co/AI-MO/NuminaMa... llama_model_loader: - kv 35: general.dataset.7.name str = Tulu 3 Sft Mixture llama_model_loader: - kv 36: general.dataset.7.organization str = Allenai llama_model_loader: - kv 37: general.dataset.7.repo_url str = https://huggingface.co/allenai/tulu-3... llama_model_loader: - kv 38: general.dataset.8.name str = Dolphin Coder llama_model_loader: - kv 39: general.dataset.8.organization str = Cognitivecomputations llama_model_loader: - kv 40: general.dataset.8.repo_url str = https://huggingface.co/cognitivecompu... llama_model_loader: - kv 41: general.dataset.9.name str = Smoltalk llama_model_loader: - kv 42: general.dataset.9.organization str = HuggingFaceTB llama_model_loader: - kv 43: general.dataset.9.repo_url str = https://huggingface.co/HuggingFaceTB/... llama_model_loader: - kv 44: general.dataset.10.name str = Samantha Data llama_model_loader: - kv 45: general.dataset.10.organization str = Cognitivecomputations llama_model_loader: - kv 46: general.dataset.10.repo_url str = https://huggingface.co/cognitivecompu... 
llama_model_loader: - kv 47: general.dataset.11.name str = CodeFeedback Filtered Instruction llama_model_loader: - kv 48: general.dataset.11.organization str = M A P llama_model_loader: - kv 49: general.dataset.11.repo_url str = https://huggingface.co/m-a-p/CodeFeed... llama_model_loader: - kv 50: general.dataset.12.name str = Code Feedback llama_model_loader: - kv 51: general.dataset.12.organization str = M A P llama_model_loader: - kv 52: general.dataset.12.repo_url str = https://huggingface.co/m-a-p/Code-Fee... llama_model_loader: - kv 53: general.languages arr[str,1] = ["en"] llama_model_loader: - kv 54: llama.block_count u32 = 32 llama_model_loader: - kv 55: llama.context_length u32 = 131072 llama_model_loader: - kv 56: llama.embedding_length u32 = 4096 llama_model_loader: - kv 57: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 58: llama.attention.head_count u32 = 32 llama_model_loader: - kv 59: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 60: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 61: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 62: llama.attention.key_length u32 = 128 llama_model_loader: - kv 63: llama.attention.value_length u32 = 128 llama_model_loader: - kv 64: general.file_type u32 = 15 llama_model_loader: - kv 65: llama.vocab_size u32 = 128258 llama_model_loader: - kv 66: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 67: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 68: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 69: tokenizer.ggml.tokens arr[str,128258] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 70: tokenizer.ggml.token_type arr[i32,128258] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 71: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... 
llama_model_loader: - kv 72: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 73: tokenizer.ggml.eos_token_id u32 = 128256 llama_model_loader: - kv 74: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 75: tokenizer.chat_template str = {% if not add_generation_prompt is de... llama_model_loader: - kv 76: general.quantization_version u32 = 2 llama_model_loader: - type f32: 66 tensors llama_model_loader: - type q4_K: 193 tensors llama_model_loader: - type q6_K: 33 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 4.58 GiB (4.89 BPW) time=2025-07-02T10:05:08.970+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model" load: special tokens cache size = 258 load: token to piece cache size = 0.7999 MB print_info: arch = llama print_info: vocab_only = 0 print_info: n_ctx_train = 131072 print_info: n_embd = 4096 print_info: n_layer = 32 print_info: n_head = 32 print_info: n_head_kv = 8 print_info: n_rot = 128 print_info: n_swa = 0 print_info: n_swa_pattern = 1 print_info: n_embd_head_k = 128 print_info: n_embd_head_v = 128 print_info: n_gqa = 4 print_info: n_embd_k_gqa = 1024 print_info: n_embd_v_gqa = 1024 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-05 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: f_attn_scale = 0.0e+00 print_info: n_ff = 14336 print_info: n_expert = 0 print_info: n_expert_used = 0 print_info: causal attn = 1 print_info: pooling type = 0 print_info: rope type = 0 print_info: rope scaling = linear print_info: freq_base_train = 500000.0 print_info: freq_scale_train = 1 print_info: n_ctx_orig_yarn = 131072 print_info: rope_finetuned = unknown print_info: ssm_d_conv = 0 print_info: ssm_d_inner = 0 print_info: ssm_d_state = 0 print_info: ssm_dt_rank = 0 print_info: ssm_dt_b_c_rms = 0 print_info: model type 
= 8B print_info: model params = 8.03 B print_info: general.name = Dolphin 3.0 Llama 3.1 8B print_info: vocab type = BPE print_info: n_vocab = 128258 print_info: n_merges = 280147 print_info: BOS token = 128000 '<|begin_of_text|>' print_info: EOS token = 128256 '<|im_end|>' print_info: EOT token = 128256 '<|im_end|>' print_info: EOM token = 128008 '<|eom_id|>' print_info: PAD token = 128001 '<|end_of_text|>' print_info: LF token = 198 'Ċ' print_info: EOG token = 128008 '<|eom_id|>' print_info: EOG token = 128009 '<|eot_id|>' print_info: EOG token = 128256 '<|im_end|>' print_info: max token length = 256 load_tensors: loading model tensors, this can take a while... (mmap = true) alloc_tensor_range: failed to initialize tensor output.weight llama_model_load: error loading model: unable to allocate ROCm0 buffer llama_model_load_from_file_impl: failed to load model panic: unable to load model: /home/vincent/.ollama/models/blobs/sha256-1eee6953530837b2b17d61a4e6f71a5aa31c9714cfcf3cb141aa5c1972b5116b goroutine 16 [running]: github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0004f2360, {0x21, 0x0, 0x1, {0x0, 0x0, 0x0}, 0xc000599b00, 0x0}, {0x7ffdc9c08673, ...}, ...) /build/ollama/src/ollama/runner/llamarunner/runner.go:751 +0x395 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 /build/ollama/src/ollama/runner/llamarunner/runner.go:848 +0xb57 time=2025-07-02T10:05:09.897+02:00 level=ERROR source=server.go:457 msg="llama runner terminated" error="exit status 2" time=2025-07-02T10:05:09.973+02:00 level=ERROR source=sched.go:489 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate ROCm0 buffer\nllama_model_load_from_file_impl: failed to load model" ``` ### OS Linux ### GPU AMD ### CPU AMD ### Ollama version 0.9.2
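Worth noting: the figures in the log above rule out a genuine out-of-memory condition. A quick sanity check on the scheduler's own numbers (values copied verbatim from the `sched.go` lines in the log):

```python
# Values reported by the scheduler in the log above.
available_bytes = 24_569_131_008  # available=24569131008 (bytes)
required_gib = 6.5                # required="6.5 GiB"

available_gib = available_bytes / 2**30
print(f"free VRAM: {available_gib:.1f} GiB, model needs: {required_gib} GiB")

# ~22.9 GiB free vs 6.5 GiB needed: the "unable to allocate ROCm0 buffer"
# failure is not real memory pressure, which is consistent with a broken
# runtime install rather than an oversized model.
assert available_gib > required_gib
```

So the allocation failure happens despite ~3.5x more free VRAM than required, which points at the ROCm runtime/driver state rather than the model size.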
GiteaMirror added the bug label 2026-04-22 15:37:14 -05:00

@Oxion commented on GitHub (Jul 5, 2025):

Same kind of error, but on a 5090; can't run any model (even 8B) after updating to 0.9.5.

Update: a full reinstall fixed the problem.
@cli-ish


@cli-ish commented on GitHub (Jul 6, 2025):

@Oxion Thanks, this indeed worked. I reinstalled `ollama` and `ollama-rocm`, and everything works again.

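For anyone landing here with the same error: a minimal sketch of the reinstall that resolved it, assuming the Arch Linux `ollama` and `ollama-rocm` packages (the `/build/ollama/` paths in the stack trace suggest Arch packaging); adjust the package manager and service commands for your distribution.

```shell
# Assumption: Arch Linux packaging of ollama (suggested by the
# /build/ollama/ paths in the stack trace). Reinstalling an already
# installed package with pacman -S replaces its files in place.
sudo pacman -S ollama ollama-rocm

# Restart the service so the reinstalled runner and ROCm backend are picked up.
sudo systemctl restart ollama

# Verify model loading works again (any small model will do).
ollama run dolphin3 "hello"
```

The model tag in the last line is illustrative; substitute whatever model previously failed to load.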
Reference: github-starred/ollama#33186