[GH-ISSUE #4831] ollama-rocm fails to load various models with open-source drivers #3054

Closed
opened 2026-04-12 13:29:13 -05:00 by GiteaMirror · 5 comments

Originally created by @ms178 on GitHub (Jun 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4831

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I've just installed ollama-rocm on CachyOS (https://archlinux.org/packages/extra/x86_64/ollama-rocm/) along with the required dependencies, but when loading the llama3-chatqa or llava-llama3 models, I get the following warnings/errors:

```
time=2024-06-05T10:56:17.756+02:00 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.279895057
time=2024-06-05T10:56:17.783+02:00 level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-06-05T10:56:17.785+02:00 level=INFO sourc
Error: llama runner process has terminated: signal: segmentation fault
```

GPU is a Radeon 6950 XT with 16 GB VRAM.

I was told by the CachyOS devs to use AMD's closed-source driver instead, but that is not an option for me. Also, while the closed-source driver is recommended, the warning doesn't state that it is a hard requirement. So either the message should be re-worded or there is a bug somewhere.
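For what it's worth, as far as I can tell /sys/module/amdgpu/version is only populated by AMD's packaged (out-of-tree) driver; with the in-tree amdgpu module that file is simply absent, so the warning by itself shouldn't be fatal. A rough sanity check of the open-source driver and ROCm visibility (assuming a standard ROCm layout under /opt/rocm) looks like:

```
# Is the in-tree amdgpu module loaded? (no /sys/module/amdgpu/version file is expected for it)
lsmod | grep amdgpu

# Device nodes the ROCm runner needs access to
ls -l /dev/kfd /dev/dri/renderD*

# Does ROCm itself see the GPU? (gfx1030 is expected for an RX 6950 XT)
/opt/rocm/bin/rocminfo | grep -i gfx
```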

CachyOS is at ROCm 6.0.2.

OS

Linux

GPU

AMD

CPU

Intel

Ollama version

0.1.41

GiteaMirror added the gpu, amd, bug labels 2026-04-12 13:29:13 -05:00

@dhiltgen commented on GitHub (Jun 18, 2024):

That log output looks short. Can you update to the latest version and try the following to gather more diagnostics, so we can understand where it's crashing?

```
sudo systemctl stop ollama
OLLAMA_DEBUG=1 ollama serve 2>&1 | tee server.log
```

Then in another terminal try to load a model, and if it crashes, share that server log.
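For example, triggering the load from a second terminal could be as simple as:

```
# in a second terminal, request one of the affected models to reproduce the crash
ollama run llama3-chatqa "hello"
```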


@ms178 commented on GitHub (Jun 19, 2024):

With the latest ollama-rocm on CachyOS, the issue still persists.

[server.log](https://github.com/user-attachments/files/15894014/server.log)


@dhiltgen commented on GitHub (Jun 19, 2024):

Excerpt:

```
time=2024-06-19T02:41:19.618+02:00 level=INFO source=types.go:71 msg="inference compute" id=0 library=rocm compute=gfx1030 driver=0.0 name=1002:73a5 total="16.0 GiB" available="16.0 GiB"
...
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon RX 6950 XT, compute capability 10.3, VMM: no
llm_load_tensors: ggml ctx size =    0.35 MiB
llm_load_tensors: offloading 27 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 28/28 layers to GPU
llm_load_tensors:      ROCm0 buffer size =  8376.27 MiB
llm_load_tensors:        CPU buffer size =   112.50 MiB
time=2024-06-19T02:45:11.484+02:00 level=DEBUG source=server.go:578 msg="model load progress 0.05"
time=2024-06-19T02:45:11.735+02:00 level=DEBUG source=server.go:578 msg="model load progress 0.25"
time=2024-06-19T02:45:11.989+02:00 level=DEBUG source=server.go:578 msg="model load progress 0.43"
time=2024-06-19T02:45:12.240+02:00 level=DEBUG source=server.go:578 msg="model load progress 0.63"
time=2024-06-19T02:45:12.491+02:00 level=DEBUG source=server.go:578 msg="model load progress 0.83"
time=2024-06-19T02:45:12.742+02:00 level=DEBUG source=server.go:578 msg="model load progress 0.99"
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 0.025
llama_kv_cache_init:      ROCm0 KV buffer size =   540.00 MiB
llama_new_context_with_model: KV self size  =  540.00 MiB, K (f16):  324.00 MiB, V (f16):  216.00 MiB
llama_new_context_with_model:  ROCm_Host  output buffer size =     0.40 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   212.00 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =     8.01 MiB
llama_new_context_with_model: graph nodes  = 1924
llama_new_context_with_model: graph splits = 2
[1718757913] warming up the model with an empty run
time=2024-06-19T02:45:13.243+02:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: signal: segmentation fault "
```
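So the model loads fully onto the GPU and the runner only segfaults during the warmup run. If systemd-coredump is enabled on CachyOS, a backtrace of the crashed runner process might help pin this down, for example:

```
# list recent crashes, then open the latest core in gdb and print a backtrace with `bt`
coredumpctl list | tail
coredumpctl gdb
```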

@dhiltgen commented on GitHub (Jul 3, 2024):

We've recently updated our build to ROCm v6.1.1. You might try uninstalling the older ROCm version on the host and using the version we install via our installation script (after removing ROCm on the host, re-run our install script and it should take care of downloading our ROCm automatically).
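On an Arch-based distro like CachyOS that would roughly look like the following (package names are illustrative; remove whichever ROCm/ollama packages are actually installed):

```
# remove the distro-provided packages (names are illustrative)
sudo pacman -Rns ollama-rocm rocm-hip-runtime

# reinstall via the official script, which downloads Ollama's bundled ROCm runtime
curl -fsSL https://ollama.com/install.sh | sh
```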


@ms178 commented on GitHub (Jul 24, 2024):

Sorry, I had some busy weeks, but I have good news: ollama-rocm now finally works on CachyOS when using `sudo chwd --ai_sdk -a pci nonfree 0300` to install all AI/ROCm-related packages.
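In case it helps anyone hitting the same thing, confirming that inference really runs on the GPU (and not a silent CPU fallback) can be done with something like:

```
# the PROCESSOR column should report GPU for the loaded model
ollama ps

# or check the server log for the ROCm device line
journalctl -u ollama | grep -i "inference compute"
```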

Reference: github-starred/ollama#3054