[GH-ISSUE #2442] Error: unable to initialize llm library Radeon card detected, but permissions not set up properly. Either run ollama as root, or add you user account to the render group. #47937

Closed
opened 2026-04-28 05:58:31 -05:00 by GiteaMirror · 4 comments

Originally created by @pladaria on GitHub (Feb 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2442

Originally assigned to: @dhiltgen on GitHub.

I'm unable to run ollama. My setup:

  • OS: Linux
  • CPU+GPU: AMD Ryzen 3 2200G with Radeon Vega Graphics
  • GPU: NVIDIA Tesla P40 - 24 GB VRAM
```
$ ollama serve
time=2024-02-10T12:21:38.851+01:00 level=INFO source=images.go:863 msg="total blobs: 0"
time=2024-02-10T12:21:38.851+01:00 level=INFO source=images.go:870 msg="total unused blobs removed: 0"
time=2024-02-10T12:21:38.851+01:00 level=INFO source=routes.go:999 msg="Listening on 127.0.0.1:11434 (version 0.1.24)"
time=2024-02-10T12:21:38.851+01:00 level=INFO source=payload_common.go:106 msg="Extracting dynamic libraries..."
Error: unable to initialize llm library Radeon card detected, but permissions not set up properly.  Either run ollama as root, or add you user account to the render group.
```

Same result using `sudo` or adding myself to the `render` group.

Also tried:

```
OLLAMA_LLM_LIBRARY="cuda_v11" ollama serve
OLLAMA_LLM_LIBRARY="cpu_avx2" ollama serve
```

with the same result.
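Before forcing a library with `OLLAMA_LLM_LIBRARY`, it can help to confirm what the startup permission check is actually looking at. A minimal diagnostic sketch (device paths are the usual Linux defaults; whether they exist depends on your system):

```shell
# Is the current user a member of the render group? (0 means no)
in_render=$(id -nG | tr ' ' '\n' | grep -cx render || true)
echo "member of render group: $in_render"

# ROCm needs read/write access to /dev/kfd and /dev/dri/renderD*;
# list them if present so permissions and group ownership are visible.
ls -l /dev/kfd /dev/dri/renderD* 2>/dev/null || echo "no AMD KFD/DRI device nodes found"

# To join the group (only takes effect after logging out and back in):
#   sudo usermod -aG render "$USER"
```

Note the last step is commented out: `usermod` changes system state and the new group membership is not visible to already-running sessions.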

GiteaMirror added the question label 2026-04-28 05:58:31 -05:00

@unclemcz commented on GitHub (Feb 28, 2024):

https://github.com/ollama/ollama/issues/2392#issuecomment-1968245555


@user414 commented on GitHub (Feb 28, 2024):

We had a similar issue, except that adding ourselves to the `render` group worked. Note that on Linux, group changes only take full effect after you log out and log back in (or reboot the system). If adding users to a group is not possible for whatever reason, changing the udev permissions is also an option; see here:

https://github.com/ROCm/ROCm/issues/1798#issuecomment-1849112550
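The udev alternative boils down to a one-line rule for the `kfd` device node. A sketch (the rule contents and file name are assumptions based on the linked ROCm issue; adjust group and mode to your setup — the example writes a local copy so it is runnable without root):

```shell
# Hypothetical rule: let the render group read/write /dev/kfd.
rule='KERNEL=="kfd", GROUP="render", MODE="0660"'

# The real file would live in /etc/udev/rules.d/; write a local copy here.
echo "$rule" > ./70-amdgpu.rules
cat ./70-amdgpu.rules

# On a real system you would then reload the rules:
#   sudo udevadm control --reload-rules && sudo udevadm trigger
```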

During our investigation we realized this was caused by not being able to read /dev/kfd. We also found that some people did not have this file at all, which is strange. That did not happen to us, but it could be due to a very old kernel (the first kfd commit landed in the kernel around 2014) or perhaps a modded amdgpu driver.

Also note that to get AMD GPU acceleration you need the ROCm framework (AMD's equivalent of NVIDIA's CUDA) installed. Here are the instructions for Ubuntu and Ubuntu derivatives:

https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/native-install/ubuntu.html


@pladaria commented on GitHub (Feb 28, 2024):

I want to use the NVIDIA card, not the AMD one.


@dhiltgen commented on GitHub (Mar 11, 2024):

@pladaria we don't currently optimize for hybrid setups with mixed vendor GPUs.

That said, you're in luck: we try NVIDIA first, and if that works we use it without proceeding to the AMD GPU. You will need to add the ollama user to the `render` group to get past the AMD GPU permission check we run at startup, but after that it should use your NVIDIA card. If that's not working, please run the server with `OLLAMA_DEBUG=1` and share the server log so we can understand why it isn't working properly.

Please make sure to install the latest version as well.
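The debug-log request above can be sketched as follows (the `tee` filename is an arbitrary choice, and `ollama` itself is assumed to be installed, so the serve line is left commented):

```shell
# Run the server with debug logging and keep a copy of the log to attach:
#   OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama-debug.log

# The flag is an ordinary environment variable read by the server process;
# a quick check that a child process sees it when set this way:
out=$(env OLLAMA_DEBUG=1 sh -c 'printf %s "$OLLAMA_DEBUG"')
echo "debug flag seen by child: $out"
```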

Reference: github-starred/ollama#47937