[GH-ISSUE #4173] AMD GPUs mistaken as Nvidia GPUs #28354

Closed
opened 2026-04-22 06:28:58 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @eliranwong on GitHub (May 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4173

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

My device runs Ubuntu with dual AMD GPUs, both are RX 7900 XTX.

I set up the GPUs with ROCm. I keep a copy of my setup at https://github.com/eliranwong/MultiAMDGPU_AIDev_Ubuntu

I just tried installing ollama. Surprisingly, the last line reads "NVIDIA GPU installed."

```
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> NVIDIA GPU installed.
```

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.1.33

GiteaMirror added the `gpu`, `amd`, `bug` labels 2026-04-22 06:28:58 -05:00

@dhiltgen commented on GitHub (May 5, 2024):

The install script reports this if `nvidia-smi` is present, so my suspicion is that you previously installed it. I don't see `Installing NVIDIA repository` in the output you shared, so we didn't install the CUDA drivers. While this log message is a little misleading/confusing, I don't think the install script actually did anything incorrectly.

Does Ollama work correctly on your Radeon GPUs?
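A hypothetical sketch (not the actual `install.sh`) of the kind of check described above: the script only tests whether an `nvidia-smi` binary exists, so a leftover CUDA userland on an all-AMD box is enough to trigger the message. The search directory is a parameter here purely so the sketch is self-contained.

```shell
#!/bin/sh
# detect_nvidia DIR — hypothetical sketch of a presence-based check.
# It looks for an executable nvidia-smi in DIR (standing in for PATH),
# which is why stale NVIDIA tooling produces a false positive.
detect_nvidia() {
    if [ -x "$1/nvidia-smi" ]; then
        echo "NVIDIA GPU installed."
    else
        echo "no NVIDIA tooling found"
    fi
}
```

A more robust check would inspect the actual hardware (e.g. PCI vendor IDs) rather than tool presence.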


@eliranwong commented on GitHub (May 5, 2024):

I can see Ollama is working, but how can I tell whether it is utilising GPU acceleration?


@dhiltgen commented on GitHub (May 5, 2024):

We don't yet have a way to see in the CLI, but you can check the server logs.

For example...

```
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
```
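With the systemd install the server logs go to the journal, so one way to look for these lines is `journalctl -u ollama --no-pager | grep 'layers to GPU'` (assumption: default service name `ollama`). The same filter, written as a function over a saved log file so it works on any log copy:

```shell
#!/bin/sh
# gpu_offload_lines LOGFILE — print only the llama.cpp offload summary
# lines, which indicate how many layers actually landed on the GPU.
gpu_offload_lines() {
    grep 'layers to GPU' "$1"
}
```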

@eliranwong commented on GitHub (May 6, 2024):

It looks like it is working, with just one minor issue: 'error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"'.

```
level=INFO source=gpu.go:96 msg="Detecting GPUs"
level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
level=WARN source=amd_linux.go:49 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
level=INFO source=amd_linux.go:217 msg="amdgpu memory" gpu=0 total="24560.0 MiB"
level=INFO source=amd_linux.go:218 msg="amdgpu memory" gpu=0 available="24560.0 MiB"
level=INFO source=amd_linux.go:276 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
level=INFO source=amd_linux.go:217 msg="amdgpu memory" gpu=1 total="24560.0 MiB"
level=INFO source=amd_linux.go:218 msg="amdgpu memory" gpu=1 available="24560.0 MiB"
level=INFO source=amd_linux.go:276 msg="amdgpu is supported" gpu=1 gpu_type=gfx1100
level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=49 memory.available="24560.0 MiB" memory.required.full="38968.0 MiB" memory.required.partial="24447.5 MiB" memory.required.kv="640.0 MiB" memory.weights.total="37547.0 MiB" memory.weights.repeating="36725.0 MiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="324.0 MiB" memory.graph.partial="1104.5 MiB"
level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=49 memory.available="24560.0 MiB" memory.required.full="38968.0 MiB" memory.required.partial="24447.5 MiB" memory.required.kv="640.0 MiB" memory.weights.total="37547.0 MiB" memory.weights.repeating="36725.0 MiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="324.0 MiB" memory.graph.partial="1104.5 MiB"
level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=81 memory.available="49120.0 MiB" memory.required.full="39292.0 MiB" memory.required.partial="39292.0 MiB" memory.required.kv="640.0 MiB" memory.weights.total="37547.0 MiB" memory.weights.repeating="36725.0 MiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="648.0 MiB" memory.graph.partial="2208.9 MiB"
level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=81 memory.available="49120.0 MiB" memory.required.full="39292.0 MiB" memory.required.partial="39292.0 MiB" memory.required.kv="640.0 MiB" memory.weights.total="37547.0 MiB" memory.weights.repeating="36725.0 MiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="648.0 MiB" memory.graph.partial="2208.9 MiB"
```
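The warning only says that `/sys/module/amdgpu/version` does not exist. That file is typically created by AMD's out-of-tree driver package; the in-kernel `amdgpu` module does not provide it, so its absence on a stock kernel is harmless. A small hedged helper to check either way (the path is a parameter only so the sketch is testable):

```shell
#!/bin/sh
# amdgpu_driver_version [FILE] — print the driver version from the sysfs
# file if present, else report that the (in-kernel) module has no version
# file. Defaults to the real sysfs path.
amdgpu_driver_version() {
    f="${1:-/sys/module/amdgpu/version}"
    if [ -f "$f" ]; then
        cat "$f"
    else
        echo "no version file"
    fi
}
```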


@eabase commented on GitHub (May 6, 2024):

@dhiltgen

> The install script reports this if `nvidia-smi` is present,

That's a very poor method; you could at least have parsed its output.

Also, I can see use cases where people have both AMD & Nvidia GPUs...

Why not check with:

```powershell
wmic path win32_VideoController get name

# or

(Get-CimInstance -ClassName CIM_VideoController) | select Name, DeviceId
```
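For a Linux host like the reporter's, a comparable hardware-based check can read PCI vendor IDs from sysfs (`0x1002` is AMD, `0x10de` is NVIDIA). The sysfs root is a parameter here only so the sketch is testable; on a real system pass `/`.

```shell
#!/bin/sh
# gpu_vendors [ROOT] — print the vendor of each DRM card found under
# ROOT/sys/class/drm, based on the PCI vendor ID in sysfs.
gpu_vendors() {
    root="${1:-/}"
    for dev in "$root"/sys/class/drm/card*/device/vendor; do
        [ -f "$dev" ] || continue
        case "$(cat "$dev")" in
            0x1002) echo "AMD" ;;
            0x10de) echo "NVIDIA" ;;
            *)      echo "other" ;;
        esac
    done
}
```

Unlike the `nvidia-smi` presence test, this reports what is actually plugged in, and it naturally handles mixed AMD/NVIDIA machines by listing each card.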

@eliranwong commented on GitHub (May 6, 2024):

> @dhiltgen
>
> > The install script reports this if `nvidia-smi` is present,
>
> That's a very poor method; you could at least have parsed its output.
>
> Also, I can see use cases where people have both AMD & Nvidia GPUs...
>
> Why not check with:
>
> ```powershell
> wmic path win32_VideoController get name
>
> # or
>
> (Get-CimInstance -ClassName CIM_VideoController) | select Name, DeviceId
> ```

I guess Win32 is irrelevant to my case; I use Ubuntu. Anyway, I removed the Nvidia packages, and the installation is now clean, but there is a bigger issue when running:

![amdgpu_ollama](https://github.com/eliranwong/freegenius/assets/25262722/c985062b-da23-4879-8d55-76b16e1017f3)

It appears that running Ollama with GPUs is very unstable on my device. It crashes a lot. I use llama.cpp to run the same model files directly without an issue.


@dhiltgen commented on GitHub (May 6, 2024):

@eliranwong can you share your server log so we can see where it's having problems?

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md


@eliranwong commented on GitHub (May 7, 2024):

My screen goes green not long after I start Ollama. I just tried again; the log is below:

```
Started Ollama Service.
ollama[2103]: time=2024-05-07T13:25:37.108+01:00 level=INFO source=images.go:828 msg="total blobs: 22"
ollama[2103]: time=2024-05-07T13:25:37.109+01:00 level=INFO source=images.go:835 msg="total unused blobs removed: 0"
ollama[2103]: time=2024-05-07T13:25:37.109+01:00 level=INFO source=routes.go:1071 msg="Listening on 127.0.0.1:11434 (version 0.1.33)"
ollama[2103]: time=2024-05-07T13:25:37.110+01:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama953288552/runners
ollama[2103]: time=2024-05-07T13:25:38.831+01:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
ollama[2103]: time=2024-05-07T13:25:38.831+01:00 level=INFO source=gpu.go:96 msg="Detecting GPUs"
ollama[2103]: time=2024-05-07T13:25:38.833+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama[2103]: time=2024-05-07T13:25:38.833+01:00 level=INFO source=amd_linux.go:46 msg="AMD Driver: 6.3.6"
ollama[2103]: time=2024-05-07T13:25:38.834+01:00 level=INFO source=amd_linux.go:217 msg="amdgpu memory" gpu=0 total="24560.0 MiB"
ollama[2103]: time=2024-05-07T13:25:38.834+01:00 level=INFO source=amd_linux.go:218 msg="amdgpu memory" gpu=0 available="24533.4 MiB"
ollama[2103]: time=2024-05-07T13:25:38.843+01:00 level=INFO source=amd_linux.go:276 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
ollama[2103]: time=2024-05-07T13:25:38.843+01:00 level=INFO source=amd_linux.go:217 msg="amdgpu memory" gpu=1 total="24560.0 MiB"
ollama[2103]: time=2024-05-07T13:25:38.843+01:00 level=INFO source=amd_linux.go:218 msg="amdgpu memory" gpu=1 available="24434.4 MiB"
ollama[2103]: time=2024-05-07T13:25:38.843+01:00 level=INFO source=amd_linux.go:276 msg="amdgpu is supported" gpu=1 gpu_type=gfx1100
```


@dhiltgen commented on GitHub (May 7, 2024):

What you describe sounds like a driver bug. Please make sure you're running the latest drivers from AMD.


@eliranwong commented on GitHub (May 7, 2024):

I load and swap the same models with llama.cpp without an issue. Same system, same driver.

Reference: github-starred/ollama#28354