[GH-ISSUE #2718] Doc permission requirements for Rocm Docker Image to access /dev/dri and /dev/kfd #27391

Closed
opened 2026-04-22 04:42:55 -05:00 by GiteaMirror · 5 comments

Originally created by @3lpsy on GitHub (Feb 24, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2718

Originally assigned to: @dhiltgen on GitHub.

TLDR: The 0.1.27-rocm image cannot find the correct versions of the ROCm libraries.

I start the docker image using the following command:

```
sudo -H -u ollama /usr/bin/podman --runtime /usr/bin/crun run --gpus all --rm -v /usr/share/ollama/.ollama:/root/.ollama -p 11434:11434 --name ollama 'ollama/ollama:0.1.27-rocm'
```

Ollama appears to identify the AMD GPU without issue:

```
...omitted for brevity...
msg="Extracting dynamic libraries..."
time=2024-02-24T00:19:07.462Z level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu_avx cuda_v11 rocm_v5 cpu cpu_avx2 rocm_v6]"
time=2024-02-24T00:19:07.462Z level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-24T00:19:07.462Z level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-24T00:19:07.486Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-02-24T00:19:07.486Z level=INFO source=gpu.go:265 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-24T00:19:07.492Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/opt/rocm/lib/librocm_smi64.so.5.0.50701 /opt/rocm-5.7.1/lib/librocm_smi64.so.5.0.50701]"
time=2024-02-24T00:19:07.504Z level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-24T00:19:07.504Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
```

Then I attempt to run ollama from the client via:

```
echo 'test' | ollama run llama2
```

And observe the following errors:

```
time=2024-02-24T00:19:40.892Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1271109174/rocm_v5/libext_server.so"
time=2024-02-24T00:19:40.892Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-02-24T00:19:40.892Z level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama1271109174/rocm_v5/libext_server.so  Unable to init GPU: invalid device ordinal"
time=2024-02-24T00:19:40.893Z level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama1271109174/rocm_v6/libext_server.so  Unable to load dynamic library: Unable to load dynamic server library: libhipblas.so.2: cannot open shared object file: No such file or directory"
time=2024-02-24T00:19:40.894Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1271109174/cpu_avx2/libext_server.so"
time=2024-02-24T00:19:40.894Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
...omitted for brevity...
```

Note the error:

```
Unable to load dynamic library: Unable to load dynamic server library: libhipblas.so.2
```

If I grab a shell in the image, I can see that the `libhipblas.so.2` library does not exist; only the `.1` and `.0` versions do:

```
$ ls /opt/rocm/lib/libhipblas* | cat
/opt/rocm/lib/libhipblaslt.so
/opt/rocm/lib/libhipblaslt.so.0
/opt/rocm/lib/libhipblaslt.so.0.3.50701
/opt/rocm/lib/libhipblas.so
/opt/rocm/lib/libhipblas.so.1
/opt/rocm/lib/libhipblas.so.1.1.0.50701
```

I ran into a similar issue with a mismatch between library versions when running outside of docker, which I was able to mitigate as described here: https://github.com/ollama/ollama/issues/2685#issuecomment-1961666228 (TLDR: just symlinking the new version names to the old versions). I believe that even if I fixed the libhipblas.so issue, the other libraries would also need to be fixed, as they were in the linked comment. Additionally, the issue here appears to be the opposite of the scenario described in the linked comment, in that here the old versions exist but ollama wants the new versions (I believe).
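
For reference, a minimal sketch of that symlink mitigation, assuming the library layout shown above and that the ABI is actually compatible (which is not guaranteed):

```bash
# Inside the container: alias the soname the rocm_v6 runner requests to the
# v5 library that actually ships in the image. A workaround, not a fix.
ln -s /opt/rocm/lib/libhipblas.so.1 /opt/rocm/lib/libhipblas.so.2
# Repeat for any other library reported as "cannot open shared object file".
```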

I've looked at the `Dockerfile` but don't quite understand how the 0.1.27-rocm image is built, so I'm not able to offer guidance on a fix.


@GabrielHaeusler commented on GitHub (Feb 24, 2024):

Can confirm this issue.
I similarly use podman instead of docker, but with a rootless setup:

```bash
podman run -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:0.1.27-rocm
```

The GPU is definitely recognised by Ollama, and I can otherwise corroborate the findings of @3lpsy, including the existing `libhipblas.so.*` libraries.

**Update:**
After looking into it a bit more and enabling the Ollama debug output, I found that the missing `libhipblas.so.2` library likely isn't the issue, as that error pertains to *rocm_v6*, while both @3lpsy and I are running rocm-5.7.1.
However, in the debug output I found these messages instead:

```bash
level=DEBUG source=gpu.go:158 msg="error looking up amd driver version: %s" !BADKEY="amdgpu file stat error: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
level=DEBUG source=amd.go:76 msg="malformed gfx_target_version 0"
```

My first reaction was to make sure I had exported the `HSA_OVERRIDE_GFX_VERSION` env variable and to also pass this variable to the container, though this apparently changed nothing.

A second thought was that the container somehow might not have the proper access rights to the system AMD driver, but at that point this goes over my head as well.
Anyhow, I hope this helps.

**Update 2:**
I got it to work. The problem was the SELinux config on my Fedora system, as well as missing privileges for the container to access the *Direct Rendering Infrastructure (DRI)* and the *AMD Kernel Fusion Driver (KFD)*. The way to fix this for me was to run:

```bash
sudo setsebool container_use_devices=true
```

which allows podman containers to access system devices otherwise denied by SELinux (see [[0]](https://docs.podman.io/en/latest/markdown/podman-run.1.html)).
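
As a side note (standard SELinux tooling, not specific to ollama), the boolean can be inspected first and persisted across reboots with `-P`:

```bash
getsebool container_use_devices               # inspect the current value
sudo setsebool -P container_use_devices=true  # -P makes it survive reboots
```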
Additionally, I had to pass the devices corresponding to the DRI and KFD to the container. To do so I added `--device=/dev/kfd:/dev/kfd` and `--device=/dev/dri:/dev/dri` to the `podman run` command, which now looks like:

```bash
podman run --rm --env "OLLAMA_DEBUG='1'" --env "HSA_OVERRIDE_GFX_VERSION" --device=/dev/kfd:/dev/kfd --device=/dev/dri:/dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:0.1.27-rocm
```

Disclaimer: I am not certain whether this is a standard requirement when using rootless podman, as I only came across it today, at [[1]](https://github.com/prawilny/ollama-rocm-docker). I dabbled with ROCm podman containers before without facing such requirements, but this might have been introduced in newer versions of ROCm.
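
For docker users, the equivalent device passthrough should look roughly like this (a sketch, not verified in this thread, which only used podman):

```bash
# Hypothetical docker equivalent of the podman command above: pass the KFD
# and DRI device nodes straight through to the container.
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:0.1.27-rocm
```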


@3lpsy commented on GitHub (Feb 25, 2024):

Following @GabrielHaeusler's approach (thanks for the info!), I was able to successfully get ollama in podman to use the GPU via the following command:

```
sudo -u ollama -H /usr/bin/podman --runtime /usr/bin/crun run --rm -v /usr/share/ollama/.ollama:/root/.ollama -p 11434:11434 --name ollama --device=/dev/kfd:/dev/kfd --device=/dev/dri:/dev/dri 'ollama/ollama:0.1.27-rocm'
```

I tried all variants of using `HSA_OVERRIDE_GFX_VERSION`, passing in `kfd`, and passing in `dri`. I would have thought `--gpus all` would handle access to these devices, but I guess not, so I removed `--gpus all` in the command above. `HSA_OVERRIDE_GFX_VERSION` also did not appear to be required for my setup.
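
A quick way to sanity-check the device passthrough (a sketch, assuming the image's default entrypoint is the ollama binary, hence the `--entrypoint` override):

```bash
# Host side: check the device nodes exist and note their owning group.
ls -l /dev/kfd /dev/dri/renderD*
# Container side: confirm both devices were actually passed through.
podman run --rm --device=/dev/kfd:/dev/kfd --device=/dev/dri:/dev/dri \
  --entrypoint ls 'ollama/ollama:0.1.27-rocm' -l /dev/kfd /dev/dri
```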

I should probably have noted that I'm using `linux-hardened` with a `6900xt`, but not SELinux.


@dhiltgen commented on GitHub (Feb 26, 2024):

To clarify some of the error messages, our current build uses the v5 ROCm image base, so the actual error was

```
time=2024-02-24T00:19:40.892Z level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama1271109174/rocm_v5/libext_server.so  Unable to init GPU: invalid device ordinal"
```

Since v5 fails, it attempts the v6 variant, and the second failure to find the v6 dependencies is ~expected since the base image is v5, not v6.

It sounds like you got it working with permission fixes to expose the drivers. I'll keep this issue open to track improving the docs for radeon container usage.


@craigcabrey commented on GitHub (Mar 16, 2024):

I'm seeing behavior that falls back to CPU no matter what. Granted, I am *definitely* in unsupported territory, as I want to see if I can get inference to run on the 760M inside this 7640HS. `/dev/kfd` is exposed and ollama seems to recognize it, but it fails with the same `invalid device ordinal` as above:

```
$ podman run --rm --env HSA_OVERRIDE_GFX_VERSION=gfx1103 --device /dev/kfd --device /dev/dri -p 11434:11434 --name ollama ollama/ollama:rocm
[snip]
time=2024-03-16T02:53:16.858Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1103]"
time=2024-03-16T02:53:16.859Z level=INFO source=amd_linux.go:246 msg="[0] amdgpu totalMemory 2048M"
time=2024-03-16T02:53:16.859Z level=INFO source=amd_linux.go:247 msg="[0] amdgpu freeMemory  2048M"
time=2024-03-16T02:53:16.859Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-16T02:53:16.859Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-03-16T02:53:16.859Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1103]"
time=2024-03-16T02:53:16.859Z level=INFO source=amd_linux.go:246 msg="[0] amdgpu totalMemory 2048M"
time=2024-03-16T02:53:16.859Z level=INFO source=amd_linux.go:247 msg="[0] amdgpu freeMemory  2048M"
time=2024-03-16T02:53:16.859Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-16T02:53:16.888Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama2880729444/runners/rocm_v60000/libext_server.so"
time=2024-03-16T02:53:16.888Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-03-16T02:53:16.900Z level=WARN source=llm.go:170 msg="Failed to load dynamic library /tmp/ollama2880729444/runners/rocm_v60000/libext_server.so  Unable to init GPU: invalid device ordinal"
time=2024-03-16T02:53:16.901Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama2880729444/runners/cpu_avx2/libext_server.so"
time=2024-03-16T02:53:16.901Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
```
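
The repeated `/sys/module/amdgpu/version` warning can be checked from the host; a quick sketch (the version file is typically only present with the DKMS/amdgpu-pro driver, so its absence with the in-tree driver is usually harmless):

```bash
# On the host: is amdgpu loaded, and does the version file exist?
lsmod | grep '^amdgpu'
cat /sys/module/amdgpu/version 2>/dev/null \
  || echo "no version file (in-tree amdgpu driver; usually fine)"
```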

@craigcabrey commented on GitHub (Mar 19, 2024):

Got it fully running! The key for me was `sudo setsebool container_use_devices=1` and `HSA_OVERRIDE_GFX_VERSION=11.0.0`:

```
podman run --rm --env HSA_OVERRIDE_GFX_VERSION=11.0.0 --device /dev/kfd --device /dev/dri -p 11434:11434 --name ollama ollama/ollama:rocm
```

```
========================================= ROCm System Management Interface =========================================
=================================================== Concise Info ===================================================
Device  [Model : Revision]    Temp    Power     Partitions      SCLK  MCLK     Fan  Perf  PwrCap       VRAM%  GPU%
        Name (20 chars)       (Edge)  (Socket)  (Mem, Compute)
====================================================================================================================
0       [0xb002 : 0xc3]       63.0°C  53.055W   N/A, N/A        None  2800Mhz  0%   auto  Unsupported   64%   99%
        Phoenix1
====================================================================================================================
=============================================== End of ROCm SMI Log ================================================
```
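
For context on why `11.0.0` works here (my reading, not stated in the thread): the `gfxAABC` name encodes ISA version `AA.B.C`, so gfx1103 is 11.0.3, for which ROCm ships no prebuilt kernels; the override makes the runtime treat the GPU as the nearest officially built target.

```bash
# gfx1103 (Radeon 760M) -> ISA 11.0.3, which has no prebuilt ROCm kernels.
# Claim the closest built target, gfx1100 -> 11.0.0, instead. This is an
# unsupported workaround; behavior may vary by ROCm release.
export HSA_OVERRIDE_GFX_VERSION=11.0.0
```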