[GH-ISSUE #8475] ollama/ollama:rocm not detecting AMD GPU being passed in #51967

Closed
opened 2026-04-28 21:25:34 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @Vilchis-Joshua on GitHub (Jan 18, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8475

What is the issue?

Overview

I've been trying to add Ollama/Open WebUI to my home lab and I've run into an issue I cannot figure out. ROCm was finally released last month for my Linux distro; I have it installed and can run Ollama on the GPU on my host machine. When I try to move to Docker, however, I cannot get it to work. I will provide what I have, but please ask any clarifying questions.

Components

Host: Debian 6.1.124-1
CPU: I7-12700K
GPU: Radeon 6900 XT

Docker compose file

services:
  ollama:
    image: ollama/ollama:rocm
    container_name: ollama
    hostname: ollama
    privileged: true
    volumes:
      - ollama:/root/.ollama
    environment:
      HSA_OVERRIDE_GFX_VERSION: 10.3.0
      AMD_SERIALIZE_KERNEL: 3
      HIP_VISIBLE_DEVICES: 0
      OLLAMA_DEBUG: 1
      AMD_LOG_LEVEL: 3
    # ports:
    #   - :11434
    restart: unless-stopped
    networks:
      - npm
    devices:
      - /dev/kfd
      - /dev/dri
    group_add:
      - 105
      - 44
    security_opt:
      - seccomp:unconfined
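One thing worth double-checking in the compose file above: the group_add entries (105 and 44) must match the host's render and video GIDs, which vary by distro. A quick way to confirm them (a sketch; the renderD128 path assumes a single GPU and may differ):

```shell
# Look up the GIDs that own the compute and render device nodes on the host.
# These numbers are what belong in the compose file's group_add list
# (group names won't resolve inside the container image).
kfd_gid=$(stat -c '%g' /dev/kfd 2>/dev/null || echo "missing")
render_gid=$(stat -c '%g' /dev/dri/renderD128 2>/dev/null || echo "missing")
echo "kfd GID:    $kfd_gid"
echo "render GID: $render_gid"
```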

Logs & Outputs

No errors that I can really see. Here are the startup logs from Docker:

2025-01-17 20:40:33 2025/01/18 01:40:33 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES:0 HSA_OVERRIDE_GFX_VERSION:10.3.0 HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.654Z level=INFO source=images.go:432 msg="total blobs: 0"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.655Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.655Z level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.7-0-ga420a45-dirty)"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.655Z level=DEBUG source=common.go:80 msg="runners located" dir=/usr/lib/ollama/runners
2025-01-17 20:40:33 time=2025-01-18T01:40:33.655Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
2025-01-17 20:40:33 time=2025-01-18T01:40:33.655Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
2025-01-17 20:40:33 time=2025-01-18T01:40:33.655Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/lib/ollama/runners/rocm_avx/ollama_llama_server
2025-01-17 20:40:33 time=2025-01-18T01:40:33.655Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 rocm_avx]"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.655Z level=DEBUG source=routes.go:1268 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.655Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.656Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.656Z level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.656Z level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so*
2025-01-17 20:40:33 time=2025-01-18T01:40:33.656Z level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/lib/ollama/libcuda.so* /libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.657Z level=DEBUG source=gpu.go:576 msg="discovered GPU libraries" paths=[]
2025-01-17 20:40:33 time=2025-01-18T01:40:33.657Z level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcudart.so*
2025-01-17 20:40:33 time=2025-01-18T01:40:33.657Z level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/lib/ollama/libcudart.so* /libcudart.so* /usr/lib/ollama/libcudart.so* /usr/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.657Z level=DEBUG source=gpu.go:576 msg="discovered GPU libraries" paths=[]
2025-01-17 20:40:33 time=2025-01-18T01:40:33.657Z level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.657Z level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
2025-01-17 20:40:33 time=2025-01-18T01:40:33.657Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="50.7 GiB" available="35.1 GiB"

ls -ld /sys/module/amdgpu displays: drwxr-xr-x 7 root root 0 Jan 17 12:23 /sys/module/amdgpu
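The log above reports "amdgpu driver not detected /sys/module/amdgpu" even though that directory exists, so it is worth running the same check both outside and inside the container. A minimal sketch (the container name `ollama` is taken from the compose file above):

```shell
# Report whether the amdgpu kernel module directory is visible here.
check_amdgpu() {
  if [ -d /sys/module/amdgpu ]; then
    echo "amdgpu: visible"
  else
    echo "amdgpu: not visible"
  fi
}
check_amdgpu
# Run the same check from inside the running container:
#   docker exec ollama sh -c 'test -d /sys/module/amdgpu && echo visible || echo "not visible"'
```

If the host says "visible" but the container does not, the problem is the container's view of /sys rather than the GPU passthrough itself.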

OS

Linux, Docker

GPU

AMD

CPU

Intel

Ollama version

0.5.7-0-ga420a45-dirty

GiteaMirror added the bug label 2026-04-28 21:25:34 -05:00

@MaximPerry commented on GitHub (Jan 23, 2025):

Hey @Vilchis-Joshua, I seem to be having the same problem. I recently updated my Ollama Docker Container on my CasaOS home server (Ubuntu Server 22.04) from 0.1.34 to 0.5.7 to be able to run the latest LLMs like Llama 3.2.

ROCm worked fine with 0.1.34, but after updating to 0.5.7 (which for some reason installs 0.5.7-0-ga420a45-dirty, not a clean 0.5.7), Ollama detects my GPU and ROCm but still runs on the CPU anyway. A clean re-install doesn't change anything.

Have you figured out a way to make it work?


@Mario4272 commented on GitHub (Jan 24, 2025):

Seeing this issue as well
0.5.7-0-ga420a45-dirty


@Vilchis-Joshua commented on GitHub (Feb 9, 2025):

> Hey @Vilchis-Joshua, I seem to be having the same problem. I recently updated my Ollama Docker Container on my CasaOS home server (Ubuntu Server 22.04) from 0.1.34 to 0.5.7 to be able to run the latest LLMs like Llama 3.2.
>
> ROCm worked fine with 0.1.34, but after updating to 0.5.7 (which for some reason installs 0.5.7-0-ga420a45-dirty, not a clean 0.5.7), Ollama detects my GPU and ROCm but still runs on the CPU anyway. A clean re-install doesn't do anything.
>
> Have you figured out a way to make it work?

Sorry, I have not been able to find a good way to make this work. In the interim, I've had to use a Docker feature that I don't fully understand:
"host.docker.internal:<port#>"
It seems to bypass the Docker network and use the host network instead.

Pretty frustrating, but at least it doesn't mean I can't use it at all. I've been hoping more people would run into this issue, but that doesn't look to be the case. I need to try to make it work again, though.
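For reference, the workaround described above is usually wired up in Compose like this (a sketch, not taken from the original setup; `host-gateway` requires Docker 20.10 or newer, and `open-webui` is a hypothetical service name):

```yaml
services:
  open-webui:
    extra_hosts:
      # Maps host.docker.internal to the host's gateway address so the
      # container can reach services listening on the host network
      # (e.g. an Ollama instance running directly on the host).
      - "host.docker.internal:host-gateway"
```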


@salmanmarvasti commented on GitHub (Feb 21, 2025):

It doesn't work for me; I get this:

time=2025-02-21T03:46:05.907-05:00 level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-02-21T03:46:05.907-05:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-21T03:46:05.908-05:00 level=INFO source=routes.go:1237 msg="Listening on 127.0.0.1:11434 (version 0.5.11)"
time=2025-02-21T03:46:05.909-05:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-21T03:46:07.302-05:00 level=INFO source=gpu.go:602 msg="no nvidia devices detected by library /usr/lib/x86_64-linux-gnu/libcuda.so.525.60.13"
time=2025-02-21T03:46:07.305-05:00 level=INFO source=gpu.go:602 msg="no nvidia devices detected by library /usr/lib/wsl/lib/libcuda.so"
time=2025-02-21T03:46:07.305-05:00 level=INFO source=gpu.go:602 msg="no nvidia devices detected by library /usr/lib/wsl/lib/libcuda.so.1"
time=2025-02-21T03:46:07.306-05:00 level=INFO source=gpu.go:602 msg="no nvidia devices detected by library /usr/lib/wsl/lib/libcuda.so.1.1"
time=2025-02-21T03:46:07.444-05:00 level=INFO source=gpu.go:612 msg="Unable to load cudart library /usr/lib/wsl/drivers/nv_dispsi.inf_amd64_3d88c2eb4775cc07/libcuda.so.1.1: cuda driver library init failure: 500"
time=2025-02-21T03:46:08.776-05:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-02-21T03:46:08.776-05:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="47.0 GiB" available="46.1 GiB"


@salmanmarvasti commented on GitHub (Feb 21, 2025):

root@Ryzen7950xDesk:~# python3 -c "import torch; print(f'device name [0]:', torch.cuda.get_device_name(0))"
device name [0]: AMD Radeon RX 7900 GRE


@Vilchis-Joshua commented on GitHub (Mar 4, 2025):

> root@Ryzen7950xDesk:~# python3 -c "import torch; print(f'device name [0]:', torch.cuda.get_device_name(0))"
> device name [0]: AMD Radeon RX 7900 GRE

What does your Docker setup look like?

I assume someone made an update, because I ran Ollama last week and everything actually works now. The GPU is being passed through to the Ollama Docker container, and my life is finally great :) I don't know who updated what, which kind of stinks, but I've been happy.

Reference: github-starred/ollama#51967