[GH-ISSUE #2320] AMD ROCm problem: GPU is constantly running at 100% #1338

Closed
opened 2026-04-12 11:10:27 -05:00 by GiteaMirror · 5 comments

Originally created by @MichaelFomenko on GitHub (Feb 2, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2320

The problem:

When I run "ollama run Mistral", the GPU constantly runs at 100% utilization and draws about 100 W, even though the chat itself works fine without any problems.

The GPU behaves strangely:
Before I run "ollama run Mistral": GPU utilization 0%, power 0 W, memory 0 MB.
After I run "ollama run Mistral": GPU utilization 100%, power 100 W, memory 5,000 MB.
While a chat prompt is running: GPU utilization 100%, power 300 W, memory 5,000 MB.
After I close the ollama chat: GPU utilization 100%, power 100 W, memory 5,000 MB.
After I close ollama serve: GPU utilization 0%, power 0 W, memory 0 MB.

Additional information about the GPU and memory clock speeds (a sketch of how to reproduce these readings follows the list):
Before I run "ollama run Mistral": GPU clock 50 MHz, memory clock 90 MHz.
After I run "ollama run Mistral": GPU clock 3000 MHz, memory clock 90 MHz.
While a chat prompt is running: GPU clock 3000 MHz, memory clock 1200 MHz.
After I close the ollama chat: GPU clock 3000 MHz, memory clock 90 MHz.
After I close ollama serve: GPU clock 50 MHz, memory clock 90 MHz.
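
Readings like these can be taken by polling the ROCm SMI tool while the server is running; a minimal sketch, assuming rocm-smi from the ROCm stack is installed and a one-second refresh is sufficient:

# Poll the default rocm-smi summary (GPU%, power draw, SCLK/MCLK, VRAM use) once per second
watch -n 1 rocm-smi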

ollama version: 0.1.22
ROCm version: 6.0
GPU: 7900 XTX
System: Ubuntu 22.04
CPU: 7950X
RAM: 64GB

When I start ollama serve:
ollama serve

2024/02/02 05:11:24 images.go:857: INFO total blobs: 7
2024/02/02 05:11:24 images.go:864: INFO total unused blobs removed: 0
2024/02/02 05:11:24 routes.go:950: INFO Listening on 127.0.0.1:11434 (version 0.1.22)
2024/02/02 05:11:24 payload_common.go:106: INFO Extracting dynamic libraries...
2024/02/02 05:11:25 payload_common.go:145: INFO Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v5 rocm_v6 cpu]
2024/02/02 05:11:25 gpu.go:94: INFO Detecting GPU type
2024/02/02 05:11:25 gpu.go:236: INFO Searching for GPU management library libnvidia-ml.so
2024/02/02 05:11:25 gpu.go:282: INFO Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.525.147.05]
2024/02/02 05:11:25 gpu.go:294: INFO Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.525.147.05: nvml vram init failure: 9
2024/02/02 05:11:25 gpu.go:236: INFO Searching for GPU management library librocm_smi64.so
2024/02/02 05:11:25 gpu.go:282: INFO Discovered GPU libraries: [/opt/rocm/lib/librocm_smi64.so.6.0.60000 /opt/rocm-6.0.0/lib/librocm_smi64.so.6.0.60000]
2024/02/02 05:11:25 gpu.go:109: INFO Radeon GPU detected

ollama run Mistral

[GIN] 2024/02/02 - 07:36:56 | 200 |      32.421µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/02 - 07:36:56 | 200 |     723.312µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/02 - 07:36:56 | 200 |     284.482µs |       127.0.0.1 | POST     "/api/show"
2024/02/02 07:36:56 cpu_common.go:11: INFO CPU has AVX2
loading library /tmp/ollama726758615/rocm_v6/libext_server.so
2024/02/02 07:36:56 dyn_ext_server.go:90: INFO Loading Dynamic llm server: /tmp/ollama726758615/rocm_v6/libext_server.so
2024/02/02 07:36:56 dyn_ext_server.go:145: INFO Initializing llama server
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
  Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /home/user/.ollama/models/blobs/sha256:e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = mistralai
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = mistralai
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size =  3847.55 MiB
llm_load_tensors:        CPU buffer size =    70.31 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  ROCm_Host input buffer size   =    12.01 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   156.00 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =     8.00 MiB
llama_new_context_with_model: graph splits (measure): 3
2024/02/02 07:37:15 dyn_ext_server.go:156: INFO Starting llama main loop
[GIN] 2024/02/02 - 07:37:15 | 200 | 18.899618958s |       127.0.0.1 | POST     "/api/chat"

The same behavior occurs when I run the llama2 model.

When I run Mistral in oobabooga/text-generation-webui using its Transformers loader, everything works fine: the GPU is only at 100% while I am chatting and at 0% otherwise. But when I use its llama.cpp loader, the GPU behaves the same way as in ollama.

So it seems to be a llama.cpp problem.


@Wintoplay commented on GitHub (Apr 11, 2024):

Have you solved this problem? I am facing the same issue. For llama.cpp it can be worked around with GPU_MAX_HW_QUEUES=1; however, I have not been able to apply that to ollama yet.
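
For a manually launched server (outside systemd or Docker), the variable can simply be exported before starting ollama; a minimal sketch of the workaround described above, assuming ollama is run from a shell:

# Limit the ROCm runtime to a single hardware queue, then start the server in this shell
export GPU_MAX_HW_QUEUES=1
ollama serve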


@melroy89 commented on GitHub (Feb 26, 2025):

For now, the workaround is to add the following line:

Environment="GPU_MAX_HW_QUEUES=1"

below the existing Environment="PATH=... line in the /etc/systemd/system/ollama.service file (the resulting [Service] section is sketched below).

And then restart the Ollama service:

sudo systemctl daemon-reload && sudo systemctl restart ollama
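
For illustration, the edited unit would look roughly like this; the existing PATH value is elided, and only the GPU_MAX_HW_QUEUES line is new:

[Service]
# ... existing directives ...
Environment="PATH=..."
# Workaround: keep the ROCm runtime from busy-polling the GPU while idle
Environment="GPU_MAX_HW_QUEUES=1"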

@r3tr0g4m3r commented on GitHub (Sep 6, 2025):

> For now the workaround is adding the following line:
>
> Environment="GPU_MAX_HW_QUEUES=1"
>
> Under the existing Environment="PATH=... line within the /etc/systemd/system/ollama.service file.
>
> And then restart the Ollama service:
>
> sudo systemctl daemon-reload && sudo systemctl restart ollama

I have a Radeon RX 9070, and the only way to make it work correctly is with your fix plus an HSA override. A more permanent fix that survives upgrades is to create a systemd drop-in (a quick verification is sketched after the snippet):

/etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=12.0.1"
Environment="ROCM_PATH=/opt/rocm"
Environment="GPU_MAX_HW_QUEUES=1"

Bug encountered in:
Linux 6.17.0-rc4-1-MANJARO SMP PREEMPT_DYNAMIC Mon, 01 Sep 2025 04:25:44 +0000 x86_64 GNU/Linux
ollama 0.11.4-1
ollama-rocm 0.11.4-1
rocm-core 6.4.3-1
rocm-device-libs 6.4.3-1
rocm-llvm 6.4.3-1
rocm-smi-lib 6.4.3-1
rocminfo 6.4.3-1


@mvanthoor commented on GitHub (Sep 8, 2025):

@r3tr0g4m3r 👍🏻

Thanks for this:

[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=12.0.1"
Environment="GPU_MAX_HW_QUEUES=1"

Previously I used an RX 6750 XT, which needed the HSA override set to 10.3.0. Now I'm on an RX 9070 XT, and ollama crashed with that override. Removing it made ollama work, but then the GPU kept running at 100% between prompts, which would be an enormous waste of power. I didn't need the ROCM_PATH line, but the HSA_OVERRIDE_GFX_VERSION and GPU_MAX_HW_QUEUES settings solved both the crashing and the 100% GPU utilization.
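
Since the right HSA_OVERRIDE_GFX_VERSION differs per card (10.3.0 for the RX 6750 XT above, 12.0.1 for the RX 9070 series), it can help to check which gfx target the ROCm stack actually reports; a minimal sketch, assuming rocminfo from the ROCm packages is installed:

# List the gfx targets the ROCm runtime reports for the installed GPUs
rocminfo | grep -o 'gfx[0-9a-f]*' | sort -u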


@JaviLib commented on GitHub (Apr 1, 2026):

I know this is an old thread, but here is a very good solution for the RX 9060 XT; it can run even the 35B model by partially offloading to the CPU, and 9B models at incredible speeds:

docker run --rm -d --device /dev/kfd --device /dev/dri -p 11434:11434 --name ollama \
  -e GPU_MAX_HW_QUEUES=1 -e OLLAMA_KV_CACHE_TYPE=q4_0 -e OLLAMA_CONTEXT_LENGTH=128000 \
  -e OLLAMA_FLASH_ATTENTION=true -e OLLAMA_MAX_LOADED_MODELS=1 -e OLLAMA_NUM_PARALLEL=32 \
  -e OLLAMA_NEW_ENGINE=true \
  -v /var/home/myuser/.ollama/models:/root/.ollama/models ollama/ollama:rocm

The -v option is optional; mounting an existing models directory avoids downloading the models again.

Try it out:

docker exec -ti ollama ollama run qwen3.5:9b
docker exec -ti ollama ollama run qwen3.5:35b
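
To verify that the container picked up the environment and that the GPU now idles between prompts, something along these lines should work (the grep pattern is just an illustration):

# Confirm the tuning variables are visible inside the container
docker exec ollama env | grep -E 'GPU_MAX_HW_QUEUES|OLLAMA_'
# On the host, utilization and clocks should drop back down when no prompt is running
watch -n 1 rocm-smi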