[GH-ISSUE #12600] Continue support for AMD gfx906 #34124

Open
opened 2026-04-22 17:24:03 -05:00 by GiteaMirror · 22 comments

Originally created by @mputzi on GitHub (Oct 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12600

Originally assigned to: @dhiltgen on GitHub.

Please continue supporting this hardware, even though it has reached EOL! It was working absolutely fine with Ollama just a few days ago.

GiteaMirror added the feature request and amd labels 2026-04-22 17:24:03 -05:00

@mputzi commented on GitHub (Oct 13, 2025):

For all of you who are running older hardware: ollama-rocm 0.12.2 still works on AMD gfx906.


@rick-github commented on GitHub (Oct 13, 2025):

Vulkan (https://github.com/ollama/ollama/pull/11835) will restore support.


@pdevine commented on GitHub (Oct 13, 2025):

@mputzi it's coming back through Vulkan (as @rick-github mentioned)... unfortunately, the ROCm support for it is broken.


@Nyx1197 commented on GitHub (Oct 14, 2025):

> For all of you who are running older hardware: ollama-rocm 0.12.2 still works on AMD gfx906.

0.12.3 is also supported, but 0.12.4 and later versions are not.
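
For Docker users, a hedged way to stay on the last working release is to pin the versioned ROCm tag (assuming the registry's usual `<version>-rocm` tag pattern):

```
# Pin the last gfx906-compatible release instead of :rocm / :latest
docker pull ollama/ollama:0.12.3-rocm
```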


@Nyx1197 commented on GitHub (Oct 14, 2025):

Vulkan (#11835) will restore support.Vulkan ( #11835 ) 将恢复支持。

Will the upcoming Vulkan version be a standalone release, similar to ollama-rocm?


@dhiltgen commented on GitHub (Oct 14, 2025):

@Nyx1197 For the Windows installer, it will be bundled in once it's ready. On Linux, we may adjust the bundles to make it easier for users to pick and choose which components they want; see #12277.


@AngryPenguinPL commented on GitHub (Oct 16, 2025):

@dhiltgen Is there any way to tell Ollama to use the Vulkan backend instead of the CPU? I compiled it manually; cmake required Vulkan, so I installed it. The build logs show that Ollama was compiled with Vulkan backend support. The problem occurs at runtime, where Ollama defaults to the CPU.
I don't see any instructions in your docs on how to use Vulkan at runtime.


@dhiltgen commented on GitHub (Oct 16, 2025):

@AngryPenguinPL we're still working on the build and documentation, but you can try something like

```
PLATFORM=linux/amd64 ./scripts/build_linux.sh
```

which will use docker buildx to build Linux binaries in a container and place them in ./dist/; the resulting tgz files should pick up Vulkan.
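
As a follow-up sketch, the resulting bundle can be unpacked like the official Linux tarball (the exact tgz name below is an assumption; check what actually lands in ./dist/):

```
ls dist/
sudo tar -C /usr -xzf dist/ollama-linux-amd64.tgz   # name may differ per build
```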


@Nyx1197 commented on GitHub (Oct 21, 2025):

> @Nyx1197 For the Windows installer, it will be bundled in once it's ready. On Linux, we may adjust the bundles to make it easier for users to pick and choose which components they want; see #12277.

Sorry, my description wasn't quite accurate. I meant to ask: will the upcoming Vulkan version be available as a Docker image, similar to ollama-rocm?


@AngryPenguinPL commented on GitHub (Oct 25, 2025):

> @AngryPenguinPL we're still working on the build and documentation, but you can try something like `PLATFORM=linux/amd64 ./scripts/build_linux.sh`, which will use docker buildx to build Linux binaries in a container and place them in ./dist/; the resulting tgz files should pick up Vulkan.

@dhiltgen
When I looked at the build logs, I noticed that although the build succeeded and the Vulkan library was being created, the log contained errors related to ggml-vulkan.
This is the same error reported in koboldcpp: https://github.com/LostRuins/koboldcpp/issues/1768
I resolved the issue without using Docker. It was caused by an older version of glslc/shaderc, which, in addition to the Vulkan development packages, is also required for compiling Ollama with Vulkan.

I had version 2024.4, and the problem was resolved after updating to the latest version, 2025.4.
It's worth adding a note to your docs (https://github.com/ollama/ollama/pull/12711) that a relatively modern version of glslc/shaderc is needed for Vulkan support.
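
A quick pre-build check along those lines (glslc is shaderc's compiler frontend; the version threshold is the one reported above):

```
# 2024.x builds hit the ggml-vulkan shader errors above; 2025.4 did not
glslc --version
```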


@dhiltgen commented on GitHub (Nov 14, 2025):

In 0.12.11, Vulkan is now included in the official binaries, but it is still experimental. To enable it, set OLLAMA_VULKAN=1 for the server. See https://github.com/ollama/ollama/blob/main/docs/faq.mdx#how-do-i-configure-ollama-server and https://github.com/ollama/ollama/blob/main/docs/docker.mdx#vulkan-support
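
A minimal sketch of enabling it, assuming a systemd install per the linked FAQ:

```
# systemd install: add the variable via an override, then restart
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_VULKAN=1"
sudo systemctl restart ollama

# or, when running the server by hand:
OLLAMA_VULKAN=1 ollama serve
```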


@Nyx1197 commented on GitHub (Nov 14, 2025):

> In 0.12.11, Vulkan is now included in the official binaries, but it is still experimental. To enable it, set OLLAMA_VULKAN=1 for the server. […]

Thank you for everything you've done; the newly added Vulkan support is really cool. However, there's a minor issue: some models larger than 32B are experiencing severe performance problems, and MoE-architecture models are also affected. Hopefully, these can be fixed in the future.

Model list as follows:

| Model | Architecture | Type | ROCm (tok/s) | Vulkan (tok/s) |
|:-:|:-:|:-:|:-:|:-:|
| hf-mirror.com/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF:Q4_K_M | qwen2vl | Dense | 14.92 | 1.92 |
| qwen3:30b-a3b-thinking-2507-q4_K_M | qwen3moe | MoE | 46.77 | 9.96 |
| devstral:latest | llama | Dense | 20.11 | 2.42 |
| gpt-oss:20b | gptoss | MoE | 45.16 | NG |

Smaller models work well, even showing performance improvements.

| Model | Architecture | Type | ROCm (tok/s) | Vulkan (tok/s) |
|:-:|:-:|:-:|:-:|:-:|
| codegeex4:latest | chatglm | Dense | 48.11 | 60.63 |

GPU: AMD Instinct MI50 32GB
Power limit: 160 W (`rocm-smi --setpoweroverdrive 160`)
Performance data: `ollama run <model> --verbose` (eval rate, tokens/s)


@zeus commented on GitHub (Nov 14, 2025):

> [quoting @Nyx1197's benchmark report above in full]

Vulkan utilizes only 16GB of the 32GB MI50; I can't get more than 16GB allocated on the GPU, resulting in significant performance degradation.


@Nyx1197 commented on GitHub (Nov 14, 2025):

> [earlier quotes trimmed]
>
> Vulkan utilizes only 16GB of the 32GB MI50; can't get more than 16GB allocated on the GPU, resulting in significant performance degradation.

Thanks for your help; it's a bit strange that we can only use 16GB.


@mputzi commented on GitHub (Nov 14, 2025):

> In 0.12.11, Vulkan is now included in the official binaries, but it is still experimental. To enable it, set OLLAMA_VULKAN=1 for the server. […]

Thank you very much!

I just tested the new version with the Vulkan backend in the ollama:latest Docker container, and it works really well!
For those who'd like a quick start:

```
sudo docker pull ollama/ollama:latest

sudo docker run -d --restart=always --device /dev/kfd --device /dev/dri --security-opt seccomp=unconfined -v ollama:/root/.ollama -p 11434:11434 -e OLLAMA_VULKAN=1 --name ollama ollama/ollama:latest

sudo docker exec -it ollama ollama run qwen3-vl:32b
```

qwen3-vl:32b distributes itself nicely across the VRAM of two parallel MI50s (16GB each) using the Vulkan backend. I also see a massive gain in token generation speed.

Thanks for the great work!


@dhiltgen commented on GitHub (Nov 14, 2025):

That's great to hear!

It would be nice to find a reduced set of security grants so we can keep it somewhat deprivileged and add that to the docs.
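
As an untested starting point for that experiment (an assumption, not a vetted configuration): drop the blanket seccomp=unconfined and grant the video/render groups instead, re-adding flags only if device discovery fails:

```
sudo docker run -d --restart=always \
  --device /dev/kfd --device /dev/dri \
  --group-add video --group-add render \
  -v ollama:/root/.ollama -p 11434:11434 \
  -e OLLAMA_VULKAN=1 --name ollama ollama/ollama:latest
```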


@Nyx1197 commented on GitHub (Nov 17, 2025):

> [earlier quotes trimmed]
>
> Vulkan utilizes only 16GB of the 32GB MI50; can't get more than 16GB allocated on the GPU, resulting in significant performance degradation.

What modifications can I make so that Ollama recognizes only 16GB of memory per MI50 32GB GPU? Currently, Ollama detects the full 32GB and attempts to allocate more than 16GB on a single card, causing performance issues.


@Nyx1197 commented on GitHub (Nov 20, 2025):

> [earlier quotes trimmed]
>
> What modifications can I make so that Ollama recognizes only 16GB of memory per MI50 32GB GPU? Currently, Ollama detects the full 32GB and attempts to allocate more than 16GB on a single card, causing performance issues.

I found a workaround for the issue I was facing.
I have two AMD MI50 32GB cards, but with the original vBIOS, Vulkan only allows using 16GB of VRAM. When I tried loading a model that uses more than 16GB of VRAM but less than 32GB, performance issues occurred.

By setting the environment variables OLLAMA_NUM_GPU=2 and OLLAMA_SCHED_SPREAD=1, I forced the use of both GPUs, keeping the VRAM usage on each individual card under 16GB. I know this reduces performance, but it lets me use the newer version of Ollama with the model. Perhaps in the future I'll try updating the vBIOS to resolve this issue.
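
Applied to the Docker quick start earlier in this thread, that workaround would look something like this sketch (environment variables exactly as reported above; the rest of the command is mputzi's):

```
sudo docker run -d --restart=always \
  --device /dev/kfd --device /dev/dri --security-opt seccomp=unconfined \
  -v ollama:/root/.ollama -p 11434:11434 \
  -e OLLAMA_VULKAN=1 -e OLLAMA_NUM_GPU=2 -e OLLAMA_SCHED_SPREAD=1 \
  --name ollama ollama/ollama:latest
```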

Thanks again for the Vulkan support for the old cards.


@Nyx1197 commented on GitHub (Dec 10, 2025):

I followed the article https://gist.github.com/evilJazz/14a4c82a67f2c52a6bb5f9cea02f5e13 to flash the vBIOS on the MI50, and now Vulkan can fully utilize all 32GB of VRAM.
Thanks again.
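
For reference, a minimal sketch of the flashing flow (the amdvbflash invocations here are assumptions based on that tool's common usage; defer to the gist itself, and keep a backup):

```
sudo amdvbflash -i                     # list adapters and note the MI50's index
sudo amdvbflash -s 0 backup_vbios.rom  # save the current vBIOS before flashing
sudo amdvbflash -p 0 new_vbios.rom     # program the replacement image, then reboot
```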


@lilyanatia commented on GitHub (Dec 13, 2025):

Neither ROCm nor Vulkan seems to work with my two gfx906 cards; attempting to use either results in Ollama not detecting any GPUs. rocminfo and vulkaninfo both show both GPUs, but Ollama acts like they don't exist.
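
A hedged first diagnostic (assuming OLLAMA_DEBUG raises the discovery logging): run the server in the foreground and check what the bootstrap logs say about the devices:

```
OLLAMA_VULKAN=1 OLLAMA_DEBUG=1 ollama serve 2>&1 | grep -iE 'vulkan|rocm|gpu'
```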


@crosys commented on GitHub (Dec 22, 2025):

Hi, which vBIOS finally works with Vulkan for the full 32GB of VRAM? I tried 274474.rom, but then the server cannot boot into Ubuntu... Thanks in advance.


@Nyx1197 commented on GitHub (Dec 22, 2025):

> Hi, which vBIOS finally works with Vulkan for the full 32GB of VRAM? I tried 274474.rom, but then the server cannot boot into Ubuntu... Thanks in advance.

I used 274474; that vBIOS has no display output. If you need display output, try 278241 (https://www.techpowerup.com/vgabios/278241/278241) or V420.rom (https://katastrophos.net/downloads/InstinctMI50/V420.rom). Please see: https://gist.github.com/evilJazz/14a4c82a67f2c52a6bb5f9cea02f5e13
