[PR #9650] [CLOSED] Vulkan support (replacing pull/5059) #13026

Closed
opened 2026-04-13 00:15:49 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/9650
Author: @grinco
Created: 3/11/2025
Status: Closed

Base: main ← Head: vulkan


📝 Commits (10+)

f46b4a6 implement the vulkan C backend
9c6b049 add support in gpu.go
93c4d69 add support in gen_linux.sh
24c8840 it builds
724fac4 fix segfault
e4e8a5d fix compilation
257364c fix free memory monitor
11c55fa fix total memory monitor
e77ea68 Merge branch 'refs/heads/main' into vulkan
18f3f96 update gpu.go

📊 Changes

122 files changed (+19337 additions, -119 deletions)

View changed files

📝 CMakeLists.txt (+13 -0)
📝 CMakePresets.json (+10 -1)
📝 Dockerfile (+25 -5)
📝 Makefile.sync (+1 -1)
📝 README.md (+2 -0)
📝 discover/amd_linux.go (+7 -6)
📝 discover/gpu.go (+118 -1)
📝 discover/gpu_info.h (+1 -0)
➕ discover/gpu_info_vulkan.c (+286 -0)
➕ discover/gpu_info_vulkan.h (+67 -0)
📝 discover/gpu_linux.go (+18 -0)
📝 discover/gpu_windows.go (+9 -0)
📝 discover/types.go (+12 -8)
➕ discover/vulkan_common.go (+19 -0)
📝 envconfig/config.go (+2 -0)
📝 llm/server.go (+2 -0)
📝 ml/backend/ggml/ggml.go (+14 -8)
📝 ml/backend/ggml/ggml/.rsync-filter (+3 -0)
➕ ml/backend/ggml/ggml/src/ggml-vulkan/CMakeLists.txt (+162 -0)
➕ ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp (+9401 -0)

...and 80 more files

📄 Description

This pull request is based on https://github.com/ollama/ollama/pull/5059, and https://github.com/whyvl/ollama-vulkan/issues/7

Tested on v0.5.13 on Linux. The image was built using the supplied Dockerfile, with the caveat that the release image base was bumped to 24.04 (from 20.04).

Build command:

docker buildx build --platform linux/amd64 ${OLLAMA_COMMON_BUILD_ARGS} -t grinco/ollama-amd-apu:vulkan .

Tested on an AMD Ryzen 7 8845HS w/ Radeon 780M Graphics, with ROCm disabled:

[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
time=2025-03-11T13:00:40.793Z level=INFO source=gpu.go:199 msg="vulkan: load libvulkan and libcap ok"
time=2025-03-11T13:00:40.877Z level=INFO source=gpu.go:421 msg="error looking up vulkan GPU memory" error="device is a CPU"
time=2025-03-11T13:00:40.878Z level=WARN source=amd_linux.go:443 msg="amdgpu detected, but no compatible rocm library found.  Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
time=2025-03-11T13:00:40.878Z level=WARN source=amd_linux.go:348 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
time=2025-03-11T13:00:40.879Z level=INFO source=types.go:137 msg="inference compute" id=0 library=vulkan variant="" compute=1.3 driver=1.3 name="AMD Radeon Graphics (RADV GFX1103_R1)" total="15.6 GiB" available="15.6 GiB"
 # ollama run phi4:14b
>>> /set verbose
Set 'verbose' mode.
>>> how's it going?
Hello! I'm here to help you with any questions or tasks you have. How can I assist you today? 😊

total duration:       3.341959745s
load duration:        18.165612ms
prompt eval count:    15 token(s)
prompt eval duration: 475ms
prompt eval rate:     31.58 tokens/s
eval count:           26 token(s)
eval duration:        2.846s
eval rate:            9.14 tokens/s
>>>

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-13 00:15:49 -05:00

Reference: github-starred/ollama#13026