[GH-ISSUE #15304] Publish binary releases with Vulkan for Raspberry Pi OS / Raspberry Pi 4 #56305

Closed
opened 2026-04-29 10:36:36 -05:00 by GiteaMirror · 1 comment

Originally created by @znmeb on GitHub (Apr 3, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15304

I've been testing Ollama on a Raspberry Pi 4 and I noticed that the default binary is not compiled for Vulkan. Vulkan on a Raspberry Pi 4 uses the Pi / Broadcom "GPU".

I've gotten the `cmake` builds to complete, but there are a few errors, and the `go run . serve` step doesn't appear to bring up Vulkan. If this should work, I'd be happy to gather documentation for a bug report.
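For reference, the attempt described above roughly corresponds to the following sequence. This is a sketch, not an official recipe: it assumes Ollama's standard source-build workflow and the upstream ggml/llama.cpp `GGML_VULKAN` CMake flag, which may or may not be wired up in a given Ollama checkout.

```shell
# Clone and enter the Ollama source tree (assumed layout).
git clone https://github.com/ollama/ollama.git
cd ollama

# Configure and build the native ggml/llama.cpp backends with Vulkan enabled.
# GGML_VULKAN is the upstream ggml flag; verify it applies to your checkout.
cmake -B build -DGGML_VULKAN=ON
cmake --build build

# Run the development server and check the startup logs for a Vulkan device.
go run . serve
```

Note that, per the follow-up comment below, a Vulkan-enabled build currently crashes at runtime on the Pi 4, so this sequence is useful only for reproducing the problem.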

GiteaMirror added the feature request label 2026-04-29 10:36:37 -05:00

@znmeb commented on GitHub (Apr 4, 2026):

It turns out that compiling `llama.cpp` with Vulkan for a Raspberry Pi produces a runtime that will crash! So don't do this!!

See https://github.com/ggml-org/llama.cpp/issues/9801


Reference: github-starred/ollama#56305