[GH-ISSUE #15734] Ollama not working on Mac M5 #35792

Open
opened 2026-04-22 20:28:11 -05:00 by GiteaMirror · 4 comments

Originally created by @pranitmodi on GitHub (Apr 21, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15734

What is the issue?

Keep getting this error: 500 Internal Server Error: llama runner process has terminated: %!w(<nil>)
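The `%!w(<nil>)` suffix is itself a clue: it is Go's fmt package reporting a bad `%w` operand. `%w` is only supported by `fmt.Errorf`, and only with a non-nil error, so by the time this message was built the underlying runner error was apparently nil. A minimal sketch (not Ollama's actual call site) that reproduces the string:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	// Hedged sketch, not Ollama's code: shows how "%!w(<nil>)" arises.
	var err error // nil

	// %w with a nil operand renders as "%!w(<nil>)" instead of wrapping.
	e := fmt.Errorf("llama runner process has terminated: %w", err)
	fmt.Println(e) // llama runner process has terminated: %!w(<nil>)

	// With a real error the same format string works as intended.
	e = fmt.Errorf("llama runner process has terminated: %w", errors.New("exit status 2"))
	fmt.Println(e) // llama runner process has terminated: exit status 2
}
```

In other words, the 500 body is masking the real failure: the runner died, and the error that should explain why was already nil when the message was formatted.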

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-22 20:28:11 -05:00

@cl93a commented on GitHub (Apr 21, 2026):

Same issue here. I wanted to try out Gemma4 on MLX, but all models (not just MLX) fail on Ollama 0.21.0. The model fails to load with a 500 error. Logs show:

```
ggml_metal_init: picking default device: Apple M5
signal arrived during cgo execution
github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_get_default_buffer_type(0x0)
fault 0x1926755b0
llama runner terminated: exit status 2
```
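The `(0x0)` argument in that traceback is the telling part: `ggml_backend_get_default_buffer_type` was called with a NULL backend handle, so the segfault inside cgo is a symptom, and the real failure is earlier, presumably Metal backend initialization not producing a backend for the M5 GPU. A minimal cgo sketch (hypothetical names, not Ollama's code) of that failure shape and the nil guard that would surface it cleanly:

```go
package main

/*
typedef struct backend { int kind; } backend;

// Stand-in for ggml_backend_get_default_buffer_type: it dereferences its
// argument unconditionally, so a NULL handle faults just like the log's
// "signal arrived during cgo execution" followed by exit status 2.
static int default_buffer_type(backend *b) { return b->kind; }
*/
import "C"

import "fmt"

func main() {
	// Hypothetical stand-in for Metal backend init on an unrecognized
	// device: it yields no backend, i.e. a nil handle (the 0x0 in the trace).
	var b *C.backend

	// The guard the trace suggests is missing: check on the Go side
	// before crossing into C, instead of handing C a NULL pointer.
	if b == nil {
		fmt.Println("metal backend init returned no device; not calling into C")
		return
	}
	fmt.Println("buffer type:", C.default_buffer_type(b))
}
```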


@mverrilli commented on GitHub (Apr 21, 2026):

~~I am guessing #15581 will address this.~~ Posted this to the wrong issue.


@dhiltgen commented on GitHub (Apr 22, 2026):

@cl93a can you verify this fails for you on MLX? What model and tag were you trying to run? So far, this seems to be a GGML specific defect.


@cl93a commented on GitHub (Apr 22, 2026):

@dhiltgen Confirmed: MLX works, GGML does not on M5 (Ollama v0.21.1). I was confused that :latest did not auto-pick mlx.

Tested:

- gemma4:e2b-mlx-bf16 ✅ loads and runs fine
- gemma4:e4b-mlx-bf16 ✅ loads and runs fine
- gemma4:latest (Q4_K_M, GGML) ❌ crashes, even after a full ollama rm + re-pull, ruling out a corrupted download
- gemma3:4b / gemma3:12b (GGML) ❌ same crash

All GGML failures hit ggml_metal_init: picking default device: Apple M5 → signal arrived during cgo execution → fault 0x1926755b0 → exit status 2. It appears MLX is the only working backend on M5 right now.
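For anyone else triaging this, the split above is easy to re-check mechanically: hit the local Ollama API once per tag and record which ones return the 500. A small sketch against the standard /api/generate endpoint on the default port (tags taken from the list above; an affected M5 is expected to show 500 for the GGML tags and 200 for the MLX ones):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Model tags from the test matrix above.
	models := []string{
		"gemma4:e2b-mlx-bf16",
		"gemma4:e4b-mlx-bf16",
		"gemma4:latest",
		"gemma3:4b",
		"gemma3:12b",
	}
	for _, m := range models {
		body := fmt.Sprintf(`{"model": %q, "prompt": "hi", "stream": false}`, m)
		resp, err := http.Post("http://localhost:11434/api/generate",
			"application/json", bytes.NewBufferString(body))
		if err != nil {
			fmt.Printf("%-22s request failed: %v\n", m, err)
			continue
		}
		resp.Body.Close()
		// GGML tags are expected to return 500 ("llama runner process
		// has terminated"); MLX tags should return 200.
		fmt.Printf("%-22s HTTP %d\n", m, resp.StatusCode)
	}
}
```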
