[GH-ISSUE #9583] bug: 500 error #32012

Closed
opened 2026-04-22 12:53:26 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @EntropyYue on GitHub (Mar 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9583

What is the issue?

When trying to have a conversation, a 500 error is returned.

Relevant log output

time=2025-03-08T03:20:41.451+08:00 level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 0xc0000409"
time=2025-03-08T03:20:41.647+08:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed"
[GIN] 2025/03/08 - 03:20:41 | 500 |   24.6239449s |  172.22.112.162 | POST     "/api/chat"
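
For context on the log above: exit status `0xc0000409` is the Windows NTSTATUS code STATUS_STACK_BUFFER_OVERRUN, which MSVC-built binaries also report on fail-fast aborts (e.g. a failed `GGML_ASSERT`), so the crash is in the native llama runner process rather than the Go server. A minimal sketch decoding that exit code; the lookup table here is a hand-picked subset, not an official API:

```python
# Map a raw Windows process exit code to a readable NTSTATUS name.
# Only a small, hand-picked subset of codes is included.
KNOWN_NTSTATUS = {
    0xC0000409: "STATUS_STACK_BUFFER_OVERRUN (fail-fast/abort)",
    0xC0000005: "STATUS_ACCESS_VIOLATION",
}

def describe_exit_status(code: int) -> str:
    """Return a human-readable name for a Windows exit code."""
    return KNOWN_NTSTATUS.get(code & 0xFFFFFFFF, f"unknown status 0x{code:08X}")

print(describe_exit_status(0xC0000409))
# STATUS_STACK_BUFFER_OVERRUN (fail-fast/abort)
```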

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.5.13

GiteaMirror added the bug label 2026-04-22 12:53:26 -05:00

Reference: github-starred/ollama#32012