[GH-ISSUE #14093] "model runner has unexpectedly stopped" error occurs frequently #34961

Closed
opened 2026-04-22 19:02:58 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @workflowsguy on GitHub (Feb 5, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14093

What is the issue?

For the last several Ollama releases, frequent 500 server errors have been occurring.

The error message on the client is "model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details"

I have been using the same model, llama3.3:70b, since the beginning, and at first there were no such errors.
The hardware platform also has not changed; it is:

Model Name: Mac mini
Model Identifier: Mac16,11
Chip: Apple M4 Pro
Total Number of Cores: 14 (10 performance and 4 efficiency)
Memory: 64 GB

Relevant log output

time=2026-02-05T12:48:27.680+01:00 level=ERROR source=server.go:302 msg="llama runner terminated" error="exit status 2"
time=2026-02-05T12:48:27.680+01:00 level=ERROR source=server.go:1607 msg="post predict" error="Post \"http://127.0.0.1:52208/completion\": EOF"
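
Since the error message points at possible resource limitations, one check worth making (an editorial suggestion, not part of the original report) is whether llama3.3:70b plus its context fits comfortably in the 64 GB of unified memory while it is loaded:

```shell
# Hedged suggestion: while a request is running, list the loaded models,
# their size, and how they are split between GPU and CPU. A 70B model at a
# typical quantization plus KV cache can come close to the 64 GB limit.
ollama ps
```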

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.15.2

GiteaMirror added the bug label 2026-04-22 19:02:58 -05:00
Author
Owner

@rick-github commented on GitHub (Feb 5, 2026):

Post the full server log.
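
On macOS the Ollama app normally writes its server log to `~/.ollama/logs/server.log` (per the Ollama troubleshooting documentation), so retrieving it might look like the sketch below; the path may differ if the server is started manually:

```shell
# Default log location for the macOS Ollama app; if you run `ollama serve`
# in a terminal yourself, the log goes to that terminal instead.
cat ~/.ollama/logs/server.log
```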

Author
Owner

@workflowsguy commented on GitHub (Feb 5, 2026):

During my investigation of the issue, I changed the timeout value in the client from 400 to 800.

After this change, a query that had previously triggered the error described above took much longer to process, but eventually completed successfully.

While I don't understand how a timeout value that seems too short can cause a server process to crash, I will close the issue as resolved.
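
For illustration only (the reporter's client is not identified), a request against the default Ollama HTTP API with the client-side timeout raised from 400 to 800 seconds might look like the sketch below; the port, endpoint choice, and prompt are assumptions:

```shell
# Hypothetical reproduction: raise curl's client-side timeout to 800 s so a
# slow llama3.3:70b generation is not cut off before the response arrives.
curl --max-time 800 http://localhost:11434/api/generate -d '{
  "model": "llama3.3:70b",
  "prompt": "Summarize the attached report in three paragraphs.",
  "stream": false
}'
```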


Reference: github-starred/ollama#34961