[GH-ISSUE #10953] Error: POST predict: Post "http://127.0.0.1:35943/completion": EOF - Server Log #53727

Closed
opened 2026-04-29 04:36:10 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @jlsilicon on GitHub (Jun 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10953

What is the issue?

I am suddenly seeing this error pop up for any model ...

Error: POST predict: Post "http://127.0.0.1:35943/completion": EOF

I even re-installed Ollama, with no effect.

It says the server is not running and to run: ollama serve

  • which just spits out pages of commands, then locks up
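(Editor's note: `ollama serve` runs the server in the foreground and streams its log to the terminal, which can look like "pages of commands". Before starting a second instance, it can help to check whether a server is already listening on the default API port. This is a sketch; 11434 is Ollama's default API port, while the 35943 in the error is an internal model-runner port chosen at random.)

```shell
# Check whether an Ollama server is already listening on the default
# API port (11434). The 127.0.0.1:35943 port in the error message is
# an internal model-runner port, not the main API port.
if curl -sf http://127.0.0.1:11434/api/version >/dev/null 2>&1; then
    echo "Ollama server is already running"
else
    echo "No Ollama server on port 11434"
fi
```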

Relevant log output


OS

Linux

GPU

No response

CPU

AMD

Ollama version

No response

GiteaMirror added the bug label 2026-04-29 04:36:10 -05:00
Author
Owner

@rick-github commented on GitHub (Jun 2, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.
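(Editor's note: on a Linux install managed by systemd, the server log referenced above can be pulled with `journalctl`, as described in the linked troubleshooting doc. A sketch, assuming the default `ollama` service name:)

```shell
# Dump the tail of the Ollama server log on a systemd-based Linux
# install (the default "ollama" service name is assumed here).
if command -v journalctl >/dev/null 2>&1; then
    journalctl -u ollama --no-pager -n 200 2>/dev/null || true
else
    echo "journalctl not found; check your init system's logs instead"
fi

# For more verbose logs, stop the service and run the server in the
# foreground with debug logging enabled:
#   OLLAMA_DEBUG=1 ollama serve
```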

Author
Owner

@jlsilicon commented on GitHub (Jun 3, 2025):

It turns out that I had switched from an Orange Pi 5 Pro with 16GB of RAM to a Pro 5 with 4GB.

TinyLlama seems to be the only model able to run on 4GB.

Ollama needs to throw a "Not enough RAM" error
-- instead of a killed signal, a 127.0.0.1 error, etc.

You would see a lot fewer issues popping up ...
;)
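(Editor's note: the rough pre-flight check the commenter is asking for can be approximated by hand on Linux by comparing available RAM against the model's footprint. A sketch; the 5 GiB value is a made-up example, and real requirements depend on the model's quantization and context size.)

```shell
# Rough pre-flight RAM check on Linux. MODEL_GB is a hypothetical
# example value; actual needs depend on quantization and context size.
MODEL_GB=5
avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
avail_gb=$((avail_kb / 1024 / 1024))
if [ "$avail_gb" -lt "$MODEL_GB" ]; then
    echo "warning: ${avail_gb} GiB available, model may need ~${MODEL_GB} GiB"
else
    echo "ok: ${avail_gb} GiB available"
fi
```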

Reference: github-starred/ollama#53727