[GH-ISSUE #8926] Error: llama runner process has terminated: exit status 2 #5791

Closed
opened 2026-04-12 17:07:56 -05:00 by GiteaMirror · 10 comments

Originally created by @creasyWinds on GitHub (Feb 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8926

What is the issue?

After updating to Ollama version 0.5.8-rc10, the following error appears:
Error: llama runner process has terminated: exit status 2

Relevant log output


OS

Windows 11 24H2 26100.3037

GPU

NVIDIA GeForce GTX 1080

CPU

Intel(R) Xeon(R) CPU E5-2699A v4 @ 2.40GHz 2.40 GHz

Ollama version

v0.5.8 Pre-release

GiteaMirror added the bug label 2026-04-12 17:07:56 -05:00

@rick-github commented on GitHub (Feb 7, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.

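For reference, the linked troubleshooting guide keeps the Windows server log under `%LOCALAPPDATA%\Ollama`. A minimal PowerShell sketch for grabbing it, assuming a default install location:

```shell
# PowerShell: open the Ollama log directory (default location per the troubleshooting guide)
explorer "$env:LOCALAPPDATA\Ollama"

# Print the last 100 lines of the server log
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 100
```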

@creasyWinds commented on GitHub (Feb 7, 2025):

[server-1.log](https://github.com/user-attachments/files/18707997/server-1.log)


@mxyng commented on GitHub (Feb 8, 2025):

I haven't been able to reproduce this on either 0.5.8-rc10 or 0.5.8-rc11. It looks like the system crashed while loading dynamic backends: the CUDA backend loaded, and the next step should have been loading the CPU backend. What CPU does this system use?

Can you provide the logs again with `OLLAMA_DEBUG=1` set? This will output more information which should tell us where the problem is.

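A sketch of how to do this on Windows, assuming the Ollama tray app has been quit first so a new server instance can be started from the terminal:

```shell
# PowerShell: enable debug logging for this session, then start the server
$env:OLLAMA_DEBUG = "1"
ollama serve
```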

@creasyWinds commented on GitHub (Feb 8, 2025):

[server.log](https://github.com/user-attachments/files/18717958/server.log)


@creasyWinds commented on GitHub (Feb 8, 2025):

0.5.7 is fine.


@jmorganca commented on GitHub (Feb 10, 2025):

@creasyWinds does the new 0.5.8-rc13 pre-release work for you? (with GPU) https://github.com/ollama/ollama/releases/tag/v0.5.8-rc13


@creasyWinds commented on GitHub (Feb 11, 2025):

> @creasyWinds does the new 0.5.8-rc13 pre-release work for you? (with GPU) https://github.com/ollama/ollama/releases/tag/v0.5.8-rc13

Now it has a new error:

[server.log](https://github.com/user-attachments/files/18748534/server.log)

“Ollama Failed to embed: Ollama returned an empty embedding for chunk!” @jmorganca

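One way to check whether the empty embedding comes from Ollama itself rather than the client application is to call the embeddings API directly. This is only a sketch; `nomic-embed-text` is a placeholder, substitute whichever embedding model the client is configured to use:

```shell
# A healthy server returns a non-empty "embeddings" array
# (on Windows PowerShell, use curl.exe rather than the built-in curl alias)
curl http://localhost:11434/api/embed -d '{
  "model": "nomic-embed-text",
  "input": "test sentence"
}'
```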

@mxyng commented on GitHub (Feb 11, 2025):

This looks the same as #9014, so I'll reopen it for now.


@jmorganca commented on GitHub (Feb 12, 2025):

Hi @creasyWinds would it be possible to try the new 0.5.9 pre-release that should fix this? https://github.com/ollama/ollama/releases/tag/v0.5.9-rc0

Thanks so much and sorry the error happened.

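A quick way to confirm which build is actually running after installing the pre-release, assuming the CLI is on PATH:

```shell
# Prints the client version, and the server version if one is running
ollama -v
```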

@creasyWinds commented on GitHub (Feb 12, 2025):

![Image](https://github.com/user-attachments/assets/0198ce66-b22b-4684-b970-d826190182ce)

Look! It is working now! Happy Lantern Festival! @jmorganca @mxyng

Reference: github-starred/ollama#5791