[GH-ISSUE #11664] Ollama call failed with status code 500: llama runner process has terminated: error:fault #7712

Closed
opened 2026-04-12 19:49:14 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @hkbb2014 on GitHub (Aug 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11664

What is the issue?

If I connect to any model that supports vision, it fails:

Ollama call failed with status code 500: llama runner process has terminated: error:fault

It worked fine in the past. Please fix it.

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 19:49:14 -05:00
@rick-github commented on GitHub (Aug 5, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in debugging.
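As a hedged aside (not part of the thread), one way to capture a server log for attachment is to run the server in the foreground and tee its combined output to a file. This sketch assumes a Unix-like shell; per Ollama's troubleshooting doc, the installed Windows app also writes its own `server.log` under the local app-data directory.

```shell
# capture_log CMD...: run CMD with stderr merged into stdout, echoing output
# to the terminal while also saving it to server.log for attachment.
capture_log() {
  "$@" 2>&1 | tee server.log
}

# Typical use (assumption: 'ollama serve' is the command being debugged):
# capture_log ollama serve
```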

@hkbb2014 commented on GitHub (Aug 5, 2025):

server log:

[server.log](https://github.com/user-attachments/files/21596446/server.log)

@rick-github commented on GitHub (Aug 5, 2025):

```
load_backend: loaded CUDA backend from Y:\ollama\lib\ollama\ggml-cuda.dll
load_backend: loaded CUDA backend from Y:\ollama\lib\ollama\cuda_v12\ggml-cuda.dll
```

#11211
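The quoted log lines show the same CUDA backend loaded from both the base `lib\ollama` directory and its `cuda_v12\` subdirectory. As a hedged illustration (the directory layout and filenames here are assumptions, not confirmed by the thread), a small helper can flag backend libraries that appear in more than one subdirectory of an install tree:

```shell
# find_dup_backends DIR: print the names of ggml CUDA backend files that
# appear in more than one place under DIR. A duplicate can indicate files
# left over from a previous install.
find_dup_backends() {
  find "$1" -type f -name 'ggml-cuda*' 2>/dev/null \
    | sed 's|.*/||' \
    | sort \
    | uniq -d
}

# Example (hypothetical path): find_dup_backends /c/Program/Ollama/lib/ollama
```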
Reference: github-starred/ollama#7712