[GH-ISSUE #6554] Error: llama runner process has terminated: exit status 0xc0000135 #29883

Closed
opened 2026-04-22 09:10:34 -05:00 by GiteaMirror · 4 comments

Originally created by @balaji1732000 on GitHub (Aug 29, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6554

What is the issue?

I followed the documents below to run an Ollama model on an Intel GPU using IPEX-LLM:

https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md

https://www.intel.com/content/www/us/en/content-details/826081/running-ollama-with-open-webui-on-intel-hardware-platform.html

I couldn't get any inference from the model.

Error: llama runner process has terminated: exit status 0xc0000135

Can anyone help solve this issue?
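
For context, exit status 0xc0000135 is the Windows NTSTATUS code STATUS_DLL_NOT_FOUND, meaning the runner executable could not load a required DLL. A minimal way to capture server-side debug logs on Windows (a sketch using cmd; it assumes `ollama` is on the PATH, and the IPEX-LLM build may use its own launch script instead):

```
REM Enable verbose runner logging, then start the server (Windows cmd)
set OLLAMA_DEBUG=1
ollama serve
```

The server log then includes the runner's startup output, which can help pinpoint the failing component.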

OS

Windows

GPU

Intel

CPU

Intel

Ollama version

No response

GiteaMirror added the bug label 2026-04-22 09:10:34 -05:00

@devskale commented on GitHub (Aug 29, 2024):

I'm seeing a similar failure on an Oracle Cloud instance with an Ampere CPU:

```
OLLAMA_DEBUG=1 ./ollama run gemma2:2b
Error: llama runner process has terminated: exit status 127
```

Architecture: Linux ampere aarch64
Operating System: Ubuntu 24.04 LTS
Kernel: Linux 6.8.0-1011-oracle


@rick-github commented on GitHub (Aug 29, 2024):

You need to set `OLLAMA_DEBUG=1` in the server environment, not the client.

```
OLLAMA_DEBUG=1 ./ollama serve
```
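
If Ollama was installed as a systemd service (the default with the Linux install script), the variable has to go into the service's environment rather than the interactive shell; a sketch, assuming the standard `ollama` unit name:

```
# Add an environment override to the ollama service
sudo systemctl edit ollama
# In the override editor, add:
#   [Service]
#   Environment="OLLAMA_DEBUG=1"
sudo systemctl restart ollama
# Follow the server logs
journalctl -u ollama -f
```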

@SlOrbA commented on GitHub (Aug 30, 2024):

I'm running on an RPi 5 with 8 GB, and the debug output has these lines:

```
time=2024-08-30T10:30:16.625+03:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-08-30T10:30:16.625+03:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
/tmp/ollama1044695914/runners/cpu/ollama_llama_server: error while loading shared libraries: libllama.so: cannot open shared object file: No such file or directory
time=2024-08-30T10:30:16.625+03:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2024-08-30T10:30:16.876+03:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: exit status 127"
```

To me it looks like the missing libllama.so trips up the model loading.
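
A quick way to confirm which shared libraries the runner cannot resolve (a diagnostic sketch; the `/tmp/ollama...` path comes from the log above and varies per run):

```
# Missing dependencies are listed as "not found"
ldd /tmp/ollama1044695914/runners/cpu/ollama_llama_server
```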


@pdevine commented on GitHub (Sep 1, 2024):

This looks like a dupe of #6541, which should be fixed now.
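
If you're still seeing this on an older build, upgrading should pick up the fix (a sketch using the standard Linux install script; on Windows, re-running the installer does the same):

```
# Reinstall the latest release, then confirm the version
curl -fsSL https://ollama.com/install.sh | sh
ollama -v
```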
