[GH-ISSUE #4334] Error: llama runner process has terminated: exit status 0xc0000005 #28457

Closed
opened 2026-04-22 06:39:01 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @EthanScully on GitHub (May 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4334

What is the issue?

At the start of loading a model on v0.1.35, it errors out with `Error: llama runner process has terminated: exit status 0xc0000005`.
v0.1.34 works perfectly fine.
log:

```
time=2024-05-10T22:52:42.838-04:00 level=ERROR source=sched.go:339 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000005 "
[GIN] 2024/05/10 - 22:52:42 | 500 |    2.7650401s |       127.0.0.1 | POST     "/api/chat"
time=2024-05-10T22:52:48.161-04:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.3231542
time=2024-05-10T22:52:48.488-04:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.6501493
time=2024-05-10T22:52:48.832-04:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.9940694
```

Using an AMD Radeon RX 6800 on Windows 11 with the latest drivers.

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

v0.1.35

GiteaMirror added the bug label 2026-04-22 06:39:01 -05:00
Author
Owner

@jmorganca commented on GitHub (May 11, 2024):

This should be fixed in https://github.com/ollama/ollama/releases/tag/v0.1.36, sorry about this!

Author
Owner

@seraasch commented on GitHub (May 11, 2024):

I just got this error with v0.1.36
AMD RX 7800 XT

Author
Owner

@ReOT20 commented on GitHub (May 13, 2024):

> This should be fixed in https://github.com/ollama/ollama/releases/tag/v0.1.36, sorry about this!

Same problem on Linux with 0.1.37, using a 5700 XT masked as gfx1030.

Here is additional info from the logs that could be helpful:
:0:rocdevice.cpp :2726: 846396226165 us: [pid:171202 tid:0x7f2f46a006c0] Callback: Queue 0x7f2e28700000 aborting with error : HSA_STATUS_ERROR_MEMORY_APERTURE_VIOLATION: The agent attempted to access memory beyond the largest legal address. code: 0x29
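For context on "masked as gfx1030" (not part of the original report): RDNA1 cards like the 5700 XT (gfx1010) lack official ROCm support, so they are commonly reported to ROCm as gfx1030 via the `HSA_OVERRIDE_GFX_VERSION` environment variable. A minimal sketch of that override, assuming the server is launched from the same shell; crashes like the aperture violation above are a known risk of running on an unsupported card:

```shell
# Report the GPU to ROCm as gfx1030 (10.3.0). Unofficial for RDNA1 cards;
# it may work, but faults like HSA_STATUS_ERROR_MEMORY_APERTURE_VIOLATION
# are a known risk.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Sanity-check that the override is visible to child processes:
sh -c 'echo "override=$HSA_OVERRIDE_GFX_VERSION"'

# ollama serve   # then start the server under the same environment
```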

Author
Owner

@V01D-NULL commented on GitHub (May 27, 2024):

I'm seeing this on 0.1.38

Author
Owner

@Codesbyusman commented on GitHub (Jun 19, 2024):

Facing the same issue when trying to run codegemma:

`ollama run codegemma`

> Error: llama runner process has terminated: exit status 0xc0000005

ollama version = 0.1.41

Any help in this regard?

Author
Owner

@antibyte commented on GitHub (Jun 22, 2024):

same here

Author
Owner

@Codesbyusman commented on GitHub (Jun 24, 2024):

@antibyte update Ollama to 0.1.45; it's working there.
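For anyone checking whether an install already includes that version, a small sketch comparing version strings with `sort -V`. The `installed` value is hard-coded here for illustration; in practice it might come from the CLI, e.g. `ollama -v` (the exact output format of that command is an assumption):

```shell
# Compare an installed version against the 0.1.45 mentioned above.
required="0.1.45"
installed="0.1.41"   # placeholder; e.g. "$(ollama -v | awk '{print $NF}')"

# installed >= required iff the version-sorted minimum of the pair is "required".
lowest="$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
    echo "ok: $installed >= $required"
else
    echo "upgrade needed: $installed < $required"
fi
```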


Reference: github-starred/ollama#28457