[GH-ISSUE #11985] 500: llama runner process has terminated: exit status 2 #7958

Closed
opened 2026-04-12 20:07:55 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @nobbywfc on GitHub (Aug 20, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11985

What is the issue?

Ollama/Open WebUI reported the error shown above, and I cannot use any LLM.

I updated the NVIDIA Windows driver on 19 Aug and suspect that it caused this trouble. Must I uninstall the current driver from the PC, purge all NVIDIA-related settings, and then reinstall the previous version of the driver?

Relevant log output

Following is error message when trying to start container:
-------
docker start ollama
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running prestart hook #0: exit status 2, stdout: , stderr: SIGSEGV: segmentation violation
PC=0x7e0e3388eed4 m=0 sigcode=1 addr=0x8
signal arrived during cgo execution
-------
A part of docker logs ollama:

----
looking up nvidia GPU memory"
cuda driver library failed to get device context 2
time=2025-08-20T08:52:10.625Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
time=2025-08-20T08:52:10.872Z level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.00440619 runner.size="14.9 GiB" runner.vram="6.6 GiB" runner.parallel=1 runner.pid=246 runner.model=/root/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583
cuda driver library failed to get device context 2
time=2025-08-20T08:52:10.873Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
time=2025-08-20T08:52:11.123Z level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.255027316 runner.size="14.9 GiB" runner.vram="6.6 GiB" runner.parallel=1 runner.pid=246 runner.model=/root/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583
cuda driver library failed to get device context 2
time=2025-08-20T08:52:11.127Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
time=2025-08-20T08:52:11.374Z level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.505478104 runner.size="14.9 GiB" runner.vram="6.6 GiB" runner.parallel=1 runner.pid=246 runner.model=/root/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583
----

OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

Since Ollama had stopped, I could not retrieve the version.

GiteaMirror added the bug label 2026-04-12 20:07:55 -05:00
Author
Owner

@nobbywfc commented on GitHub (Aug 20, 2025):

It's fixed; the following is Gemini's analysis.

Errors like "llama runner process has terminated" and SIGSEGV crashes are often not caused by the model itself, but by compatibility issues between the host OS's NVIDIA drivers and the Docker environment (specifically, the NVIDIA Container Toolkit).

The issue was resolved after I updated Docker Desktop, which confirms that the problem was indeed related to the host environment.
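For anyone hitting the same prestart-hook SIGSEGV, a quick way to narrow down whether the host driver or the Container Toolkit is at fault is to check each layer separately. This is a minimal diagnostic sketch, not an official procedure; it assumes `nvidia-smi` and `nvidia-ctk` may or may not be present and degrades gracefully when they are missing:

```shell
# 1. Is the host NVIDIA driver responding at all?
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=driver_version --format=csv,noheader \
    || echo "host driver installed but not responding"
else
  echo "nvidia-smi not found on host"
fi

# 2. Is the NVIDIA Container Toolkit installed?
if command -v nvidia-ctk >/dev/null 2>&1; then
  nvidia-ctk --version
else
  echo "nvidia-ctk not found (NVIDIA Container Toolkit missing?)"
fi

# 3. Can a container actually see the GPU? Run this manually on a
#    machine with a GPU; a failure here with a "prestart hook" error
#    points at the toolkit/Docker layer, not at Ollama or the model.
# docker run --rm --gpus all ubuntu nvidia-smi
```

If step 1 succeeds but step 3 fails with an error like the one in the log above, updating Docker Desktop (or the Container Toolkit) is usually the fix, rather than rolling the host driver back.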

Reference: github-starred/ollama#7958