[GH-ISSUE #12059] The Ollama server keeps failing after running for a while in Docker #33769

Closed
opened 2026-04-22 16:45:56 -05:00 by GiteaMirror · 1 comment

Originally created by @packermaster on GitHub (Aug 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12059

What is the issue?

I am using Ollama to serve a large model from a Docker container. After running for a while, the Ollama service crashes. When I enter the container to investigate, the GPU is no longer usable: running `nvidia-smi` fails with "Failed to initialize NVML: Unknown Error". Why is this happening?
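For context, the failing check can be reproduced from the host like this (the container name `ollama` is an assumption for illustration; adjust it to your deployment):

```shell
# Hypothetical container name "ollama"; adjust to match your deployment.
# Check whether the NVIDIA device nodes are still visible inside the container:
docker exec ollama ls -l /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm

# Query the driver via NVML; this is the command that fails with
# "Failed to initialize NVML: Unknown Error" once the issue occurs:
docker exec ollama nvidia-smi
```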

Relevant log output


OS

Linux, Docker

GPU

Nvidia

CPU

Intel

Ollama version

No response

GiteaMirror added the bug label 2026-04-22 16:45:56 -05:00
<!-- gh-comment-id:3217947749 --> @rick-github commented on GitHub (Aug 24, 2025): https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#linux-docker
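The linked troubleshooting page covers this symptom. A mitigation commonly reported for the NVML "Unknown Error" inside Docker (not confirmed as the cause of this specific report) is to pass the NVIDIA device nodes to the container explicitly, so that a host-side cgroup refresh does not revoke device access. A hedged sketch, assuming a single-GPU host with the NVIDIA Container Toolkit installed:

```shell
# Workaround sketch (assumptions: single-GPU host, NVIDIA Container Toolkit
# installed, standard Ollama image). Explicit --device flags have been
# reported to keep GPU access stable across systemd cgroup reloads:
docker run -d --gpus=all \
  --device /dev/nvidia0 \
  --device /dev/nvidiactl \
  --device /dev/nvidia-uvm \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama
```

If the GPU has already disappeared from a running container, restarting that container typically restores access until the next occurrence.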

Reference: github-starred/ollama#33769