[GH-ISSUE #3060] Ollama Server is unavailable after some time #48395

Closed
opened 2026-04-28 08:04:03 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @vrubzov1957 on GitHub (Mar 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3060

Sometimes, after working with the same AI model for a while (roughly an hour), the Ollama server becomes unavailable.

I have to restart it via `ollama run MODEL` — the server starts in the background, but shuts down again after a while.
This is very inconvenient when using a different frontend (like Ollama Web-UI): we have to connect to the PC manually and start the Ollama server again.
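Until the crash itself is fixed, a small watchdog can bring the server back automatically instead of someone logging into the PC by hand. A minimal sketch, assuming the default Ollama address `http://localhost:11434` and an `ollama serve` command on PATH (both assumptions, not taken from the log above):

```python
# Minimal watchdog sketch. Assumptions (not from the issue): Ollama
# listens on the default http://localhost:11434, and `ollama serve`
# is on PATH; poll interval and log path are arbitrary choices.
import subprocess
import time
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default address


def server_is_up(url: str = OLLAMA_URL, timeout: float = 5.0) -> bool:
    """Return True if the Ollama root endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


def watchdog(poll_seconds: float = 60.0) -> None:
    """Poll the server and relaunch `ollama serve` when it stops answering."""
    while True:
        if not server_is_up():
            # Relaunch in the background; capture output for later debugging.
            with open("ollama_restart.log", "ab") as log:
                subprocess.Popen(["ollama", "serve"], stdout=log, stderr=log)
        time.sleep(poll_seconds)
```

Running `watchdog()` from a startup or scheduled task keeps the frontend reachable between crashes, but it is only a stopgap and does not address the underlying failure.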

LOG
[server_ollama.log](https://github.com/ollama/ollama/files/14562402/server_ollama.log)

In this log the server became unavailable, was manually restarted, then became unavailable again, at intervals of 1–2 hours.

OS: Windows

Author
Owner

@jmorganca commented on GitHub (Mar 11, 2024):

Sorry about this. This seems to be an out-of-memory error with CUDA; I'll merge this with https://github.com/ollama/ollama/issues/1952

Author
Owner

@vrubzov1957 commented on GitHub (Mar 11, 2024):

@jmorganca ok.
Also I did note in #1952 - because may be different root cause of it. May be memory leaking. Because with selected model it works perfect, VRAM for model is enough. And after 20-30 newly-topic promts requests - Ollama server goes offline.


Reference: github-starred/ollama#48395