[GH-ISSUE #4050] Ollama becomes very slow to answer questions after ~30 minutes #2513

Closed
opened 2026-04-12 12:50:11 -05:00 by GiteaMirror · 18 comments
Owner

Originally created by @nunostiles on GitHub (Apr 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4050

What is the issue?

I've already tried several different models, but the issue always persists: after ~30 minutes it starts taking ages to answer questions, and it happens even with saved models. Is there anything I can do? Is this in fact a bug?
For the first 30 minutes it runs normally without any slowness. Any suggestions would be appreciated.
Thank you!

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.32

GiteaMirror added the bug, performance, needs more info labels 2026-04-12 12:50:12 -05:00
Author
Owner

@sammcj commented on GitHub (Apr 30, 2024):

Two ideas on this:

  1. Are you sure it’s not just the model unloading when idle? (I think this defaults to 5 minutes)

  2. I’ve noticed that occasionally, after some time idle, Ollama seems to switch from using the GPU to CPU only. Can you confirm whether inference is occurring on your GPU or CPU?
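
A quick way to check both ideas from a shell. This is only a sketch: it assumes a default local install listening on port 11434, and `llama3:8b` below stands in for whichever model is loaded. The `keep_alive` request field and the `ollama ps` PROCESSOR column are documented Ollama features.

```
# Is the model still loaded, and is it on GPU or CPU?
# (the PROCESSOR column shows e.g. "100% GPU" or a GPU/CPU split)
ollama ps

# Keep the model resident instead of unloading after the default
# 5-minute idle timeout (a duration like "30m" also works; -1 = forever)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3:8b",
  "keep_alive": -1
}'

# Cross-check GPU activity while a prompt is being evaluated
watch -n 1 nvidia-smi
```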

Author
Owner

@nunostiles commented on GitHub (Apr 30, 2024):

Hello,
No, that's not it: I can type for around 30 minutes straight before the terrible slowness starts. I've already tested with CPU only and it's the exact same symptom; after around 30 minutes it starts getting slow.

Author
Owner

@pdevine commented on GitHub (Apr 30, 2024):

Are you using the CLI or the API? Do all subsequent requests continue to be slow, or just the first time you do it after 30 minutes?
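
One way to answer this quantitatively in the CLI is the `--verbose` flag, which prints per-request timing statistics (the same fields as the benchmark further down this thread), so the slowdown can be pinned to model loading versus token generation. The model name here is just a placeholder:

```
# Prints total/load/prompt-eval/eval durations after each response
ollama run llama3:8b --verbose
```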

Author
Owner

@nunostiles commented on GitHub (Apr 30, 2024):

I'm using the CLI. The first requests are always fine; for the first 30 minutes or so there are no problems. One other thing: once the slowness starts, I can save the model, restart the computer, and load it again, and the problem persists. Very weird!

Author
Owner

@mistakenideas commented on GitHub (May 30, 2024):

I’ve seen something similar running on Linux, albeit with CPU only. I think it may be related to the context size: when this fills up, things slow to a crawl. Responses go from around 30 seconds to 7 minutes!

Halving the context from 2048 to 1024 makes the problem happen sooner but also less pronounced (responses take ‘only’ three and a half minutes).

I’ll do some more investigation and compare to using llama.cpp directly.
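
If the context-size theory holds, it can be tested directly by overriding the context window. A minimal sketch, assuming the standard `num_ctx` parameter (2048 by default at the time of this thread) and a local server on the default port:

```
# Per-request, through the API:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3:8b",
  "prompt": "Why is the sky blue?",
  "options": { "num_ctx": 1024 }
}'

# Or inside an interactive `ollama run` session:
#   /set parameter num_ctx 1024
```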

Author
Owner

@eyalshalom commented on GitHub (Jul 8, 2024):

I am experiencing the same: after about 30 minutes it becomes very slow.

Author
Owner

@eyalshalom commented on GitHub (Jul 8, 2024):

Do you have any idea what causes it?

Author
Owner

@nunostiles commented on GitHub (Jul 8, 2024):

I don't really know, and I've already tried several machines... no idea.

Author
Owner

@dhiltgen commented on GitHub (Oct 23, 2024):

Please give the new 0.4.0 RC release a try and see if it resolves this slowdown in performance. We've rewritten the way we cache, which should improve performance and reliability.

https://github.com/ollama/ollama/releases
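
For reference, checking the running version and upgrading on Linux with the standard one-line installer (re-running it replaces the installed binary):

```
ollama -v
curl -fsSL https://ollama.com/install.sh | sh
```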

Author
Owner

@pdevine commented on GitHub (Dec 19, 2024):

I tried this with Linux and an RTX 4090. The results:

Initial speed:

```
total duration:       3.245905622s
load duration:        15.127592ms
prompt eval count:    363 token(s)
prompt eval duration: 3ms
prompt eval rate:     121000.00 tokens/s
eval count:           434 token(s)
eval duration:        3.225s
eval rate:            134.57 tokens/s
```

After 40 minutes:

```
total duration:       5.115713412s
load duration:        890.785686ms
prompt eval count:    810 token(s)
prompt eval duration: 127ms
prompt eval rate:     6377.95 tokens/s
eval count:           518 token(s)
eval duration:        3.881s
eval rate:            133.47 tokens/s
```

That includes having to reload the model into memory.

I'm going to go ahead and close the issue, but we can reopen it if someone is able to reproduce it.
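
For anyone trying to reproduce this, a rough polling sketch that logs generation speed over time; it assumes a local server on the default port, `jq` installed, and the non-streaming `/api/generate` response fields `eval_count` and `eval_duration` (the latter in nanoseconds):

```
#!/usr/bin/env bash
# Send the same prompt every 5 minutes; a slowdown after ~30 minutes
# shows up as a drop in the logged tokens/s.
while true; do
  resp=$(curl -s http://localhost:11434/api/generate -d '{
    "model": "llama3:8b",
    "prompt": "Why is the sky blue?",
    "stream": false
  }')
  echo "$(date +%T) eval rate: $(echo "$resp" | jq -r '.eval_count / (.eval_duration / 1e9)') tokens/s"
  sleep 300
done
```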

Author
Owner

@alex-bratu commented on GitHub (Feb 18, 2025):

I have the exact same problem. It only works okay for 20-30 minutes and then becomes very slow.

Author
Owner

@pdevine commented on GitHub (Feb 18, 2025):

@alex-bratu can you post some more details? What hardware are you using? What version of Ollama? Can you run `ollama ps` when it slows down?

Author
Owner

@alex-bratu commented on GitHub (Feb 19, 2025):

```
# ollama ps
NAME         ID              SIZE      PROCESSOR    UNTIL
llama3:8b    365c0bd3c000    6.7 GB    100% GPU     Stopping...
```

NVIDIA H100

Author
Owner

@pdevine commented on GitHub (Feb 19, 2025):

OK, that's weird. Is there some other process which is trying to stop the model? Does it succeed in stopping?

Author
Owner

@alex-bratu commented on GitHub (Feb 20, 2025):

As far as I can tell, there is no other process trying to stop the model. Here I restarted the model, but it seems like it still works very slowly.

![Image](https://github.com/user-attachments/assets/8eaf1e28-72f6-4289-8869-27c8dd7e6745)
![Image](https://github.com/user-attachments/assets/b2a06aa6-da82-4135-850d-b2555ca375ae)

Author
Owner

@pdevine commented on GitHub (Feb 21, 2025):

@alex-bratu does it stay in the `Stopping...` state? That was the thing I was commenting on being weird. I'm struggling to duplicate this, though, at least on a 4090.

Author
Owner

@alex-bratu commented on GitHub (Feb 21, 2025):

This is a fresh server with no other services running on it. All I have done is install Ollama using the one-line installer and run the llama3:8b model. I've tried using `ollama run llama3:8b` as well as curl commands, and after about 30 minutes it becomes very, very slow. Even if I restart the service or reload the model, the slowness doesn't change. I have to restart the whole server to make it work well, but then again after approximately 30 minutes it starts behaving like this. I will attach the logs from the `ollama serve` terminal from when it is running slowly.

[logs.txt](https://github.com/user-attachments/files/18903725/logs.txt)
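
On a systemd install (which the one-line installer sets up, with a unit named `ollama`), the server logs can also be pulled with standard tooling:

```
journalctl -u ollama --no-pager    # full server log
journalctl -u ollama -f            # follow live while reproducing the slowdown
```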

Author
Owner

@gamer4et commented on GitHub (Mar 1, 2025):

I'm hitting the same issue on a Tesla P40 with deepseek-r1:14b.

Update: after restarting the ollama service (systemd) it starts working normally again. I'm testing on the same context with temperature 0. After attempting concurrent usage (I tried making 2 requests at the same time), response time went up to about 2x normal (compared to no concurrent requests). GPU usage also dropped by about 2 percent.
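
If concurrency is a factor, the server's parallelism can be pinned explicitly. A sketch using the documented `OLLAMA_NUM_PARALLEL` variable on a systemd install:

```
# Open an override for the ollama unit, add the Environment line
# under [Service], then restart:
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_NUM_PARALLEL=1"
sudo systemctl restart ollama
```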


Reference: github-starred/ollama#2513