[GH-ISSUE #7606] vram usage does not go back down after model unloads - stuck in Stopping... #30613

Closed
opened 2026-04-22 10:25:55 -05:00 by GiteaMirror · 27 comments

Originally created by @CraftMaster163 on GitHub (Nov 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7606

Originally assigned to: @dhiltgen on GitHub.

### What is the issue?

When I set keep alive to 0, the memory usage does not go all the way back down. It also uses system RAM when VRAM is still available.

GPU: 7800 XT
Platform: Windows
ROCm version: 6.1

### OS

Windows

### GPU

AMD

### CPU

AMD

### Ollama version

0.4.1

GiteaMirror added the amd, bug labels 2026-04-22 10:25:55 -05:00

@CraftMaster163 commented on GitHub (Nov 11, 2024):

[server-3.log](https://github.com/user-attachments/files/17694531/server-3.log)
[app-4.log](https://github.com/user-attachments/files/17694533/app-4.log)

@rick-github commented on GitHub (Nov 11, 2024):

What's the output of `ollama ps`?


@CraftMaster163 commented on GitHub (Nov 13, 2024):

![image](https://github.com/user-attachments/assets/f6fd1af9-9f79-4b55-a61a-6dad3fcf531e)

Screenshot of before and after closing Ollama, and the output of `ollama ps`:

![image](https://github.com/user-attachments/assets/669e9805-883a-44c3-9840-fc3f631d1aef)

@CraftMaster163 commented on GitHub (Nov 13, 2024):

![image](https://github.com/user-attachments/assets/0f15959f-bee8-4188-af1b-34314dca4299)

Here is when a model runs and stops.

@rick-github commented on GitHub (Nov 13, 2024):

Looks like VRAM usage returns to the level it was at before the model was run. Other processes are using VRAM - browser, media player, GUI, etc.


@CraftMaster163 commented on GitHub (Nov 13, 2024):

No, if I quit Ollama it goes back down, so that is Ollama using it.


@CraftMaster163 commented on GitHub (Nov 13, 2024):

I found a workaround: for n8n (which is what I'm mainly using Ollama for at the moment), I just tell it to kill Ollama, which forces it to dump the memory. But that should not be required.
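
Purely as an illustration of that workaround (a sketch; the process names are assumed from elsewhere in this thread, and killing processes this way should not normally be needed), a kill step on Windows could run something like:

```console
rem Force-stop the model runner and the tray app (process names assumed)
taskkill /F /IM ollama_llama_server.exe
taskkill /F /IM "ollama app.exe"
```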


@dhiltgen commented on GitHub (Nov 13, 2024):

@CraftMaster163 can you check detailed processes to see if `ollama_llama_server` is still running when you see the memory leakage, or if only the main `ollama` process is running? When the model goes idle, we should shut down `ollama_llama_server`, which should release GPU memory; however, we do GPU discovery in the main `ollama` process. That code should unload the GPU libraries when not actively querying, to avoid additional VRAM usage.
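
A quick way to check this on Windows (a minimal sketch, assuming the default process names) is to list the matching processes from a terminal:

```console
rem Lists ollama.exe and ollama_llama_server.exe if they are still running
tasklist | findstr /i "ollama"
```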

Looking at your driver version, this might be a dup of #7107

Does the behavior disappear if you downgrade the driver to 24.8.1?


@CraftMaster163 commented on GitHub (Nov 13, 2024):

Yes, I just checked and `ollama_llama_server` is still running. If I close it, the VRAM goes down again.


@CraftMaster163 commented on GitHub (Nov 13, 2024):

Also, `ollama.exe` does not release its VRAM usage. If I stop it, it restarts fine and goes back down to my normal VRAM usage.


@dhiltgen commented on GitHub (Nov 13, 2024):

@CraftMaster163 can you clarify if `ollama ps` shows nothing running while `ollama_llama_server` is still running?


@CraftMaster163 commented on GitHub (Nov 13, 2024):

![image](https://github.com/user-attachments/assets/f17f221c-21dc-4b79-a60e-17944cce479c)

I assume /bye unloads the model.

@dhiltgen commented on GitHub (Nov 13, 2024):

> I assume /bye unloads the model.

That just closes the client, but the model will stay loaded until the timeout expires. By default this is 5m, but you can override it. Can you verify whether the VRAM usage drops back down once `ollama ps` no longer shows a model loaded?

https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-keep-a-model-loaded-in-memory-or-make-it-unload-immediately
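
For example (a sketch based on the FAQ linked above; the model name is just a placeholder), a loaded model can be told to unload immediately by sending a request with `keep_alive` set to 0, and `ollama ps` then confirms whether anything is still loaded:

```console
curl http://localhost:11434/api/generate -d '{"model": "llama3.2:3b", "keep_alive": 0}'
ollama ps
```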


@CraftMaster163 commented on GitHub (Nov 13, 2024):

![image](https://github.com/user-attachments/assets/59ad05e5-27bd-46f3-82fa-a9d5777624ab)

Hm, now it closes. When I removed the commands from n8n, it does stop, but ollama.exe does not clear its cache. Here is a photo showing the graph going down after closing Ollama:

![image](https://github.com/user-attachments/assets/13ab5c1b-3cc8-46e6-93fc-59601b9c9af2)

@dhiltgen commented on GitHub (Nov 13, 2024):

> Hm, now it closes. When I removed the commands from n8n, it does stop, but ollama.exe does not clear its cache. Here is a photo showing the graph going down after closing Ollama.

Can you clarify? When `ollama ps` shows no model loaded, does `ollama_llama_server` stop, and does the VRAM usage drop back down to the baseline you are expecting? If so, this doesn't sound like a bug.

I'm not familiar with n8n, but if it calls the Ollama API and keeps the model loaded, that might explain what you are seeing.


@CraftMaster163 commented on GitHub (Nov 13, 2024):

No, I told it to unload the model. There is a spike; it unloads the context but does not fully unload the model. Here are screenshots of the three spikes.

Normal usage:

![image](https://github.com/user-attachments/assets/f7278702-cad9-4efa-81c4-71553d6947d9)

After Ollama runs with keep alive 0:

![image](https://github.com/user-attachments/assets/aa0fec5d-069a-4d13-a702-984224bfa254)

And after I quit Ollama:

![image](https://github.com/user-attachments/assets/b53c0e81-88f3-4e83-97fb-f9bd860b4386)

@CraftMaster163 commented on GitHub (Nov 13, 2024):

If I let my Discord bot that uses Ollama run for a while, it goes to full usage. It's just llama3.2:3b, which does not need that much memory.


@CraftMaster163 commented on GitHub (Nov 13, 2024):

Here is a video of the issue: https://drive.google.com/file/d/1x_CYMP45GIFcMIAYwTgio3_oYyrUsPNP/view?usp=sharing

@dhiltgen commented on GitHub (Apr 9, 2025):

Related issues: #8178, #9617, #8969, #10119


@dhiltgen commented on GitHub (Apr 9, 2025):

There seems to be a race somewhere in the scheduler under heavy load, possibly related to clients closing connections prematurely. If people are still seeing models get stuck in a "Stopping..." state in the `ollama ps` output and the model never actually unloads, please try running the server with `OLLAMA_DEBUG=1` and share the logs, including the model load and the eventual stuck state.
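
For reference (a minimal sketch; on Windows the variable can instead be set as a user environment variable before restarting the app), debug logging is enabled by setting `OLLAMA_DEBUG` when starting the server:

```console
OLLAMA_DEBUG=1 ollama serve
```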


@rick-github commented on GitHub (Apr 9, 2025):

Not necessarily high load. If the model loses coherence, it will remain in the "Stopping..." state until the token limit is exceeded or the client disconnects. If the context buffer is large and the client doesn't have a timeout, this can take a while.

```console
$ ollama -v
ollama version is 0.6.5
$ ollama ps
NAME    ID    SIZE    PROCESSOR    UNTIL
$ curl -s localhost:11434/api/generate -d '{"model":"gemma3:4b","prompt":'"$((echo write a story based on the following text: ; head -8000 /etc/dictionaries-common/words) | jq -sR)"',"options":{"num_ctx":32768}}' -o /dev/null &
[1] 3044530
$ ollama stop gemma3:4b ; date
Thu Apr 10 12:17:06 AM CEST 2025
$ while : ; do ollama ps ; date ; sleep 1m ; done
...
NAME         ID              SIZE      PROCESSOR    UNTIL
gemma3:4b    c0494fe00251    6.4 GB    100% GPU     Stopping...
Thu Apr 10 12:33:40 AM CEST 2025
...
```
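
A hedged client-side mitigation implied by the above: if the client enforces its own timeout, its disconnect ends the "Stopping..." wait once generation is abandoned. For example, with curl's `--max-time` option (the value and prompt here are placeholders):

```console
curl -s --max-time 120 localhost:11434/api/generate \
  -d '{"model":"gemma3:4b","prompt":"write a story ...","options":{"num_ctx":32768}}' -o /dev/null
```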

@Anaphylaxis commented on GitHub (Apr 18, 2025):

I'm experiencing an issue where loading multiple models (one after the other) seems to leave them persisting in VRAM even after the models are completely stopped. Model after model will lower the GPU% and raise the CPU%, and even with no models in `ollama ps`, the shared VRAM is maxed out on my GPU after multiple sessions. Quitting `ollama app.exe` completely released all my VRAM, and on Ollama restart models are using 100% GPU again, so I'm 100% sure it's Ollama failing to reclaim the space, perhaps when a model is booted off for an embedding model. In the logs it shows that my available memory is less and less every time.


@rick-github commented on GitHub (Apr 18, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) and the output of `nvidia-smi` will aid in debugging.


@Anaphylaxis commented on GitHub (Apr 18, 2025):

> [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) and the output of `nvidia-smi` will aid in debugging.

I am using ROCm on Windows, so there is no `nvidia-smi` for me; that's why I referenced the shared VRAM usage in Task Manager.
I didn't want to post hundreds of lines of logs if unnecessary, but here they are:

[ollamalog.txt](https://github.com/user-attachments/files/19813583/ollamalog.txt)

You can see we start out at `memory.available="[23.7 GiB]"` and towards the end we're at `memory.available="[12.7 GiB]"`, and 63 GPU layers, which quickly starts dropping.

`time=2025-04-18T10:47:45.183-04:00 level=INFO source=sched.go:187 msg="one or more GPUs detected that are unable to accurately report free memory - disabling default concurrency"` could be related, but I assume this is talking about the virtual display driver I'm using to remote into this machine.

@rick-github commented on GitHub (Apr 18, 2025):

This might be a different issue in that you don't have models stuck in a "Stopping..." state. It also seems to affect both the old llama.cpp runner and the new go runner. I suggest opening a new ticket so this one doesn't get mixed in with a similar but different problem.


@Anaphylaxis commented on GitHub (Apr 18, 2025):

Thanks, that's what I thought. I'm not sure whether to make a llama.cpp issue or an Ollama issue. I see some relevant issues but they're all closed.


@rick-github commented on GitHub (Apr 18, 2025):

It seems to affect both runners, so it might be a ROCm issue.

Reference: github-starred/ollama#30613