[GH-ISSUE #4077] Stop running model without removing #28292

Closed
opened 2026-04-22 06:17:34 -05:00 by GiteaMirror · 13 comments
Owner

Originally created by @nitulkukadia on GitHub (May 1, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4077

To start a model we can use the command:

`ollama run <Model>`

How do we stop the model?

I tried running `ollama rm <Model>`, but that removes the model entirely, and I would have to redownload it (approximately 50 GB).

We need to run different models depending on requirements and user interest.
How can we do that without redownloading the model each time?

GiteaMirror added the feature request label 2026-04-22 06:17:34 -05:00

@Gomez12 commented on GitHub (May 1, 2024):

A normal model will stop being loaded after 5 minutes, or whenever a new model is requested.

Basically, all you have to do is run `ollama run "<other model name>"` and it will do what it needs to do.
And if you do not use a model for 5 minutes, it will be unloaded automatically.


@pdevine commented on GitHub (May 1, 2024):

@nitulkukadia If you're using `ollama run`, just hit `Ctrl + c` to stop the model from responding. If you want to unload it from memory, check out the [FAQ](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-keep-a-model-loaded-in-memory-or-make-it-unload-immediately), which covers this. The short answer is either use the `OLLAMA_KEEP_ALIVE` environment variable, or you can make a call to the API.

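The API route mentioned above can be exercised with a plain HTTP call. A minimal sketch, assuming a default local server on port 11434; `llama3` is a placeholder for whatever model is installed:

```shell
# Load a model and keep it resident indefinitely (keep_alive -1).
curl http://localhost:11434/api/generate -d '{"model": "llama3", "keep_alive": -1}'

# Later, unload it from memory immediately; nothing is deleted from
# disk, so no redownload is needed.
curl http://localhost:11434/api/generate -d '{"model": "llama3", "keep_alive": 0}'
```

The same `keep_alive` field is accepted per-request on `/api/generate` and `/api/chat`, so a client can control residency call by call instead of server-wide.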

@nitulkukadia commented on GitHub (May 1, 2024):

This seems like a fallback to me. Removing unwanted resources (deleting a model) and unloading unused resources (ones no one has used for some time) seem like different use cases.


@nitulkukadia commented on GitHub (May 7, 2024):

@pdevine The basic use case is that I do not want to show certain models in the UI drop-down for end users, either because they have been stopped for evaluation before release to end users, or because they have been stopped to reduce resource utilisation.


@pdevine commented on GitHub (May 7, 2024):

@nitulkukadia when you start the ollama server you can use the `OLLAMA_KEEP_ALIVE` environment variable and set it to `0`. That will automatically unload any model after each generation. This is covered in the FAQ I mentioned before.

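For concreteness, a sketch of both ways to set that variable; the systemd part assumes a Linux install managed as a systemd service:

```shell
# Foreground: run the server with immediate unloading after each generation.
OLLAMA_KEEP_ALIVE=0 ollama serve

# Systemd install: add the variable via a service override, then restart.
#   sudo systemctl edit ollama
# In the editor, under [Service], add:
#   Environment="OLLAMA_KEEP_ALIVE=0"
# Then:
#   sudo systemctl restart ollama
```

Values like `5m` or `-1` (never unload) are also accepted, so the same knob covers both ends of the trade-off discussed in this thread.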

@the-zucc commented on GitHub (Jun 5, 2024):

This doesn't make sense. For some people, loading the model into memory takes a long time. As such, they should be left with the choice of keeping it in memory or killing it.

The same way Docker users can issue the `docker stop <container_name>` command to stop a container when they no longer use it, ollama users should be able to issue `ollama stop <model_name>` to stop a model that was started with `OLLAMA_KEEP_ALIVE=-1` (never unload the model).

The idea might not be fully thought out, and there might be some considerations, but I really think that not letting people control which models are loaded in memory is extremely odd.


@justinsloan commented on GitHub (Jun 25, 2024):

You could also just restart the daemon process. For example: `sudo systemctl restart ollama`


@EricFrancis12 commented on GitHub (Sep 6, 2024):

You can always kill the Ollama process from the task manager.


@pdevine commented on GitHub (Sep 11, 2024):

Check out #6739

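Following the change referenced above, recent Ollama builds expose this directly from the CLI. A sketch, assuming such a build and a running server; `llama3` is a placeholder model name:

```shell
# List the models currently loaded in memory.
ollama ps

# Unload one model from memory; the weights stay on disk,
# so no redownload is needed.
ollama stop llama3
```

This is the `docker stop`-style workflow requested earlier in the thread: the model is evicted from RAM/VRAM without touching the on-disk copy.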

@darcouk commented on GitHub (Jan 14, 2025):

`Ctrl + d`


@Mugane commented on GitHub (May 11, 2025):

The notion of killing the process, or unloading the model (even if immediate), is not at all what is wanted. What we want is to stop *inference*, and to do so *without* removing the model from memory.


@cacard commented on GitHub (Oct 28, 2025):

I want to clear GPU RAM, okay?...


@pdevine commented on GitHub (Oct 28, 2025):

If you want to clear the model from memory, use `ollama stop <model>`. If you want to stop inference, you can use `Ctrl + c` in the case of the ollama CLI, or you can hang up on the HTTP stream. The model will stay loaded in memory unless you have keepalive set to `0`.

Reference: github-starred/ollama#28292