[GH-ISSUE #1493] A way to prevent downloaded models from being deleted #805

Closed
opened 2026-04-12 10:28:52 -05:00 by GiteaMirror · 19 comments

Originally created by @t18n on GitHub (Dec 13, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1493

I downloaded around 50 GB worth of models to use with Big AGI. For some reason, when I reloaded the Big AGI interface, all the models were gone. The models get removed too easily, and it takes a lot of time to download them. Is there a way to prevent that? Can I save the models somewhere and point Ollama to it instead?

@BruceMacD commented on GitHub (Dec 13, 2023):

Hi @t18n, it sounds like there is a chance that the models are still on your system; fully downloaded models shouldn't get deleted automatically.

Try making sure Ollama is running in the same context (as a service versus as the user). You can manually check whether the models are still around locally by looking in both `~/.ollama/models` and `/usr/share/ollama/.ollama/models`.
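For example, a quick check along these lines (a sketch; the paths are the defaults mentioned above, so adjust if you've set `OLLAMA_MODELS`):

```sh
# Check both possible model locations: user install vs. system service install.
for dir in ~/.ollama/models /usr/share/ollama/.ollama/models; do
  echo "== $dir =="
  ls -lah "$dir" 2>/dev/null || echo "not found"
done
```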

@t18n commented on GitHub (Dec 13, 2023):

@BruceMacD I checked, and it seems they were not deleted. However, when I run `ollama list`, only the latest model shows up.

![image](https://github.com/jmorganca/ollama/assets/14198542/f7ebde64-8921-450b-a608-cee96a6f1ff3)

@BruceMacD commented on GitHub (Dec 15, 2023):

@t18n Interesting, are there duplicates in `/usr/share/ollama/.ollama/models`?

@t18n commented on GitHub (Dec 18, 2023):

@BruceMacD I use macOS 14.0 and there is no `/usr/share/ollama/` folder on my computer.

@t18n commented on GitHub (Dec 18, 2023):

Good news: all the lost models seem to be back, and I have no clue why. I did press the Refresh button in Big AGI several times before they showed up, though.

![image](https://github.com/jmorganca/ollama/assets/14198542/3423f6f9-bf3d-4dd1-8404-ceea9f21340a)

@pdevine commented on GitHub (Jan 25, 2024):

Going to go ahead and close this since it seems like it got fixed.

@YuanfengZhang commented on GitHub (Mar 25, 2024):

It happened to me several times on my Ubuntu 22.04 machine. Here is the safe way I found to stop and run Ollama (see the sketch after this list):

  1. Stop it using `systemctl stop ollama.service` instead of Ctrl+C.
  2. Start it using `ollama serve` instead of `systemctl restart ollama.service` or `systemctl start ollama.service`.

If that fails, try another run.
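The same workaround as a small script (a sketch based on the steps above; the unit name `ollama.service` is the default from the Linux installer):

```sh
#!/bin/sh
# Stop the systemd-managed instance cleanly instead of killing it with Ctrl+C...
sudo systemctl stop ollama.service
# ...then run the server in the foreground, outside of systemd.
ollama serve
```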

@kmanan commented on GitHub (Nov 1, 2024):

I'm running into this issue. Ollama and Ollama WebUI installed via Portainer on Ubuntu.

@blackpyramid88 commented on GitHub (Feb 20, 2025):

I'm having this problem on a Windows computer, running from Command Prompt. I restarted my computer, all the models were gone, and I had to download them again. When I ran `ollama list` I got nothing... any help would be appreciated.

@sebastianlau commented on GitHub (Feb 20, 2025):

I've just had the exact same thing happen -- I thought it was my open-webui setup (Ollama on Windows, open-webui in Docker), updated both, then realised the model folders (blobs, manifests) were now empty.

@blackpyramid88 commented on GitHub (Feb 20, 2025):

![Image](https://github.com/user-attachments/assets/aeaf3177-df46-4f30-97cc-c08022993db1)

![Image](https://github.com/user-attachments/assets/ce02d440-44a2-44fb-8f67-d9f0a4e8f2b0)

![Image](https://github.com/user-attachments/assets/51decec9-88b1-43a7-81a0-2ee5a52e5e1f)

![Image](https://github.com/user-attachments/assets/85ae13bf-a597-4b57-a04e-a4bf9424f9f4)

![Image](https://github.com/user-attachments/assets/f1f6c737-ae7e-4a94-9fda-074946803ed7)

I spent a week working on my models and created 3 new models, only to see them all gone. I've tried every command to pull them out of memory, but nothing worked. I travel quite a bit, and I'm not sure how I can incorporate my business this way if I keep losing all of my data. Any help recovering would be appreciated.

UPDATE: OK, I figured out how to find all the models. Run the `ls` command in Command Prompt once you're in Ubuntu, then load the model back into memory, and it's back! I was afraid I had lost it all, but I got them all back! Thanks guys for the help, and hopefully these screenshots help anyone else who's new like me and prevent the frustration I went through!
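For anyone else hitting this, a rough sketch of the recovery steps described above (the model name is just an example; substitute your own):

```sh
# In the Ubuntu/WSL shell where Ollama runs:
ollama list            # see which models the server currently knows about
ls ~/.ollama/models    # confirm the blobs/manifests are still on disk
ollama run llama2      # loading a model brings it back into use
```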

@sebastianlau commented on GitHub (Feb 20, 2025):

@blackpyramid88 you should check whether they have been deleted from the filesystem, or just from Ollama.

The default location for models is something like `C:\Users\<username>\AppData\Local\Ollama\models` -- if they're gone from there, you may be able to use a file recovery tool (e.g. PhotoRec), but I wouldn't pin my hopes on that. If they still exist there, you can probably re-add them (but I'm not familiar with how).
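If the files are still intact but Ollama lost track of them, one thing worth trying (an untested sketch; `OLLAMA_MODELS` is the environment variable visible in the server config log later in this thread) is pointing the server back at the surviving directory:

```sh
# Re-point Ollama at an existing models directory and restart the server.
# The path below is an example only; use wherever your blobs/manifests survived.
export OLLAMA_MODELS="/path/to/models"   # on Windows, set OLLAMA_MODELS as a system environment variable instead
ollama serve &                           # or restart the service/tray app with the variable set
sleep 2
ollama list                              # models found under that directory should be listed again
```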

@pdevine commented on GitHub (Feb 21, 2025):

@blackpyramid88 @sebastianlau can you describe exactly what happened? Did you upgrade something? Did you change anything like environment variables?

@sebastianlau commented on GitHub (Feb 21, 2025):

@pdevine AFAIK it occurred as a result of an upgrade of either Ollama or open-webui (I did three updates in quick succession, so I'm not sure exactly when it happened -- I also could not find anything in the logs related to it).

- Host machine is running Windows Server 2022
- Using ollama-windows native (uses the tray app/service -- updated via "an update is available, click to restart")
- Docker is hosted here (WSL2 integration) for running open-webui (updated via `docker compose pull` / recreate, etc.)
- Nginx is running in another Docker container as a reverse proxy to open-webui's container
- Models were stored in a non-default location (`D:\ollama`)

I think my update path for open-webui was 0.5.4 -> 0.5.14 -> 0.5.15; I started on a much earlier version though, so I receive the "vacuum db" warning on startup.

Env vars have not been changed at all (though I did need to update my nginx.conf to sort out websockets).

@DewiarQR commented on GitHub (Feb 21, 2025):

I have already deleted all the models twice. Apparently, when fully loaded, there is a moment when Ollama deletes its models for some reason. I have no strength left to reinstall them; I'm at my wit's end.

@DewiarQR commented on GitHub (Feb 21, 2025):

```
dewiar@dewiar:~$ systemctl stop ollama.service
Warning: The unit file, source configuration file or drop-ins of ollama.service changed on disk. Run 'systemctl daemon-reload' to reload units.
dewiar@dewiar:~$ ollama serve
2025/02/21 14:27:05 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/dewiar/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-21T14:27:05.297+03:00 level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-02-21T14:27:05.297+03:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:  export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-02-21T14:27:05.297+03:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-02-21T14:27:05.298+03:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
time=2025-02-21T14:27:05.298+03:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-21T14:27:05.440+03:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-59933eb2-145a-7eb0-60a3-0c0cec1c5c79 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="23.6 GiB" available="23.0 GiB"
```

@blackpyramid88 commented on GitHub (Feb 21, 2025):

> I have already deleted all the models twice. Apparently, when fully loaded, there is a moment when Ollama deletes its models for some reason. I have no strength left to reinstall them; I'm at my wit's end.

I just posted screenshots I hope will help you!

@blackpyramid88 commented on GitHub (Feb 21, 2025):

> @blackpyramid88 @sebastianlau can you describe exactly what happened? Did you upgrade something? Did you change anything like environment variables?

When I shut down or restarted the computer, all of my models in Ollama had disappeared from my directory. Now I know to run the `ls` command and then load the ones I want. Thanks!

@blackpyramid88 commented on GitHub (Feb 21, 2025):

> @blackpyramid88 you should check whether they have been deleted from the filesystem, or just from Ollama.
>
> The default location for models is something like `C:\Users\<username>\AppData\Local\Ollama\models` -- if they're gone from there, you may be able to use a file recovery tool (e.g. PhotoRec), but I wouldn't pin my hopes on that. If they still exist there, you can probably re-add them (but I'm not familiar with how).

I got it! Just needed to run the `ls` command and load them back into Ollama.
