[GH-ISSUE #2358] Models autodelete? #1366

Closed
opened 2026-04-12 11:11:59 -05:00 by GiteaMirror · 17 comments

Originally created by @SinanAkkoyun on GitHub (Feb 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2358

Originally assigned to: @jmorganca on GitHub.

Hi! I noticed that as soon as I kill ollama (because one cannot unload models from VRAM manually) and start `ollama serve` on my own, all models delete themselves.

Is that a bug or a feature (perhaps ensuring non-corrupted files)?

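A note for anyone landing here: models can be asked to unload from VRAM without killing the server, by sending a request with `keep_alive` set to 0. A minimal sketch, assuming a default server on 127.0.0.1:11434 ("llama2" is a placeholder model name):

```
# Ask the server to unload the model immediately instead of keeping it
# resident for the default 5 minutes.
curl http://127.0.0.1:11434/api/generate -d '{"model": "llama2", "keep_alive": 0}'
```
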
GiteaMirror added the question, linux labels 2026-04-12 11:11:59 -05:00

@alpe commented on GitHub (Feb 5, 2024):

The models, license, prompts and other metadata are persisted in `~/.ollama/models` (on OSX). A `kill -9` does not corrupt them.
Can you provide more context, such as the commands you are using, OS, version, etc.?

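A quick way to verify that persistence, assuming a default install (the paths move if `OLLAMA_MODELS` is set):

```
# Snapshot the model store, kill the server, then compare: manifests
# and blobs should be unchanged by a kill -9.
ls -R ~/.ollama/models/manifests
du -sh ~/.ollama/models/blobs
```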

@SinanAkkoyun commented on GitHub (Feb 5, 2024):

I am running an Ubuntu 22.04 server with NVIDIA, latest ollama installed via the script, and I occasionally run `kill -9` and `pkill`.

It seems random, but I recall that sometimes when I switch from the service to running `ollama serve` in my home dir etc., it deletes all models and I have to download them all again.

I have also encountered ollama freezing when the VRAM is already in use, though I am not certain that is the actual cause. But that is not as big of a deal as the model deletion.

Can I somehow provide more info?


@SinanAkkoyun commented on GitHub (Feb 12, 2024):

It happened again, after I killed the service, stopped the service, and ran `ollama serve` in another directory.


@XujieYuan commented on GitHub (Feb 13, 2024):

I'm facing the same issue


@DarkCat5501 commented on GitHub (Mar 1, 2024):

I had the same problem. As soon as I stopped the service, all my models were gone, but when I restarted the computer they all came back. However, the ones I had reinstalled are now gone.


@bmizerany commented on GitHub (Mar 12, 2024):

@SinanAkkoyun Can you confirm you have an empty `~/.ollama/models` directory?


@bmizerany commented on GitHub (Mar 12, 2024):

@DarkCat5501 @XujieYuan Can either of you provide some simple steps / script to reproduce? I'm unable to reproduce this, but maybe I'm missing something?


@bmizerany commented on GitHub (Mar 12, 2024):

Is it possible you're all seeing this while running two `ollama serve`s at the same time? If so, that may explain models being clobbered.

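A sketch of one way to test that hypothesis, assuming the second instance is bound to a different port so both can run side by side:

```
# Terminal 1: the regular instance (default port 11434), e.g. the
# systemd service or the desktop app.
ollama serve

# Terminal 2: a second instance as the current user on another port;
# pull a model here and watch whether the first instance's models survive.
OLLAMA_HOST=127.0.0.1:11435 ollama serve
```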

@navr32 commented on GitHub (Mar 26, 2024):

Hi all! I think I have roughly the same issue, and I haven't been able to reproduce the problem so far. I wanted to try building the ollama-rocm AUR package, which is version 0.1.30; 0.1.30 crashes on my system when I try to run it. Perhaps going from 0.1.30 back to 0.1.27 corrupted the database? I run the Manjaro distribution package, version 0.1.27-1.
My thinking was that if I restarted `ollama run` with the same model names that were loaded previously, it would find them, since they are still on my hard drive: `.ollama/models/blobs` still has all the files, 317 GB in total. Luckily I had recorded the names of all the models I had installed from `ollama list`, so I have them all in a text file:

jmorgan/qwen:latest b5dc5e784f2a 394 MB 4 days ago
codegpt-codellama:latest 108cf74a8e87 1.6 GB 4 days ago
codellama:7b-instruct-q6_K 3a5b549ceb36 5.5 GB 4 days ago
codeup:13b-llama2-chat-q8_0 773d7e80460c 13 GB 4 days ago
deepseek-coder:6.7b-base-q8_0 570c490f997d 7.2 GB 4 days ago
deepseek-coder:6.7b-instruct-q8_0 54b58e32d587 7.2 GB 4 days ago

So I thought that re-running the models one by one would let ollama find the files and register each model again in the database, without having to re-download it. But no: each model is fully downloaded again even though it is present in the blobs directory, and even though the old blob files are still there. So now the old model files are not seen by ollama and cannot be loaded. After re-registering just 3 models this way, my directory went from 317 GB to 336.9 GB, which is a big problem for disk usage. Does someone have a trick to force ollama to register the models in the blobs directory again, even if the database is broken? Many thanks for this great project. Have a nice day.

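One possible workaround to try, under the assumption that the relevant blob is an intact GGUF weights file (the digest below is a placeholder, and not every blob is a GGUF; templates, parameters and licenses are stored as blobs too):

```
# Point a Modelfile at the existing blob so `ollama create` imports it
# as a local GGUF instead of re-downloading the weights.
cat > Modelfile <<'EOF'
FROM /home/user/.ollama/models/blobs/sha256-<digest>
EOF
ollama create codellama-restored -f Modelfile
```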

@bmizerany commented on GitHub (Apr 8, 2024):

@navr32 Do you happen to know if you were running two instances of ollama on the same machine at the same time? If you can try to reproduce by running two at the same time, that would be helpful.


@AlfSelen commented on GitHub (Jul 25, 2024):

OS: Windows 11
Version: ollama version is 0.2.8

All files and folders in the directory (`C:\Users\USER\.ollama\models\blobs`) which do not "belong", i.e. were sourced from other locations such as Hugging Face, or are even plain text files, get deleted when `ollama serve` is run (which apparently runs automatically at startup with a default install of ollama).

So if a model is not installed by ollama, it seems the file gets deleted whenever you run `ollama serve`.

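If what you are seeing is the startup pruning of unreferenced blobs, the `OLLAMA_NOPRUNE` setting (visible in the server config dump later in this thread) might help; a sketch, assuming it is set before the server starts:

```
# Linux/macOS: start the server with blob pruning disabled, so files
# ollama does not recognize are left in place.
OLLAMA_NOPRUNE=1 ollama serve

# Windows: set it persistently, then restart ollama.
# setx OLLAMA_NOPRUNE 1
```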

@jmorganca commented on GitHub (Sep 4, 2024):

@AlfSelen and @SinanAkkoyun is this still happening?


@SinanAkkoyun commented on GitHub (Sep 4, 2024):

I will do thorough testing and report back


@AlfSelen commented on GitHub (Sep 11, 2024):

> @AlfSelen and @SinanAkkoyun is this still happening?

@jmorganca
Tested on ollama version 0.3.10
Windows 11 home 23H2

I tested creating files and folders, and copying the existing files under new names, within this folder:
`C:\Users\USER\.ollama\models\blobs`

The result is as before: manually added files/folders within the blobs folder get deleted at startup of ollama (`ollama serve`).
So I think my "issue" is rather a feature request for a way to manually register models with ollama.

If I understand the first comments of this issue correctly, those users have a problem where models downloaded through ollama also get deleted, which would be a bug; but to be clear, I have not experienced models downloaded through ollama getting deleted.


@dhiltgen commented on GitHub (Sep 30, 2024):

I believe the most likely explanation of this behavior is running `ollama serve` under a different user. When you use our install script, the systemd service registers a user `ollama` with a home directory of `/usr/share/ollama/`, so models are stored in `/usr/share/ollama/.ollama/models`. If you run `ollama serve` as your own user account, this will use `$HOME/.ollama/models`. You can adjust this setting as described here: https://github.com/ollama/ollama/blob/main/docs/faq.md#where-are-models-stored
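
A quick way to check which store each instance is actually using, assuming the systemd service installed by the script:

```
# Show the user and environment the service runs with.
systemctl cat ollama.service

# Compare the two candidate model stores.
sudo ls /usr/share/ollama/.ollama/models/manifests
ls ~/.ollama/models/manifests

# Or point a manually started server at the service's store
# (requires read access to that directory).
OLLAMA_MODELS=/usr/share/ollama/.ollama/models ollama serve
```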

@AlfSelen your scenario is different. I think you're describing #335


@DewiarQR commented on GitHub (Feb 21, 2025):

All my models have already been deleted twice. Apparently, at some point under full load, Ollama deletes its models for some reason. I don't have the energy left to reinstall them; I'm ready to give up.


@DewiarQR commented on GitHub (Feb 21, 2025):

dewiar@dewiar:~$ systemctl stop ollama.service
Warning: The unit file, source configuration file or drop-ins of ollama.service changed on disk. Run 'systemctl daemon-reload' to reload units.
dewiar@dewiar:~$ ollama serve
2025/02/21 14:27:05 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434/ OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/dewiar/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost/ https://localhost/ http://localhost/:* https://localhost/:* http://127.0.0.1/ https://127.0.0.1/ http://127.0.0.1/:* https://127.0.0.1/:* http://0.0.0.0/ https://0.0.0.0/ http://0.0.0.0/:* https://0.0.0.0/:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-21T14:27:05.297+03:00 level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-02-21T14:27:05.297+03:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-02-21T14:27:05.297+03:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-02-21T14:27:05.298+03:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
time=2025-02-21T14:27:05.298+03:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-21T14:27:05.440+03:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-59933eb2-145a-7eb0-60a3-0c0cec1c5c79 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="23.6 GiB" available="23.0 GiB"
