[GH-ISSUE #9263] There is no model displayed in "ollama list". #52548

Closed
opened 2026-04-28 23:38:58 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @lizhichao999 on GitHub (Feb 21, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9263

What is the issue?

I had already downloaded the models deepseek-r1:14b and qwen2.5, and they worked normally before. When I tried to use them today, the models started downloading again, and when I ran the "ollama list" command, the previously downloaded models were not shown.

The local model data amounts to dozens of gigabytes.
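A quick way to check whether that data is still on disk is to look inside the models directory directly. A minimal sketch for cmd.exe, assuming Ollama's usual manifests\ and blobs\ layout and the D:\A_AIModels path reported in the server log below:

```shell
:: Check the custom models directory reported by the server log
:: (Ollama normally keeps manifests\ and blobs\ subdirectories here).
dir D:\A_AIModels\manifests\registry.ollama.ai\library
dir D:\A_AIModels\blobs

:: Check the default location used when OLLAMA_MODELS is not set, in case a
:: second server instance stored (or re-downloaded) the models there instead.
dir %USERPROFILE%\.ollama\models\manifests\registry.ollama.ai\library
```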

Relevant log output

C:\Users\HKZC>ollama list
NAME    ID    SIZE    MODIFIED

C:\Users\HKZC>ollama start
2025/02/21 11:02:28 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\A_AIModels OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-21T11:02:28.628+08:00 level=INFO source=images.go:432 msg="total blobs: 22"
time=2025-02-21T11:02:28.629+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-21T11:02:28.632+08:00 level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.5)"
time=2025-02-21T11:02:28.634+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cuda_v12_avx rocm_avx cpu cpu_avx cpu_avx2 cuda_v11_avx]"
time=2025-02-21T11:02:28.634+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-21T11:02:28.635+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-02-21T11:02:28.635+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-02-21T11:02:28.917+08:00 level=INFO source=gpu.go:334 msg="detected OS VRAM overhead" id=GPU-3abb0ca9-b1a2-9769-fb03-5fc339761fce library=cuda compute=7.5 driver=12.6 name="NVIDIA T600 Laptop GPU" overhead="533.3 MiB"
time=2025-02-21T11:02:28.922+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-3abb0ca9-b1a2-9769-fb03-5fc339761fce library=cuda variant=v12 compute=7.5 driver=12.6 name="NVIDIA T600 Laptop GPU" total="4.0 GiB" available="3.2 GiB"
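The log shows that the manually started server (version 0.5.5) is configured with OLLAMA_MODELS:D:\\A_AIModels. To see which server instance the `ollama list` client is actually reaching, the REST API can be queried directly; a small sketch, assuming `curl` is available (it ships with recent Windows 10/11 builds) and using the standard `/api/tags` endpoint that backs `ollama list`:

```shell
:: List the models known to the server on the default port; this is the same
:: data "ollama list" prints. The version endpoint helps tell apart two
:: differently configured server instances.
curl http://localhost:11434/api/tags
curl http://localhost:11434/api/version
```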

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.5.4

GiteaMirror added the bug label 2026-04-28 23:38:58 -05:00
Author
Owner

@lzzzzzzzzz commented on GitHub (Feb 21, 2025):

Same issue here: my model images disappeared, and this is the second time it has happened.

Author
Owner

@rick-github commented on GitHub (Feb 21, 2025):

If you did `ollama list` before `ollama start`, then you are running two ollama servers. One is probably started when your system boots, and is configured to store the models in `.ollama\models`. The other is manually started by you via `ollama start`, and is configured to store the models in `D:\A_AIModels`. Either stop running the server manually, or configure the startup server to use `D:\A_AIModels` for storing models.
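For the second option, one way to make the boot-started server use the same directory is to set `OLLAMA_MODELS` as a persistent user environment variable and restart Ollama. A minimal sketch, assuming the Windows tray app picks up user environment variables after a restart:

```shell
:: Persist OLLAMA_MODELS for the current user (takes effect in new processes).
setx OLLAMA_MODELS "D:\A_AIModels"

:: Then quit Ollama from the system tray (or end the ollama.exe processes) and
:: start it again so the background server reads the new value.
```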

Author
Owner

@ENUMERA8OR commented on GitHub (Feb 26, 2025):

Use the Docker ollama image to run your models. It should resolve most of these issues.
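For reference, a typical invocation of the official image looks like the sketch below (based on the ollama/ollama image on Docker Hub; the --gpus=all flag assumes the NVIDIA Container Toolkit is installed):

```shell
:: Run the Ollama server in a container, persisting models in a named volume so
:: they survive container restarts, and exposing the default API port.
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

:: Pull and run a model inside the container.
docker exec -it ollama ollama run deepseek-r1:14b
```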

Reference: github-starred/ollama#52548