[GH-ISSUE #8641] ollama list command not listing installed models #5597

Closed
opened 2026-04-12 16:52:18 -05:00 by GiteaMirror · 20 comments

Originally created by @Straykinich on GitHub (Jan 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8641

What is the issue?

I installed ollama, then changed the drive used to store models by following this article:

"https://medium.com/@dpn.majumder/how-to-deploy-and-experiment-with-ollama-models-on-your-local-machine-windows-34c967a7ab0e"

Then I installed deepseek-r1:7b. I have since restarted and also tried reinstalling everything, but the ollama list command still doesn't show the installed models.

The models themselves run correctly, though.

Any solution?

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-12 16:52:18 -05:00

@rick-github commented on GitHub (Jan 28, 2025):

> The ollama list command still doesn't show the installed models.
>
> The models themselves run correctly, though.

How are you running models if you can't list them?

@Straykinich commented on GitHub (Jan 28, 2025):

> > The ollama list command still doesn't show the installed models.
> >
> > The models themselves run correctly, though.
>
> How are you running models if you can't list them?

I installed deepseek-r1:7b with the command ollama run deepseek-r1:7b,

and when I run the same command again it runs the model instead of downloading it again.

@rick-github commented on GitHub (Jan 28, 2025):

> and when I run the same command again it runs the model instead of downloading it again.

That's how it works: download once, run many times. But once downloaded, the model should be visible when you run ollama list.

@Straykinich commented on GitHub (Jan 28, 2025):

> That's how it works: download once, run many times. But once downloaded, the model should be visible when you run ollama list.

Is this happening because I changed the model storage drive?

@rick-github commented on GitHub (Jan 28, 2025):

> Is this happening because I changed the model storage drive?

What is "this" in your question? That models get downloaded? If you configured ollama to use a different storage location without moving the original models, then yes, models will be re-downloaded.

@Straykinich commented on GitHub (Jan 28, 2025):

> > Is this happening because I changed the model storage drive?
>
> What is "this" in your question? That models get downloaded? If you configured ollama to use a different storage location without moving the original models, then yes, models will be re-downloaded.

I changed the model storage location right after installing ollama, and then I installed the deepseek model.

Can you help me check whether I changed the model storage location correctly? I only followed this article up to step 2:

"https://medium.com/@dpn.majumder/how-to-deploy-and-experiment-with-ollama-models-on-your-local-machine-windows-34c967a7ab0e"

Also, when I now try to run the model with ollama run deepseek-r1:7b, it starts downloading instead of running it.

But I can see the model in my specified folder.

@rick-github commented on GitHub (Jan 28, 2025):

Look in the server log (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) for OLLAMA_MODELS; that will tell you where ollama will look for downloaded models. It is logged every time ollama starts, so look for the last entry to see where the current server is looking. If that's not the same as your specified folder, then the environment variable is not correctly configured.
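
For reference (this note is not part of rick-github's reply, and the D:\OllamaModels path below is only an illustration): on Windows the variable has to be set where the server process can see it, and Ollama has to be restarted afterwards. A minimal PowerShell sketch:

setx OLLAMA_MODELS "D:\OllamaModels"

Then quit Ollama from the system tray and start it again; the next "server config" line in the log should show OLLAMA_MODELS pointing at the new folder.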

@Straykinich commented on GitHub (Jan 28, 2025):

> Look in the server log for OLLAMA_MODELS; that will tell you where ollama will look for downloaded models. It is logged every time ollama starts, so look for the last entry to see where the current server is looking. If that's not the same as your specified folder, then the environment variable is not correctly configured.

Tell me one thing: should the specified location for models end in a models folder, like D:\Name\[6]OLLAMA_MODELS\models?

What if I had just specified D:\Name\[6]OLLAMA_MODELS\ instead?

So is not storing the models in a models folder the culprit?

Let me try reinstalling everything.

@Straykinich commented on GitHub (Jan 28, 2025):

Here is my log file:

2025/01/29 01:13:20 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\[0]Abhishek\[6]OLLAMA_MODELS\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-01-29T01:13:20.587+05:30 level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-01-29T01:13:20.587+05:30 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-01-29T01:13:20.588+05:30 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-01-29T01:13:20.589+05:30 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
time=2025-01-29T01:13:20.589+05:30 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-29T01:13:20.589+05:30 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-01-29T01:13:20.589+05:30 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-01-29T01:13:21.406+05:30 level=INFO source=gpu.go:334 msg="detected OS VRAM overhead" id=GPU-a3db9583-ad44-583c-f821-8453307fac00 library=cuda compute=7.5 driver=12.7 name="NVIDIA GeForce GTX 1650" overhead="638.5 MiB"
time=2025-01-29T01:13:21.408+05:30 level=INFO source=types.go:131 msg="inference compute" id=GPU-a3db9583-ad44-583c-f821-8453307fac00 library=cuda variant=v12 compute=7.5 driver=12.7 name="NVIDIA GeForce GTX 1650" total="4.0 GiB" available="3.2 GiB"
[GIN] 2025/01/29 - 01:14:25 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/29 - 01:14:25 | 404 | 530.3µs | 127.0.0.1 | POST "/api/show"
time=2025-01-29T01:14:27.691+05:30 level=INFO source=download.go:175 msg="downloading aabd4debf0c8 in 12 100 MB part(s)"
time=2025-01-29T01:19:20.778+05:30 level=INFO source=download.go:175 msg="downloading 369ca498f347 in 1 387 B part(s)"
time=2025-01-29T01:19:22.421+05:30 level=INFO source=download.go:175 msg="downloading 6e4c38e1172f in 1 1.1 KB part(s)"
time=2025-01-29T01:19:24.035+05:30 level=INFO source=download.go:175 msg="downloading f4d24e9138dd in 1 148 B part(s)"
time=2025-01-29T01:19:25.676+05:30 level=INFO source=download.go:175 msg="downloading a85fe2a2e58e in 1 487 B part(s)"
[GIN] 2025/01/29 - 01:19:29 | 200 | 5m4s | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/01/29 - 01:19:29 | 200 | 36.2913ms | 127.0.0.1 | POST "/api/show"
time=2025-01-29T01:19:29.979+05:30 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=D:\[0]Abhishek\[6]OLLAMA_MODELS\models\blobs\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc gpu=GPU-a3db9583-ad44-583c-f821-8453307fac00 parallel=4 available=3458675508 required="1.9 GiB"
time=2025-01-29T01:19:30.003+05:30 level=INFO source=server.go:104 msg="system memory" total="7.3 GiB" free="1.4 GiB" free_swap="5.6 GiB"
time=2025-01-29T01:19:30.004+05:30 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[3.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.9 GiB" memory.required.partial="1.9 GiB" memory.required.kv="224.0 MiB" memory.required.allocations="[1.9 GiB]" memory.weights.total="976.1 MiB" memory.weights.repeating="793.5 MiB" memory.weights.nonrepeating="182.6 MiB" memory.graph.full="299.8 MiB" memory.graph.partial="482.3 MiB"
time=2025-01-29T01:19:30.010+05:30 level=INFO source=server.go:376 msg="starting llama server" cmd="C:\Users\abhis\AppData\Local\Programs\Ollama\lib\ollama\runners\cuda_v12_avx\ollama_llama_server.exe runner --model D:\[0]Abhishek\[6]OLLAMA_MODELS\models\blobs\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 6 --no-mmap --parallel 4 --port 3596"
time=2025-01-29T01:19:30.281+05:30 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-01-29T01:19:30.281+05:30 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-01-29T01:19:30.282+05:30 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-01-29T01:19:35.047+05:30 level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce GTX 1650, compute capability 7.5, VMM: yes
time=2025-01-29T01:19:35.099+05:30 level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(clang)" threads=6
time=2025-01-29T01:19:35.102+05:30 level=INFO source=.:0 msg="Server listening on 127.0.0.1:3596"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce GTX 1650) - 3298 MiB free
llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from D:\[0]Abhishek\[6]OLLAMA_MODELS\models\blobs\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 1.5B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 4: general.size_label str = 1.5B
llama_model_loader: - kv 5: qwen2.block_count u32 = 28
llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
llama_model_loader: - kv 7: qwen2.embedding_length u32 = 1536
llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 8960
llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 12
llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 2
llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,151936] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
time=2025-01-29T01:19:35.292+05:30 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 169 tensors
llama_model_loader: - type q6_K: 29 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 151936
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 1536
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 12
llm_load_print_meta: n_head_kv = 2
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 6
llm_load_print_meta: n_embd_k_gqa = 256
llm_load_print_meta: n_embd_v_gqa = 256
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8960
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 1.5B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 1.78 B
llm_load_print_meta: model size = 1.04 GiB (5.00 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 1.5B
llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: CPU model buffer size = 125.19 MiB
llm_load_tensors: CUDA0 model buffer size = 934.70 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 224.00 MiB
llama_new_context_with_model: KV self size = 224.00 MiB, K (f16): 112.00 MiB, V (f16): 112.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.34 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 299.75 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 19.01 MiB
llama_new_context_with_model: graph nodes = 986
llama_new_context_with_model: graph splits = 2
time=2025-01-29T01:19:37.045+05:30 level=INFO source=server.go:594 msg="llama runner started in 6.76 seconds"
[GIN] 2025/01/29 - 01:19:37 | 200 | 7.1413861s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/01/29 - 01:19:46 | 200 | 1.4640412s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/01/29 - 01:19:53 | 200 | 1.0429ms | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/29 - 01:19:53 | 200 | 526.6µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/01/29 - 01:19:55 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/29 - 01:19:55 | 200 | 0s | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/01/29 - 01:19:56 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/29 - 01:19:56 | 200 | 533µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/01/29 - 01:20:13 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/29 - 01:20:13 | 200 | 0s | 127.0.0.1 | GET "/api/tags"

@rick-github commented on GitHub (Jan 28, 2025):

This is where ollama is configured to look for models.

OLLAMA_MODELS:D:\[0]Abhishek\[6]OLLAMA_MODELS\models

This is the GGUF file for deepseek-r1:1.5b-qwen-distill-q4_K_M that ollama loaded for inference.

D:\[0]Abhishek\[6]OLLAMA_MODELS\models\blobs\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc

@Straykinich commented on GitHub (Jan 28, 2025):

> This is where ollama is configured to look for models.
>
> OLLAMA_MODELS:D:\[0]Abhishek\[6]OLLAMA_MODELS\models
>
> This is the GGUF file for deepseek-r1:1.5b-qwen-distill-q4_K_M that ollama loaded for inference.
>
> D:\[0]Abhishek\[6]OLLAMA_MODELS\models\blobs\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc

And that is the location I specified.

So what's the problem? I tried to run the deepseek model some time after installing it, and now it just downloads again instead of running, and it also appears to have been deleted from D:\[0]Abhishek\[6]OLLAMA_MODELS\models.

@rick-github commented on GitHub (Jan 28, 2025):

It's not automatically deleted. The deepseek model is not currently stored in D:\[0]Abhishek\[6]OLLAMA_MODELS\models, so when you ask ollama to run it, it has to be downloaded. The previously downloaded model needs to be located and moved to the new storage area, or you can simply download it into the new storage location.
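
A hedged aside that is not part of the original reply: if the earlier download went to the default Windows location (normally C:\Users\<user>\.ollama\models), the existing files can be moved into the new OLLAMA_MODELS folder instead of being re-downloaded, for example:

robocopy "%USERPROFILE%\.ollama\models" "D:\[0]Abhishek\[6]OLLAMA_MODELS\models" /E /MOVE

After the blobs and manifests folders are moved and Ollama is restarted, ollama list should pick the models up from the new location.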

@Straykinich commented on GitHub (Jan 28, 2025):

> It's not automatically deleted. The deepseek model is not currently stored in D:\[0]Abhishek\[6]OLLAMA_MODELS\models, so when you ask ollama to run it, it has to be downloaded. The previously downloaded model needs to be located and moved to the new storage area, or you can simply download it into the new storage location.

I am so confused! Can you tell me what to do step by step, even if it means reinstalling everything? I want the models stored on the D drive, not on C.

@rick-github commented on GitHub (Jan 28, 2025):

According to the log, ollama is currently configured to store models in D:\[0]Abhishek\[6]OLLAMA_MODELS\models. If that's where you want them, all you need to do now is download the models you want to store there.

ollama pull deepseek-r1:1.5b-qwen-distill-q4_K_M

Then, when you want to use a model:

ollama run deepseek-r1:1.5b-qwen-distill-q4_K_M

If you want to download and run a model straight away, you can do that in one command:

ollama run deepseek-r1:1.5b-qwen-distill-q4_K_M

The model will be downloaded as if you had done a pull command, and then the model will be loaded as if you had just done a run command.

If you find you are running out of disk space, you can delete a model with rm. ollama does not delete models otherwise:

ollama rm deepseek-r1:1.5b-qwen-distill-q4_K_M
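
One small addition that is not from the quoted reply: after the pull finishes, you can confirm the model is registered with

ollama list

which prints NAME, ID, SIZE and MODIFIED columns. If the table is still empty even though the pull succeeded, a common cause is that the running server was started with a different OLLAMA_MODELS value than the one you expect, so it is worth re-checking the last "server config" line in the log.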

@Straykinich commented on GitHub (Jan 29, 2025):

> According to the log, ollama is currently configured to store models in D:\[0]Abhishek\[6]OLLAMA_MODELS\models. If that's where you want them, all you need to do now is download the models you want to store there.
>
> ollama pull deepseek-r1:1.5b-qwen-distill-q4_K_M
>
> Then, when you want to use a model:
>
> ollama run deepseek-r1:1.5b-qwen-distill-q4_K_M
>
> If you want to download and run a model straight away, you can do that in one command:
>
> ollama run deepseek-r1:1.5b-qwen-distill-q4_K_M
>
> The model will be downloaded as if you had done a pull command, and then the model will be loaded as if you had just done a run command.
>
> If you find you are running out of disk space, you can delete a model with rm. ollama does not delete models otherwise:
>
> ollama rm deepseek-r1:1.5b-qwen-distill-q4_K_M

I followed all the steps as you mentioned, but the ollama list command still doesn't show the installed model. In the video below you can see that I can use deepseek-r1:1.5b, yet the list command doesn't show it:

https://imgur.com/a/m68W4bY

@Straykinich commented on GitHub (Jan 29, 2025):

Update: just after restarting my PC, it deleted deepseek and is now downloading it again.

@Straykinich commented on GitHub (Jan 29, 2025):

I tried installing the model without changing anything, and it worked flawlessly. So I think I will stick with that.

The problem was changing the model storage location; I don't know why.

Anyway, thank you for the help @rick-github

@dean-jl commented on GitHub (May 23, 2025):

Hi folks. I'm having the same issue as Straykinich had with ollama not finding the models I downloaded. If you want me to open another issue on this I can, just let me know.

OS
MacOS Sequoia 15.5

Ollama version
0.7.1

I'm probably doing something wrong, but I can't figure out what it is. It looks like Ollama isn't reading my OLLAMA_MODELS environment variable.

ollama --version
ollama version is 0.7.1

more server.log
time=2025-05-23T18:26:42.286-04:00 level=INFO source=routes.go:1205 msg="server config" env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/dean/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2025-05-23T18:26:42.287-04:00 level=INFO source=images.go:463 msg="total blobs: 0"
time=2025-05-23T18:26:42.287-04:00 level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-23T18:26:42.287-04:00 level=INFO source=routes.go:1258 msg="Listening on 127.0.0.1:11434 (version 0.7.1)"
time=2025-05-23T18:26:42.344-04:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="48.0 GiB" available="48.0 GiB"

env | grep OLLAMA
OLLAMA_MODELS=/Volumes/OWC2T/Users/dean/Ollama_Models

ollama list
NAME ID SIZE MODIFIED

ls /Volumes/OWC2T/Users/dean/Ollama_Models
blobs manifests

ls /Volumes/OWC2T/Users/dean/Ollama_Models/blobs
sha256-05a61d37b08453e59290add468e3bb2f688e23a01e967fecb0e2fa41218cea76
sha256-0ba8f0e314b4264dfd19df045cde9d4c394a52474bf92ed6a3de22a4ca31a177
sha256-0d8282caa612c1f8fea92cac270905dcd27403272abdfb4edc58627eb7b0d327
.
.
.
73 files in blobs directory

Any help would be appreciated. Thanks.

@rick-github commented on GitHub (May 23, 2025):

You need to start the server with OLLAMA_MODELS set in the environment of the server. If you are running ollama as a service on macOS, this is done with launchctl (https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-mac). If you are running the ollama server from the command line (ollama serve), then it will use the value from your personal environment.
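
For illustration (this exact command isn't in the original comment, but launchctl setenv is the mechanism the linked FAQ describes), using the path from your env output:

launchctl setenv OLLAMA_MODELS /Volumes/OWC2T/Users/dean/Ollama_Models

Then quit and restart the Ollama app so the new server process inherits the variable; the "server config" line in server.log should then show the /Volumes/OWC2T path instead of /Users/dean/.ollama/models.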

@dean-jl commented on GitHub (May 24, 2025):

Perfect, thank you @rick-github

Reference: github-starred/ollama#5597