[GH-ISSUE #9266] couldn't find ggml_backend #68096

Closed
opened 2026-05-04 12:31:19 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @Hsq12138 on GitHub (Feb 21, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9266

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I have tried reinstalling many times, but the problem still exists. How can I solve it? It forces my PC to run the model on the CPU instead of the GPU.
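
A quick way to see where a loaded model is actually running (a side check, assuming a recent Ollama CLI) is `ollama ps`, whose PROCESSOR column reports the CPU/GPU split:

```shell
# After loading the model, list running models; PROCESSOR shows e.g.
# "100% GPU" when fully offloaded, or a split like "52%/48% CPU/GPU".
ollama ps
```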

Relevant log output

2025/02/21 13:56:53 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-21T13:56:53.134+08:00 level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-02-21T13:56:53.134+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-21T13:56:53.134+08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.12-rc1)"
time=2025-02-21T13:56:53.134+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-21T13:56:53.135+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-02-21T13:56:53.135+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-02-21T13:56:53.297+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-ecc0382b-7d7c-7b61-8572-a21b10ac9fcd library=cuda compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4080 SUPER" overhead="535.9 MiB"
time=2025-02-21T13:56:53.298+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-ecc0382b-7d7c-7b61-8572-a21b10ac9fcd library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4080 SUPER" total="16.0 GiB" available="14.7 GiB"
[GIN] 2025/02/21 - 13:57:02 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/21 - 13:57:04 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/21 - 14:01:40 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/21 - 14:01:58 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/21 - 14:07:56 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/21 - 14:08:33 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/21 - 14:08:33 | 404 |      1.0667ms |       127.0.0.1 | POST     "/api/show"
time=2025-02-21T14:08:34.980+08:00 level=INFO source=download.go:176 msg="downloading dde5aa3fc5ff in 16 126 MB part(s)"
[GIN] 2025/02/21 - 14:16:41 | 200 |          8m8s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/02/21 - 14:16:48 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/21 - 14:16:48 | 404 |       503.5µs |       127.0.0.1 | POST     "/api/show"
time=2025-02-21T14:16:50.318+08:00 level=INFO source=download.go:176 msg="downloading dde5aa3fc5ff in 16 126 MB part(s)"
[GIN] 2025/02/21 - 14:18:39 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/21 - 14:19:52 | 201 |   57.0902188s |       127.0.0.1 | POST     "/api/blobs/sha256:553aa261cfb6856c595c9fefdb5453b98fdef331bf2ca918a5e0a23aa254d022"
[GIN] 2025/02/21 - 14:19:52 | 200 |     56.7086ms |       127.0.0.1 | POST     "/api/create"
[GIN] 2025/02/21 - 14:20:10 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/21 - 14:20:10 | 200 |       509.2µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/02/21 - 14:20:23 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/21 - 14:20:23 | 200 |     10.2159ms |       127.0.0.1 | POST     "/api/show"
time=2025-02-21T14:20:23.626+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-02-21T14:20:23.626+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-02-21T14:20:23.626+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-02-21T14:20:23.626+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-02-21T14:20:23.627+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-02-21T14:20:23.627+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-02-21T14:20:23.627+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-02-21T14:20:23.627+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-02-21T14:20:23.638+08:00 level=INFO source=server.go:97 msg="system memory" total="95.8 GiB" free="83.1 GiB" free_swap="83.6 GiB"
time=2025-02-21T14:20:23.638+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-02-21T14:20:23.638+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-02-21T14:20:23.639+08:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=22 layers.split="" memory.available="[13.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="28.3 GiB" memory.required.partial="13.3 GiB" memory.required.kv="384.0 MiB" memory.required.allocations="[13.3 GiB]" memory.weights.total="25.0 GiB" memory.weights.repeating="23.5 GiB" memory.weights.nonrepeating="1.5 GiB" memory.graph.full="307.0 MiB" memory.graph.partial="916.1 MiB"
time=2025-02-21T14:20:23.642+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\\Users\\zrway\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model D:\\ollama\\models\\blobs\\sha256-553aa261cfb6856c595c9fefdb5453b98fdef331bf2ca918a5e0a23aa254d022 --ctx-size 2048 --batch-size 512 --n-gpu-layers 22 --threads 6 --no-mmap --parallel 1 --port 57323"
time=2025-02-21T14:20:23.699+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-02-21T14:20:23.699+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-02-21T14:20:23.699+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-02-21T14:20:23.721+08:00 level=INFO source=runner.go:932 msg="starting go runner"
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-sandybridge.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-skylakex.dll
time=2025-02-21T14:20:23.765+08:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(clang)" threads=6
time=2025-02-21T14:20:23.765+08:00 level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:57323"
llama_model_loader: loaded meta data with 25 key-value pairs and 579 tensors from D:\ollama\models\blobs\sha256-553aa261cfb6856c595c9fefdb5453b98fdef331bf2ca918a5e0a23aa254d022 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Checkpoint 887 Merged
llama_model_loader: - kv   3:                         general.size_label str              = 15B
llama_model_loader: - kv   4:                          qwen2.block_count u32              = 48
llama_model_loader: - kv   5:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   6:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   7:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv   8:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv   9:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                          general.file_type u32              = 1
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = deepseek-r1-qwen
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type  f16:  338 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 5120
llm_load_print_meta: n_layer          = 48
llm_load_print_meta: n_head           = 40
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 5
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 13824
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 14B
llm_load_print_meta: model ftype      = F16
llm_load_print_meta: model params     = 14.77 B
llm_load_print_meta: model size       = 27.51 GiB (16.00 BPW) 
llm_load_print_meta: general.name     = Checkpoint 887 Merged
llm_load_print_meta: BOS token        = 151646 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
time=2025-02-21T14:20:23.949+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors:          CPU model buffer size = 28173.21 MiB
llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 2048
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 512
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 1000000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
llama_kv_cache_init:        CPU KV buffer size =   384.00 MiB
llama_new_context_with_model: KV self size  =  384.00 MiB, K (f16):  192.00 MiB, V (f16):  192.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.60 MiB
llama_new_context_with_model:        CPU compute buffer size =   307.00 MiB
llama_new_context_with_model: graph nodes  = 1686
llama_new_context_with_model: graph splits = 1
time=2025-02-21T14:20:30.209+08:00 level=INFO source=server.go:596 msg="llama runner started in 6.51 seconds"
[GIN] 2025/02/21 - 14:20:30 | 200 |    6.6298258s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/02/21 - 14:20:56 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/21 - 14:20:56 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/02/21 - 14:21:14 | 200 |   33.3763755s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/02/21 - 14:21:18 | 200 |         4m30s |       127.0.0.1 | POST     "/api/pull"

OS

Windows

GPU

Intel, Nvidia

CPU

Intel

Ollama version

0.5.12

GiteaMirror added the bug, windows labels 2026-05-04 12:31:20 -05:00
Author
Owner

@rick-github commented on GitHub (Feb 21, 2025):

Set OLLAMA_DEBUG=1 in the server environment (https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server) and post the resulting logs.
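
On Windows this can be done from a terminal before relaunching the app; a minimal sketch, assuming the default tray install (the log path below is the usual default location, not taken from this report):

```shell
# Quit Ollama from the tray icon first, then persist the variable for the current user:
setx OLLAMA_DEBUG 1
# Relaunch Ollama, reproduce the issue, and grab the server log from:
#   %LOCALAPPDATA%\Ollama\server.log
```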

Author
Owner

@Hsq12138 commented on GitHub (Feb 21, 2025):

Set OLLAMA_DEBUG=1 in the server environment and post the resulting logs.

2025/02/21 23:08:26 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-21T23:08:26.801+08:00 level=INFO source=images.go:432 msg="total blobs: 2"
time=2025-02-21T23:08:26.802+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-21T23:08:26.802+08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.12-rc1)"
time=2025-02-21T23:08:26.802+08:00 level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-02-21T23:08:26.803+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-21T23:08:26.803+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-02-21T23:08:26.803+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-02-21T23:08:26.803+08:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-02-21T23:08:26.803+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvml.dll
time=2025-02-21T23:08:26.803+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\nvml.dll C:\Program Files\NVIDIA\CUDNN\v9.7\bin\nvml.dll C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvml.dll C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp\nvml.dll C:\Users\zrway\AppData\Local\Programs\Ollama\nvml.dll C:\Program Files\Common Files\Oracle\Java\javapath\nvml.dll C:\Windows\system32\nvml.dll C:\Windows\nvml.dll C:\Windows\System32\Wbem\nvml.dll C:\Windows\System32\WindowsPowerShell\v1.0\nvml.dll C:\Windows\System32\OpenSSH\nvml.dll C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvml.dll C:\Program Files\Bandizip\nvml.dll C:\Program Files\dotnet\nvml.dll C:\Program Files\Git\cmd\nvml.dll D:\ai\ffmpeg-2024-03-28-git-5d71f97e0e-full_build\bin\nvml.dll C:\Program Files\NVIDIA Corporation\NVIDIA app\NvDLISR\nvml.dll C:\Users\zrway\AppData\Local\Microsoft\WindowsApps\python.exe\nvml.dll C:\Users\zrway\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\Scripts\nvml.dll C:\Program Files\NVIDIA Corporation\Nsight Compute 2025.1.0\nvml.dll C:\Program Files\MySQL\MySQL Shell 8.0\bin\nvml.dll C:\Users\zrway\AppData\Local\Microsoft\WindowsApps\nvml.dll C:\Users\zrway\AppData\Local\Programs\Microsoft VS Code\bin\nvml.dll D:\ai\ffmpeg-2024-03-28-git-5d71f97e0e-essentials_build\bin\nvml.dll C:\Users\zrway\AppData\Local\Programs\Ollama\nvml.dll C:\Users\zrway\AppData\Local\Programs\Ollama\nvml.dll C:\Users\zrway\.lmstudio\bin\nvml.dll c:\Windows\System32\nvml.dll]"
time=2025-02-21T23:08:26.804+08:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvml.dll"
time=2025-02-21T23:08:26.805+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[C:\Windows\system32\nvml.dll c:\Windows\System32\nvml.dll]"
time=2025-02-21T23:08:26.839+08:00 level=DEBUG source=gpu.go:111 msg="nvidia-ml loaded" library=C:\Windows\system32\nvml.dll
time=2025-02-21T23:08:26.839+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvcuda.dll
time=2025-02-21T23:08:26.839+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\nvcuda.dll C:\Program Files\NVIDIA\CUDNN\v9.7\bin\nvcuda.dll C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvcuda.dll C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp\nvcuda.dll C:\Users\zrway\AppData\Local\Programs\Ollama\nvcuda.dll C:\Program Files\Common Files\Oracle\Java\javapath\nvcuda.dll C:\Windows\system32\nvcuda.dll C:\Windows\nvcuda.dll C:\Windows\System32\Wbem\nvcuda.dll C:\Windows\System32\WindowsPowerShell\v1.0\nvcuda.dll C:\Windows\System32\OpenSSH\nvcuda.dll C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvcuda.dll C:\Program Files\Bandizip\nvcuda.dll C:\Program Files\dotnet\nvcuda.dll C:\Program Files\Git\cmd\nvcuda.dll D:\ai\ffmpeg-2024-03-28-git-5d71f97e0e-full_build\bin\nvcuda.dll C:\Program Files\NVIDIA Corporation\NVIDIA app\NvDLISR\nvcuda.dll C:\Users\zrway\AppData\Local\Microsoft\WindowsApps\python.exe\nvcuda.dll C:\Users\zrway\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\Scripts\nvcuda.dll C:\Program Files\NVIDIA Corporation\Nsight Compute 2025.1.0\nvcuda.dll C:\Program Files\MySQL\MySQL Shell 8.0\bin\nvcuda.dll C:\Users\zrway\AppData\Local\Microsoft\WindowsApps\nvcuda.dll C:\Users\zrway\AppData\Local\Programs\Microsoft VS Code\bin\nvcuda.dll D:\ai\ffmpeg-2024-03-28-git-5d71f97e0e-essentials_build\bin\nvcuda.dll C:\Users\zrway\AppData\Local\Programs\Ollama\nvcuda.dll C:\Users\zrway\AppData\Local\Programs\Ollama\nvcuda.dll C:\Users\zrway\.lmstudio\bin\nvcuda.dll c:\Windows\System32\nvcuda.dll]"
time=2025-02-21T23:08:26.840+08:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvcuda.dll"
time=2025-02-21T23:08:26.841+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[C:\Windows\system32\nvcuda.dll]
initializing C:\Windows\system32\nvcuda.dll
dlsym: cuInit - 00007FF97B775F80
dlsym: cuDriverGetVersion - 00007FF97B776020
dlsym: cuDeviceGetCount - 00007FF97B776816
dlsym: cuDeviceGet - 00007FF97B776810
dlsym: cuDeviceGetAttribute - 00007FF97B776170
dlsym: cuDeviceGetUuid - 00007FF97B776822
dlsym: cuDeviceGetName - 00007FF97B77681C
dlsym: cuCtxCreate_v3 - 00007FF97B776894
dlsym: cuMemGetInfo_v2 - 00007FF97B776996
dlsym: cuCtxDestroy - 00007FF97B7768A6
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-02-21T23:08:26.893+08:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=C:\Windows\system32\nvcuda.dll
[GPU-ecc0382b-7d7c-7b61-8572-a21b10ac9fcd] CUDA totalMem 16375 mb
[GPU-ecc0382b-7d7c-7b61-8572-a21b10ac9fcd] CUDA freeMem 15035 mb
[GPU-ecc0382b-7d7c-7b61-8572-a21b10ac9fcd] Compute Capability 8.9
time=2025-02-21T23:08:26.975+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-ecc0382b-7d7c-7b61-8572-a21b10ac9fcd library=cuda compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4080 SUPER" overhead="265.6 MiB"
time=2025-02-21T23:08:26.976+08:00 level=DEBUG source=amd_windows.go:34 msg="unable to load amdhip64_6.dll, please make sure to upgrade to the latest amd driver: The file cannot be accessed by the system."
releasing cuda driver library
releasing nvml library
time=2025-02-21T23:08:26.977+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-ecc0382b-7d7c-7b61-8572-a21b10ac9fcd library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4080 SUPER" total="16.0 GiB" available="14.7 GiB"
[GIN] 2025/02/21 - 23:18:16 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/21 - 23:18:16 | 200 | 1.0301ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/02/21 - 23:18:31 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/21 - 23:18:31 | 200 | 10.8992ms | 127.0.0.1 | POST "/api/show"
time=2025-02-21T23:18:31.919+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="95.8 GiB" before.free="87.3 GiB" before.free_swap="89.6 GiB" now.total="95.8 GiB" now.free="86.2 GiB" now.free_swap="87.3 GiB"
time=2025-02-21T23:18:31.928+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-ecc0382b-7d7c-7b61-8572-a21b10ac9fcd name="NVIDIA GeForce RTX 4080 SUPER" overhead="265.6 MiB" before.total="16.0 GiB" before.free="14.7 GiB" now.total="16.0 GiB" now.free="14.4 GiB" now.used="1.3 GiB"
releasing nvml library
time=2025-02-21T23:18:31.928+08:00 level=DEBUG source=sched.go:182 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-02-21T23:18:31.946+08:00 level=DEBUG source=sched.go:225 msg="loading first model" model=D:\ollama\models\blobs\sha256-553aa261cfb6856c595c9fefdb5453b98fdef331bf2ca918a5e0a23aa254d022
time=2025-02-21T23:18:31.946+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[14.4 GiB]"
time=2025-02-21T23:18:31.946+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-02-21T23:18:31.946+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-02-21T23:18:31.947+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[14.4 GiB]"
time=2025-02-21T23:18:31.947+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-02-21T23:18:31.947+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-02-21T23:18:31.948+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[14.4 GiB]"
time=2025-02-21T23:18:31.948+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-02-21T23:18:31.948+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-02-21T23:18:31.948+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[14.4 GiB]"
time=2025-02-21T23:18:31.948+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-02-21T23:18:31.948+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-02-21T23:18:31.948+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="95.8 GiB" before.free="86.2 GiB" before.free_swap="87.3 GiB" now.total="95.8 GiB" now.free="86.2 GiB" now.free_swap="87.3 GiB"
time=2025-02-21T23:18:31.956+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-ecc0382b-7d7c-7b61-8572-a21b10ac9fcd name="NVIDIA GeForce RTX 4080 SUPER" overhead="265.6 MiB" before.total="16.0 GiB" before.free="14.4 GiB" now.total="16.0 GiB" now.free="14.4 GiB" now.used="1.3 GiB"
releasing nvml library
time=2025-02-21T23:18:31.956+08:00 level=INFO source=server.go:97 msg="system memory" total="95.8 GiB" free="86.2 GiB" free_swap="87.3 GiB"
time=2025-02-21T23:18:31.956+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[14.4 GiB]"
time=2025-02-21T23:18:31.956+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-02-21T23:18:31.956+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-02-21T23:18:31.957+08:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=24 layers.split="" memory.available="[14.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="28.3 GiB" memory.required.partial="14.4 GiB" memory.required.kv="384.0 MiB" memory.required.allocations="[14.4 GiB]" memory.weights.total="25.0 GiB" memory.weights.repeating="23.5 GiB" memory.weights.nonrepeating="1.5 GiB" memory.graph.full="307.0 MiB" memory.graph.partial="916.1 MiB"
time=2025-02-21T23:18:31.957+08:00 level=DEBUG source=server.go:259 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
time=2025-02-21T23:18:31.963+08:00 level=DEBUG source=server.go:302 msg="adding gpu library" path=C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2025-02-21T23:18:31.963+08:00 level=DEBUG source=server.go:310 msg="adding gpu dependency paths" paths=[C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12]
time=2025-02-21T23:18:31.963+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\Users\zrway\AppData\Local\Programs\Ollama\ollama.exe runner --model D:\ollama\models\blobs\sha256-553aa261cfb6856c595c9fefdb5453b98fdef331bf2ca918a5e0a23aa254d022 --ctx-size 2048 --batch-size 512 --n-gpu-layers 24 --verbose --threads 6 --no-mmap --parallel 1 --port 50841"
time=2025-02-21T23:18:31.963+08:00 level=DEBUG source=server.go:398 msg=subprocess environment="[CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8 CUDA_PATH_V12_8=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8 PATH=C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12;C:\Program Files\NVIDIA\CUDNN\v9.7\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp;;C:\Program Files\Common Files\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\Bandizip\;C:\Program Files\dotnet\;C:\Program Files\Git\cmd;D:\ai\ffmpeg-2024-03-28-git-5d71f97e0e-full_build\bin;C:\Program Files\NVIDIA Corporation\NVIDIA app\NvDLISR;C:\Users\zrway\AppData\Local\Microsoft\WindowsApps\python.exe;C:\Users\zrway\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\Scripts;C:\Program Files\NVIDIA Corporation\Nsight Compute 2025.1.0\;C:\Program Files\MySQL\MySQL Shell 8.0\bin\;C:\Users\zrway\AppData\Local\Microsoft\WindowsApps;C:\Users\zrway\AppData\Local\Programs\Microsoft VS Code\bin;D:\ai\ffmpeg-2024-03-28-git-5d71f97e0e-essentials_build\bin;;C:\Users\zrway\AppData\Local\Programs\Ollama;C:\Users\zrway\.lmstudio\bin;C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12;C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama CUDA_VISIBLE_DEVICES=GPU-ecc0382b-7d7c-7b61-8572-a21b10ac9fcd]"
time=2025-02-21T23:18:32.020+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-02-21T23:18:32.020+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-02-21T23:18:32.020+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-02-21T23:18:32.035+08:00 level=INFO source=runner.go:932 msg="starting go runner"
time=2025-02-21T23:18:32.041+08:00 level=DEBUG source=ggml.go:89 msg="ggml backend load all from path" path=C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2025-02-21T23:18:32.051+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path="C:\Program Files\NVIDIA\CUDNN\v9.7\bin"
time=2025-02-21T23:18:32.051+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin"
time=2025-02-21T23:18:32.051+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp"
time=2025-02-21T23:18:32.051+08:00 level=DEBUG source=ggml.go:89 msg="ggml backend load all from path" path=C:\Users\zrway\AppData\Local\Programs\Ollama
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path="C:\Program Files\Common Files\Oracle\Java\javapath"
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path=C:\Windows\system32
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path=C:\Windows
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path=C:\Windows\System32\Wbem
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path=C:\Windows\System32\WindowsPowerShell\v1.0
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path=C:\Windows\System32\OpenSSH
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path="C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common"
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path="C:\Program Files\Bandizip"
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path="C:\Program Files\dotnet"
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path="C:\Program Files\Git\cmd"
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path=D:\ai\ffmpeg-2024-03-28-git-5d71f97e0e-full_build\bin
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path="C:\Program Files\NVIDIA Corporation\NVIDIA app\NvDLISR"
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path=C:\Users\zrway\AppData\Local\Microsoft\WindowsApps\python.exe
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path=C:\Users\zrway\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\Scripts
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path="C:\Program Files\NVIDIA Corporation\Nsight Compute 2025.1.0"
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path="C:\Program Files\MySQL\MySQL Shell 8.0\bin"
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path=C:\Users\zrway\AppData\Local\Microsoft\WindowsApps
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path="C:\Users\zrway\AppData\Local\Programs\Microsoft VS Code\bin"
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path=D:\ai\ffmpeg-2024-03-28-git-5d71f97e0e-essentials_build\bin
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:83 msg="skipping path which is not part of ollama" path=C:\Users\zrway\.lmstudio\bin
time=2025-02-21T23:18:32.052+08:00 level=DEBUG source=ggml.go:89 msg="ggml backend load all from path" path=C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-sandybridge.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-skylakex.dll
time=2025-02-21T23:18:32.060+08:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(clang)" threads=6
time=2025-02-21T23:18:32.074+08:00 level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:50841"
llama_model_loader: loaded meta data with 25 key-value pairs and 579 tensors from D:\ollama\models\blobs\sha256-553aa261cfb6856c595c9fefdb5453b98fdef331bf2ca918a5e0a23aa254d022 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Checkpoint 887 Merged
llama_model_loader: - kv 3: general.size_label str = 15B
llama_model_loader: - kv 4: qwen2.block_count u32 = 48
llama_model_loader: - kv 5: qwen2.context_length u32 = 131072
llama_model_loader: - kv 6: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 7: qwen2.feed_forward_length u32 = 13824
llama_model_loader: - kv 8: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 9: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 10: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 12: general.file_type u32 = 1
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = deepseek-r1-qwen
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 22: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 23: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 24: general.quantization_version u32 = 2
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type f16: 338 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: control token: 151659 '<|fim_prefix|>' is not marked as EOG
llm_load_vocab: control token: 151656 '<|video_pad|>' is not marked as EOG
llm_load_vocab: control token: 151655 '<|image_pad|>' is not marked as EOG
llm_load_vocab: control token: 151653 '<|vision_end|>' is not marked as EOG
llm_load_vocab: control token: 151652 '<|vision_start|>' is not marked as EOG
llm_load_vocab: control token: 151651 '<|quad_end|>' is not marked as EOG
llm_load_vocab: control token: 151646 '<|begin▁of▁sentence|>' is not marked as EOG
llm_load_vocab: control token: 151644 '<|User|>' is not marked as EOG
llm_load_vocab: control token: 151661 '<|fim_suffix|>' is not marked as EOG
llm_load_vocab: control token: 151660 '<|fim_middle|>' is not marked as EOG
llm_load_vocab: control token: 151654 '<|vision_pad|>' is not marked as EOG
llm_load_vocab: control token: 151650 '<|quad_start|>' is not marked as EOG
llm_load_vocab: control token: 151647 '<|EOT|>' is not marked as EOG
llm_load_vocab: control token: 151643 '<|end▁of▁sentence|>' is not marked as EOG
llm_load_vocab: control token: 151645 '<|Assistant|>' is not marked as EOG
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_layer = 48
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 5
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 13824
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 14B
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 14.77 B
llm_load_print_meta: model size = 27.51 GiB (16.00 BPW)
llm_load_print_meta: general.name = Checkpoint 887 Merged
llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: CPU model buffer size = 28173.21 MiB
load_all_data: no device found for buffer type CPU for async uploads
time=2025-02-21T23:18:32.271+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
time=2025-02-21T23:18:33.022+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.05"
time=2025-02-21T23:18:33.523+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.11"
time=2025-02-21T23:18:33.773+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.13"
time=2025-02-21T23:18:34.023+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.15"
time=2025-02-21T23:18:34.274+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.17"
time=2025-02-21T23:18:34.524+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.18"
time=2025-02-21T23:18:34.774+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.20"
time=2025-02-21T23:18:35.024+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.22"
time=2025-02-21T23:18:35.275+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.24"
time=2025-02-21T23:18:35.525+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.26"
time=2025-02-21T23:18:35.775+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.28"
time=2025-02-21T23:18:36.025+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.30"
time=2025-02-21T23:18:36.275+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.32"
time=2025-02-21T23:18:36.526+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.34"
time=2025-02-21T23:18:36.777+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.36"
time=2025-02-21T23:18:37.028+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.38"
time=2025-02-21T23:18:37.278+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.40"
time=2025-02-21T23:18:37.528+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.42"
time=2025-02-21T23:18:37.778+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.44"
time=2025-02-21T23:18:38.028+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.45"
time=2025-02-21T23:18:38.279+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.47"
time=2025-02-21T23:18:38.530+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.50"
time=2025-02-21T23:18:38.780+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.52"
time=2025-02-21T23:18:39.030+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.54"
time=2025-02-21T23:18:39.280+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.56"
time=2025-02-21T23:18:39.531+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.58"
time=2025-02-21T23:18:39.781+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.59"
time=2025-02-21T23:18:40.032+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.61"
time=2025-02-21T23:18:40.282+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.63"
time=2025-02-21T23:18:40.532+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.65"
time=2025-02-21T23:18:40.783+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.67"
time=2025-02-21T23:18:41.033+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.69"
time=2025-02-21T23:18:41.283+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.71"
time=2025-02-21T23:18:41.534+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.72"
time=2025-02-21T23:18:41.784+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.75"
time=2025-02-21T23:18:42.035+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.77"
time=2025-02-21T23:18:42.285+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.79"
time=2025-02-21T23:18:42.535+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.80"
time=2025-02-21T23:18:42.786+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.82"
time=2025-02-21T23:18:43.036+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.85"
time=2025-02-21T23:18:43.286+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.86"
time=2025-02-21T23:18:43.536+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.88"
time=2025-02-21T23:18:43.787+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.91"
time=2025-02-21T23:18:44.037+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.93"
time=2025-02-21T23:18:44.288+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.95"
time=2025-02-21T23:18:44.538+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.97"
time=2025-02-21T23:18:44.788+08:00 level=DEBUG source=server.go:602 msg="model load progress 0.99"
llama_new_context_with_model: n_seq_max = 1
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
llama_kv_cache_init: layer 0: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 1: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 2: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 3: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 4: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 5: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 6: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 7: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 8: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 9: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 10: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 11: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 12: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 13: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 14: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 15: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 16: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 17: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 18: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 19: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 20: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 21: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 22: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 23: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 24: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 25: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 26: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 27: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 28: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 29: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 30: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 31: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 32: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 33: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 34: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 35: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 36: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 37: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 38: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 39: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 40: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 41: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 42: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 43: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 44: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 45: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 46: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 47: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: CPU KV buffer size = 384.00 MiB
llama_new_context_with_model: KV self size = 384.00 MiB, K (f16): 192.00 MiB, V (f16): 192.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.60 MiB
llama_new_context_with_model: CPU compute buffer size = 307.00 MiB
llama_new_context_with_model: graph nodes = 1686
llama_new_context_with_model: graph splits = 1
time=2025-02-21T23:18:45.039+08:00 level=INFO source=server.go:596 msg="llama runner started in 13.02 seconds"
time=2025-02-21T23:18:45.039+08:00 level=DEBUG source=sched.go:463 msg="finished setting up runner" model=D:\ollama\models\blobs\sha256-553aa261cfb6856c595c9fefdb5453b98fdef331bf2ca918a5e0a23aa254d022
[GIN] 2025/02/21 - 23:18:45 | 200 | 13.1287157s | 127.0.0.1 | POST "/api/generate"
time=2025-02-21T23:18:45.039+08:00 level=DEBUG source=sched.go:467 msg="context for request finished"
time=2025-02-21T23:18:45.039+08:00 level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=D:\ollama\models\blobs\sha256-553aa261cfb6856c595c9fefdb5453b98fdef331bf2ca918a5e0a23aa254d022 duration=5m0s
time=2025-02-21T23:18:45.039+08:00 level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=D:\ollama\models\blobs\sha256-553aa261cfb6856c595c9fefdb5453b98fdef331bf2ca918a5e0a23aa254d022 refCount=0
time=2025-02-21T23:18:54.108+08:00 level=DEBUG source=sched.go:576 msg="evaluating already loaded" model=D:\ollama\models\blobs\sha256-553aa261cfb6856c595c9fefdb5453b98fdef331bf2ca918a5e0a23aa254d022
time=2025-02-21T23:18:54.108+08:00 level=DEBUG source=routes.go:1462 msg="chat request" images=0 prompt=hi
time=2025-02-21T23:18:54.108+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=2 used=0 remaining=2

@rick-github commented on GitHub (Feb 21, 2025):

ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-sandybridge.dll
ggml_backend_load_best: failed to load C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-skylakex.dll

The backends failed to load, but no errors were logged. What's the output of

dir C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama\
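
A PowerShell equivalent of this check, for reference (the path is taken from the log above; adjust it if Ollama is installed elsewhere):

```
# Sketch of the directory check above: list the ggml backend DLLs the
# runner tried to load, so you can confirm they actually exist on disk.
Get-ChildItem 'C:\Users\zrway\AppData\Local\Programs\Ollama\lib\ollama' -Filter 'ggml-*.dll' |
    Select-Object Name, Length, LastWriteTime
```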

@Hsq12138 commented on GitHub (Feb 21, 2025):

![Image](https://github.com/user-attachments/assets/8028e82f-3d40-4425-81e0-7042fca71ca8)

@Kosuri-crypto commented on GitHub (Feb 22, 2025):

I faced the same problem.

I found the following solution:
I added the Ollama lib folder to the `Path` environment variable.

Before (running on CPU, with the error `failed to load ggml-cpu-*`):

E:\Ollama

After (running on GPU, without the error):

E:\Ollama
E:\Ollama\lib\ollama

OS: Windows 11
GPU: NVIDIA GeForce RTX 3060
CPU: Intel i7-12700K
Ollama version: 0.5.11
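
A minimal PowerShell sketch of this workaround, assuming Ollama is installed under `E:\Ollama` as in the comment above (substitute your own install path); it appends the lib folder to the user `Path` persistently, after which Ollama needs to be restarted:

```
# Sketch of the PATH workaround above; the E:\Ollama location is
# illustrative. Appends the Ollama lib folder to the *user* Path so the
# ggml-cpu-*.dll backends can be found, then restart Ollama.
$libPath  = 'E:\Ollama\lib\ollama'
$userPath = [Environment]::GetEnvironmentVariable('Path', 'User')
if ($userPath -notlike "*$libPath*") {
    [Environment]::SetEnvironmentVariable('Path', "$userPath;$libPath", 'User')
}
```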


@Hsq12138 commented on GitHub (Feb 23, 2025):

> I faced the same problem.
>
> I found the following solution: I added the Ollama lib folder to the `Path` environment variable.
>
> Before (running on CPU, with the error `failed to load ggml-cpu-*`):
>
> E:\Ollama
>
> After (running on GPU, without the error):
>
> E:\Ollama
> E:\Ollama\lib\ollama
>
> OS: Windows 11 GPU: NVIDIA GeForce RTX 3060 CPU: Intel i7-12700K Ollama version: 0.5.11

Thanks for the advice, but this does not work for me.

@jmorganca commented on GitHub (Feb 25, 2025):

Does [0.5.12 (just released)](https://github.com/ollama/ollama/releases/tag/v0.5.12) fix this for you?

@Hsq12138 commented on GitHub (Feb 25, 2025):

> Does 0.5.12 (just released) fix this for you?

No.

@RadEdje commented on GitHub (Mar 12, 2025):

> > I found the following solution: I added the Ollama lib folder to the `Path` environment variable. [...]
>
> Thanks for the advice, but this does not work for me.

Thank you, THIS WORKED! I wonder why I have to do this now? I did not have to before.

@dhiltgen commented on GitHub (Jul 4, 2025):

We've been moving things around over the past few releases, but I believe everything should be stable now. I think this should be resolved in the latest version (0.9.5), but if you're still having any problems or have to set custom PATH settings, please let me know and I'll reopen the issue so we can get it fixed.
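
For anyone checking after an upgrade: `ollama ps` reports whether a loaded model is running on GPU or CPU, so a quick verification (the model name below is just an example) looks like:

```
# Load any model, then inspect the PROCESSOR column of `ollama ps`;
# "100% GPU" means the model is fully offloaded to the GPU.
ollama run llama3.2 "hi"
ollama ps
```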


Reference: github-starred/ollama#68096