Abnormal CPU/GPU Invocation in Ollama v0.6.6 #6832

Closed
opened 2025-11-12 13:46:27 -06:00 by GiteaMirror · 3 comments
Owner

Originally created by @minghua-123 on GitHub (Apr 24, 2025).

What is the issue?

When I ask the model the same question, the new version of Ollama uses the CPU instead of the GPU, and setting environment variables does not force it back onto the GPU. The previous version behaved normally.

Image: https://github.com/user-attachments/assets/51e8040b-9be2-4ce1-8a8e-4380ca439e27

Image: https://github.com/user-attachments/assets/16de13e2-fc3e-4f12-862d-329ca355a2cd

Version 0.6.6

PS C:\Users\wmh21> ollama ps
NAME              ID              SIZE      PROCESSOR    UNTIL
deepseek-r1:8b    28f8fd6cdc67    5.4 GB    100% CPU     4 minutes from now

Version 0.6.5

PS C:\Users\wmh21> ollama ps
NAME              ID              SIZE      PROCESSOR    UNTIL
deepseek-r1:8b    28f8fd6cdc67    5.8 GB    100% GPU     4 minutes from now

Relevant log output

PS C:\Users\wmh21> ollama serve
2025/04/24 11:06:48 routes.go:1232: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:E:\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-04-24T11:06:48.470+08:00 level=INFO source=images.go:458 msg="total blobs: 57"
time=2025-04-24T11:06:48.473+08:00 level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-24T11:06:48.475+08:00 level=INFO source=routes.go:1299 msg="Listening on 127.0.0.1:11434 (version 0.6.6)"
time=2025-04-24T11:06:48.475+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-24T11:06:48.475+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-04-24T11:06:48.475+08:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-04-24T11:06:48.475+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=24 efficiency=16 threads=32
time=2025-04-24T11:06:49.177+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 library=cuda variant=v12 compute=8.9 driver=12.9 name="NVIDIA GeForce RTX 4060 Laptop GPU" total="8.0 GiB" available="6.4 GiB"
time=2025-04-24T11:06:54.396+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T11:06:54.427+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T11:06:54.442+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T11:06:54.443+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-04-24T11:06:54.443+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.key_length default=128
time=2025-04-24T11:06:54.443+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.value_length default=128
time=2025-04-24T11:06:54.444+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-04-24T11:06:54.444+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.key_length default=128
time=2025-04-24T11:06:54.444+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.value_length default=128
time=2025-04-24T11:06:54.444+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-04-24T11:06:54.444+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.key_length default=128
time=2025-04-24T11:06:54.444+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.value_length default=128
time=2025-04-24T11:06:54.444+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-04-24T11:06:54.444+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.key_length default=128
time=2025-04-24T11:06:54.444+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.value_length default=128
time=2025-04-24T11:06:54.460+08:00 level=INFO source=server.go:105 msg="system memory" total="31.7 GiB" free="19.6 GiB" free_swap="26.9 GiB"
time=2025-04-24T11:06:54.460+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-04-24T11:06:54.460+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.key_length default=128
time=2025-04-24T11:06:54.460+08:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.value_length default=128
time=2025-04-24T11:06:54.461+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[829.6 MiB]" memory.gpu_overhead="0 B" memory.required.full="4.8 GiB" memory.required.partial="0 B" memory.required.kv="256.0 MiB" memory.required.allocations="[0 B]" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="677.5 MiB"
llama_model_loader: loaded meta data with 28 key-value pairs and 292 tensors from E:\.ollama\models\blobs\sha256-6340dc3229b0d08ea9cc49b75d4098702983e17b4c096d57afbbf2ffc813f2be (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Llama 8B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv   4:                         general.size_label str              = 8B
llama_model_loader: - kv   5:                          llama.block_count u32              = 32
llama_model_loader: - kv   6:                       llama.context_length u32              = 131072
llama_model_loader: - kv   7:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   8:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   9:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  10:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  12:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  15:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  16:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  17:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  18:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  20:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  21:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  22:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  27:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.58 GiB (4.89 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.03 B
print_info: general.name     = DeepSeek R1 Distill Llama 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token        = 128001 '<|end▁of▁sentence|>'
print_info: EOT token        = 128001 '<|end▁of▁sentence|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end▁of▁sentence|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-04-24T11:06:54.679+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\\Users\\wmh21\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model E:\\.ollama\\models\\blobs\\sha256-6340dc3229b0d08ea9cc49b75d4098702983e17b4c096d57afbbf2ffc813f2be --ctx-size 2048 --batch-size 512 --threads 8 --no-mmap --parallel 1 --port 4046"
time=2025-04-24T11:06:54.692+08:00 level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-24T11:06:54.692+08:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-24T11:06:54.693+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-24T11:06:54.726+08:00 level=INFO source=runner.go:853 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\wmh21\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2025-04-24T11:06:57.843+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-04-24T11:06:57.845+08:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:4046"
llama_model_loader: loaded meta data with 28 key-value pairs and 292 tensors from E:\.ollama\models\blobs\sha256-6340dc3229b0d08ea9cc49b75d4098702983e17b4c096d57afbbf2ffc813f2be (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Llama 8B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv   4:                         general.size_label str              = 8B
llama_model_loader: - kv   5:                          llama.block_count u32              = 32
llama_model_loader: - kv   6:                       llama.context_length u32              = 131072
llama_model_loader: - kv   7:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   8:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   9:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  10:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  12:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  15:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  16:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  17:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  18:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  20:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  21:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  22:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  27:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.58 GiB (4.89 BPW)
time=2025-04-24T11:06:57.949+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 14336
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 8B
print_info: model params     = 8.03 B
print_info: general.name     = DeepSeek R1 Distill Llama 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token        = 128001 '<|end▁of▁sentence|>'
print_info: EOT token        = 128001 '<|end▁of▁sentence|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end▁of▁sentence|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors:          CPU model buffer size =  4685.30 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 2048
llama_context: n_ctx_per_seq = 2048
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     0.50 MiB
init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1
init:        CPU KV buffer size =   256.00 MiB
llama_context: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_context:        CPU compute buffer size =   258.50 MiB
llama_context: graph nodes  = 1094
llama_context: graph splits = 1

OS

Windows

GPU

Intel, Nvidia

CPU

Intel

Ollama version

ollama version is 0.6.6

GiteaMirror added the bug, needs more info labels 2025-11-12 13:46:27 -06:00
Author
Owner

@rick-github commented on GitHub (Apr 24, 2025):

time=2025-04-24T11:06:54.461+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1
 layers.model=33 layers.offload=0 layers.split="" memory.available="[829.6 MiB]" memory.gpu_overhead="0 B"
 memory.required.full="4.8 GiB" memory.required.partial="0 B" memory.required.kv="256.0 MiB"
 memory.required.allocations="[0 B]" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB"
 memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="677.5 MiB"

At the time the model was loaded, there was only 829MB free on the GPU, leaving not enough room to load any model layers.
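
For anyone hitting the same symptom, a quick way to see what is holding that VRAM right before the model loads is to check the driver's own report (a sketch, assuming the NVIDIA tools are on the PATH; per-process memory may show as N/A under WDDM on Windows, but the used/free totals should roughly match what the scheduler sees):

```
# Overall VRAM picture plus the processes currently using the GPU
nvidia-smi

# Compact view of just the memory numbers
nvidia-smi --query-gpu=memory.total,memory.used,memory.free --format=csv
```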

Author
Owner

@minghua-123 commented on GitHub (Apr 24, 2025):

> time=2025-04-24T11:06:54.461+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1
>  layers.model=33 layers.offload=0 layers.split="" memory.available="[829.6 MiB]" memory.gpu_overhead="0 B"
>  memory.required.full="4.8 GiB" memory.required.partial="0 B" memory.required.kv="256.0 MiB"
>  memory.required.allocations="[0 B]" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB"
>  memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="677.5 MiB"
>
> At the time the model was loaded, there was only 829MB free on the GPU, leaving not enough room to load any model layers.

After multiple verifications, I've identified significant resource utilization anomalies and performance degradation in ollama v0.6.6. Specifically, during model inference, system monitoring shows GPU utilization consistently near 0% (confirmed via nvidia-smi/riva tools) while CPU utilization abnormally increases by ~40%. Concurrently, ollama ps reports 100% CPU, accompanied by severe response latency. Rolling back to v0.6.5 restores normal behavior: stable ~98% GPU utilization, negligible CPU impact, and responsive performance.

This issue persists across:

- Multiple clean installations (including system reboots)
- Full application environment consistency

Recommended investigation priorities for the development team:

- GPU scheduling module changes in v0.6.6
- Abnormalities in process resource monitoring logic
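
For reference, one way to watch utilization and memory second by second while reproducing the prompt (a sketch, assuming nvidia-smi is available) is:

```
# Poll GPU utilization and memory once per second while the prompt runs
nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used,memory.free --format=csv -l 1
```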

Author
Owner

@rick-github commented on GitHub (Apr 24, 2025):

This is consistent with the model being loaded in RAM instead of VRAM. More debugging information may help, set OLLAMA_DEBUG=1 in the server environment and redo the tests.

Also note that OLLAMA_GPU_LAYER is not an ollama configuration variable.
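
A minimal way to capture that debug output on Windows, assuming the tray app is stopped first so it does not hold port 11434, is to set the variable for a single PowerShell session and start the server there:

```
# Enable debug logging for this session only, then start the server
$env:OLLAMA_DEBUG = "1"
ollama serve

# In a second terminal, reproduce the issue and check where the model landed
ollama run deepseek-r1:8b "hello"
ollama ps
```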

Reference: github-starred/ollama-ollama#6832