[GH-ISSUE #11039] 0% GPU utilization #33041

Closed
opened 2026-04-22 15:12:40 -05:00 by GiteaMirror · 14 comments
Owner

Originally created by @udiram on GitHub (Jun 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11039

What is the issue?

Running Qwen3:4B on an NVIDIA RTX 4070 Laptop GPU (8GB of VRAM), 16GB of RAM, Intel i9-13900H shows 0% GPU utilization. The logs seem to indicate that the model is loaded onto the GPU and that the GPU is being correctly identified.

ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070 Laptop GPU, compute capability 8.9, VMM: yes

llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4070 Laptop GPU) - 7056 MiB free

It takes on the order of 5s to generate a response (after thinking). Is there any way to speed this up, or to ensure that the GPU is being used to generate tokens?

Thanks!
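
For reference, a generic way to confirm whether token generation actually runs on the GPU is to watch utilization while a request is in flight; neither command below is specific to this issue:

```shell
# Poll GPU utilization once per second while a prompt is running
# (nvidia-smi ships with the NVIDIA driver on Windows as well).
nvidia-smi dmon -s u -d 1

# Ollama itself reports where the loaded model resides (e.g. "100% GPU"):
ollama ps
```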

Relevant log output

time=2025-06-10T14:36:18.798-05:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\udbha\.ollama\models\blobs\sha256-163553aea1b1de62de7c5eb2ef5afb756b4b3133308d9ae7e42e951d8d696ef5 gpu=GPU-72d4f421-7574-5561-561c-41c5df8cce38 parallel=2 available=7398752256 required="4.9 GiB"
time=2025-06-10T14:36:18.823-05:00 level=INFO source=server.go:135 msg="system memory" total="15.6 GiB" free="4.4 GiB" free_swap="8.3 GiB"
time=2025-06-10T14:36:18.823-05:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=37 layers.offload=37 layers.split="" memory.available="[6.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="4.9 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.1 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="2.4 GiB" memory.weights.repeating="2.1 GiB" memory.weights.nonrepeating="304.3 MiB" memory.graph.full="768.0 MiB" memory.graph.partial="768.0 MiB"
llama_model_loader: loaded meta data with 27 key-value pairs and 398 tensors from C:\Users\udbha\.ollama\models\blobs\sha256-163553aea1b1de62de7c5eb2ef5afb756b4b3133308d9ae7e42e951d8d696ef5 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 4B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 4B
llama_model_loader: - kv   5:                          qwen3.block_count u32              = 36
llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 2560
llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 9728
llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 32
llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - kv  26:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  145 tensors
llama_model_loader: - type  f16:   36 tensors
llama_model_loader: - type q4_K:  198 tensors
llama_model_loader: - type q6_K:   19 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 2.44 GiB (5.20 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 4.02 B
print_info: general.name     = Qwen3 4B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-06-10T14:36:18.982-05:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\udbha\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\udbha\\.ollama\\models\\blobs\\sha256-163553aea1b1de62de7c5eb2ef5afb756b4b3133308d9ae7e42e951d8d696ef5 --ctx-size 8192 --batch-size 512 --n-gpu-layers 37 --threads 6 --no-mmap --parallel 2 --port 55647"
time=2025-06-10T14:36:18.986-05:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-10T14:36:18.986-05:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-10T14:36:18.986-05:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-10T14:36:19.026-05:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\udbha\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4070 Laptop GPU, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\udbha\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-06-10T14:36:19.170-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-06-10T14:36:19.171-05:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:55647"
time=2025-06-10T14:36:19.237-05:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4070 Laptop GPU) - 7056 MiB free
llama_model_loader: loaded meta data with 27 key-value pairs and 398 tensors from C:\Users\udbha\.ollama\models\blobs\sha256-163553aea1b1de62de7c5eb2ef5afb756b4b3133308d9ae7e42e951d8d696ef5 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 4B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 4B
llama_model_loader: - kv   5:                          qwen3.block_count u32              = 36
llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 2560
llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 9728
llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 32
llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - kv  26:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  145 tensors
llama_model_loader: - type  f16:   36 tensors
llama_model_loader: - type q4_K:  198 tensors
llama_model_loader: - type q6_K:   19 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 2.44 GiB (5.20 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 40960
print_info: n_embd           = 2560
print_info: n_layer          = 36
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 9728
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 40960
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 4B
print_info: model params     = 4.02 B
print_info: general.name     = Qwen3 4B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 36 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 37/37 layers to GPU
load_tensors:        CUDA0 model buffer size =  2493.69 MiB
load_tensors:          CPU model buffer size =   304.28 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 2
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 1024
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     1.18 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 36, can_shift = 1, padding = 32
llama_kv_cache_unified:      CUDA0 KV buffer size =  1152.00 MiB
llama_kv_cache_unified: KV self size  = 1152.00 MiB, K (f16):  576.00 MiB, V (f16):  576.00 MiB
llama_context:      CUDA0 compute buffer size =   554.00 MiB
llama_context:  CUDA_Host compute buffer size =    21.01 MiB
llama_context: graph nodes  = 1374
llama_context: graph splits = 2

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.9.0

GiteaMirror added the bug label 2026-04-22 15:12:40 -05:00
Author
Owner

@rick-github commented on GitHub (Jun 10, 2025):

It certainly looks like it's running on the GPU. Could you include the rest of the log?

Author
Owner

@pavanrajkg04 commented on GitHub (Jun 11, 2025):

The logs suggest that the Qwen3:4B model is being loaded onto the GPU (CUDA0), and the available VRAM (7056 MiB) seems sufficient for the model, which requires approximately 4.9 GiB of memory. However, you're experiencing 0% GPU utilization, and generation takes around 5 seconds per response.

Possible causes:

  1. You’re using --ctx-size 8192 and --batch-size 512.
    • While large context sizes are useful, they can reduce throughput because the model has more work to do per token.
    • If you don’t need such a long context, reducing it could improve speed and GPU usage.
  2. The log shows flash_attn = 0. Flash Attention improves attention computation efficiency significantly on supported GPUs (like RTX 40xx series). Enabling it might help.

Solution:

  1. Enable flash attention with the command below (a Windows equivalent is sketched just after this list):
    export OLLAMA_FLASH_ATTENTION=true
  2. Try reducing --ctx-size from 8192 to 4096 or even 2048 unless you really need long contexts.
  3. Try increasing or decreasing the batch size in small increments to find an optimal value.
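
On Windows (the reporter's OS), the export above won't work as written; a rough PowerShell equivalent, assuming ollama serve is then started from the same shell, would be:

```shell
# PowerShell: set the variable for this session, then start the server.
$env:OLLAMA_FLASH_ATTENTION = "1"
ollama serve

# Or persist it for future sessions (takes effect in new shells only):
setx OLLAMA_FLASH_ATTENTION 1
```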

IMPORTANT: CHECK THAT OLLAMA IS UPDATED

Author
Owner

@rick-github commented on GitHub (Jun 11, 2025):

@pavanrajkg04 Please don't give bad advice.

Author
Owner

@pavanrajkg04 commented on GitHub (Jun 11, 2025):

> @pavanrajkg04 Please don't give bad advice.

ok

Author
Owner

@udiram commented on GitHub (Jun 11, 2025):

@rick-github is there a specific log output/file that would be helpful to see?

Author
Owner

@rick-github commented on GitHub (Jun 11, 2025):

Everything after where the first log stopped.

Author
Owner

@udiram commented on GitHub (Jun 12, 2025):

Hey! Here's an entire run, from starting ollama serve through to the first API call.

Hope this is helpful!

ollama serve
time=2025-06-12T09:29:10.336-05:00 level=INFO source=routes.go:1234 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\udbha\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-06-12T09:29:10.432-05:00 level=INFO source=images.go:479 msg="total blobs: 40"
time=2025-06-12T09:29:10.433-05:00 level=INFO source=images.go:486 msg="total unused blobs removed: 0"
time=2025-06-12T09:29:10.437-05:00 level=INFO source=routes.go:1287 msg="Listening on 127.0.0.1:11434 (version 0.9.0)"
time=2025-06-12T09:29:10.440-05:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-06-12T09:29:10.441-05:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-06-12T09:29:10.441-05:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-06-12T09:29:10.441-05:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=14 efficiency=8 threads=20
time=2025-06-12T09:29:10.641-05:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-72d4f421-7574-5561-561c-41c5df8cce38 library=cuda compute=8.9 driver=12.7 name="NVIDIA GeForce RTX 4070 Laptop GPU" overhead="511.4 MiB"
time=2025-06-12T09:29:10.645-05:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-72d4f421-7574-5561-561c-41c5df8cce38 library=cuda variant=v12 compute=8.9 driver=12.7 name="NVIDIA GeForce RTX 4070 Laptop GPU" total="8.0 GiB" available="6.9 GiB"
time=2025-06-12T09:32:05.828-05:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\udbha\.ollama\models\blobs\sha256-163553aea1b1de62de7c5eb2ef5afb756b4b3133308d9ae7e42e951d8d696ef5 gpu=GPU-72d4f421-7574-5561-561c-41c5df8cce38 parallel=2 available=7311777792 required="4.9 GiB"
time=2025-06-12T09:32:05.857-05:00 level=INFO source=server.go:135 msg="system memory" total="15.6 GiB" free="2.2 GiB" free_swap="9.7 GiB"
time=2025-06-12T09:32:05.858-05:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=37 layers.offload=37 layers.split="" memory.available="[6.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="4.9 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.1 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="2.4 GiB" memory.weights.repeating="2.1 GiB" memory.weights.nonrepeating="304.3 MiB" memory.graph.full="768.0 MiB" memory.graph.partial="768.0 MiB"
llama_model_loader: loaded meta data with 27 key-value pairs and 398 tensors from C:\Users\udbha\.ollama\models\blobs\sha256-163553aea1b1de62de7c5eb2ef5afb756b4b3133308d9ae7e42e951d8d696ef5 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen3
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen3 4B
llama_model_loader: - kv 3: general.basename str = Qwen3
llama_model_loader: - kv 4: general.size_label str = 4B
llama_model_loader: - kv 5: qwen3.block_count u32 = 36
llama_model_loader: - kv 6: qwen3.context_length u32 = 40960
llama_model_loader: - kv 7: qwen3.embedding_length u32 = 2560
llama_model_loader: - kv 8: qwen3.feed_forward_length u32 = 9728
llama_model_loader: - kv 9: qwen3.attention.head_count u32 = 32
llama_model_loader: - kv 10: qwen3.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: qwen3.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: qwen3.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 13: qwen3.attention.key_length u32 = 128
llama_model_loader: - kv 14: qwen3.attention.value_length u32 = 128
llama_model_loader: - kv 15: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 16: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - kv 26: general.file_type u32 = 15
llama_model_loader: - type f32: 145 tensors
llama_model_loader: - type f16: 36 tensors
llama_model_loader: - type q4_K: 198 tensors
llama_model_loader: - type q6_K: 19 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 2.44 GiB (5.20 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen3
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 4.02 B
print_info: general.name = Qwen3 4B
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-06-12T09:32:06.041-05:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\udbha\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\udbha\\.ollama\\models\\blobs\\sha256-163553aea1b1de62de7c5eb2ef5afb756b4b3133308d9ae7e42e951d8d696ef5 --ctx-size 8192 --batch-size 512 --n-gpu-layers 37 --threads 6 --no-mmap --parallel 2 --port 63306"
time=2025-06-12T09:32:06.046-05:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-12T09:32:06.046-05:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-12T09:32:06.050-05:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-12T09:32:06.085-05:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\udbha\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070 Laptop GPU, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\udbha\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-06-12T09:32:08.138-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-06-12T09:32:08.138-05:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:63306"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4070 Laptop GPU) - 7056 MiB free
time=2025-06-12T09:32:08.307-05:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 27 key-value pairs and 398 tensors from C:\Users\udbha\.ollama\models\blobs\sha256-163553aea1b1de62de7c5eb2ef5afb756b4b3133308d9ae7e42e951d8d696ef5 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen3
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen3 4B
llama_model_loader: - kv 3: general.basename str = Qwen3
llama_model_loader: - kv 4: general.size_label str = 4B
llama_model_loader: - kv 5: qwen3.block_count u32 = 36
llama_model_loader: - kv 6: qwen3.context_length u32 = 40960
llama_model_loader: - kv 7: qwen3.embedding_length u32 = 2560
llama_model_loader: - kv 8: qwen3.feed_forward_length u32 = 9728
llama_model_loader: - kv 9: qwen3.attention.head_count u32 = 32
llama_model_loader: - kv 10: qwen3.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: qwen3.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: qwen3.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 13: qwen3.attention.key_length u32 = 128
llama_model_loader: - kv 14: qwen3.attention.value_length u32 = 128
llama_model_loader: - kv 15: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 16: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - kv 26: general.file_type u32 = 15
llama_model_loader: - type f32: 145 tensors
llama_model_loader: - type f16: 36 tensors
llama_model_loader: - type q4_K: 198 tensors
llama_model_loader: - type q6_K: 19 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 2.44 GiB (5.20 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen3
print_info: vocab_only = 0
print_info: n_ctx_train = 40960
print_info: n_embd = 2560
print_info: n_layer = 36
print_info: n_head = 32
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 4
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 9728
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 40960
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 4B
print_info: model params = 4.02 B
print_info: general.name = Qwen3 4B
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 36 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 37/37 layers to GPU
load_tensors: CPU model buffer size = 304.28 MiB
load_tensors: CUDA0 model buffer size = 2493.69 MiB
llama_context: constructing llama_context
llama_context: n_seq_max = 2
llama_context: n_ctx = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 1024
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 1000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
llama_context: CUDA_Host output buffer size = 1.18 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 36, can_shift = 1, padding = 32
llama_kv_cache_unified: CUDA0 KV buffer size = 1152.00 MiB
llama_kv_cache_unified: KV self size = 1152.00 MiB, K (f16): 576.00 MiB, V (f16): 576.00 MiB
llama_context: CUDA0 compute buffer size = 554.00 MiB
llama_context: CUDA_Host compute buffer size = 21.01 MiB
llama_context: graph nodes = 1374
llama_context: graph splits = 2
time=2025-06-12T09:32:11.815-05:00 level=INFO source=server.go:630 msg="llama runner started in 5.77 seconds"
[GIN] 2025/06/12 - 09:32:35 | 200 | 29.3201453s | 127.0.0.1 | POST "/api/generate"
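
The final line above shows the first /api/generate call taking 29.3 s end to end, which folds in prompt evaluation as well as generation. To separate load time from raw generation speed, the non-streaming /api/generate response carries timing counters from which tokens per second can be computed; a minimal sketch (standard Ollama API fields, model name taken from this issue):

```shell
# Ask for a completion without streaming; the JSON response reports
# eval_count (tokens generated) and eval_duration (nanoseconds).
curl -s http://127.0.0.1:11434/api/generate \
  -d '{"model": "qwen3:4b", "prompt": "Why is the sky blue?", "stream": false}' \
  | python -c "import json,sys; r=json.load(sys.stdin); print(round(r['eval_count'] / r['eval_duration'] * 1e9, 1), 'tokens/s')"
```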

(mmap = false) load_tensors: offloading 36 repeating layers to GPU load_tensors: offloading output layer to GPU load_tensors: offloaded 37/37 layers to GPU load_tensors: CPU model buffer size = 304.28 MiB load_tensors: CUDA0 model buffer size = 2493.69 MiB llama_context: constructing llama_context llama_context: n_seq_max = 2 llama_context: n_ctx = 8192 llama_context: n_ctx_per_seq = 4096 llama_context: n_batch = 1024 llama_context: n_ubatch = 512 llama_context: causal_attn = 1 llama_context: flash_attn = 0 llama_context: freq_base = 1000000.0 llama_context: freq_scale = 1 llama_context: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized llama_context: CUDA_Host output buffer size = 1.18 MiB llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 36, can_shift = 1, padding = 32 llama_kv_cache_unified: CUDA0 KV buffer size = 1152.00 MiB llama_kv_cache_unified: KV self size = 1152.00 MiB, K (f16): 576.00 MiB, V (f16): 576.00 MiB llama_context: CUDA0 compute buffer size = 554.00 MiB llama_context: CUDA_Host compute buffer size = 21.01 MiB llama_context: graph nodes = 1374 llama_context: graph splits = 2 time=2025-06-12T09:32:11.815-05:00 level=INFO source=server.go:630 msg="llama runner started in 5.77 seconds" [GIN] 2025/06/12 - 09:32:35 | 200 | 29.3201453s | 127.0.0.1 | POST "/api/generate"
Author
Owner

@rick-github commented on GitHub (Jun 12, 2025):

time=2025-06-12T09:32:05.858-05:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1
 layers.model=37 layers.offload=37 layers.split="" memory.available="[6.8 GiB]" memory.gpu_overhead="0 B"
 memory.required.full="4.9 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.1 GiB"
 memory.required.allocations="[4.9 GiB]" memory.weights.total="2.4 GiB" memory.weights.repeating="2.1 GiB"
 memory.weights.nonrepeating="304.3 MiB" memory.graph.full="768.0 MiB" memory.graph.partial="768.0 MiB"

The server estimated that it could offload all 37 layers, using 4.9G.
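For reference, the estimate roughly matches the components in the log above: weights (2.4 GiB) + KV cache (1.1 GiB) + compute graph (768 MiB) comes to about 4.25 GiB, with the remaining ~0.65 GiB presumably covering CUDA context and allocation overhead:

2.4 GiB + 1.1 GiB + 0.75 GiB ≈ 4.25 GiB → reported as "4.9 GiB" required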

load_tensors: offloaded 37/37 layers to GPU

The runner indicates it offloaded all layers to the GPU.

If everything is running on the GPU and the responses are slow, that might indicate some throttling on the GPU. What does the following show when you are running inference:

nvidia-smi -q
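For a continuous view while a prompt is running, a narrower query should also work; these are standard nvidia-smi query fields, sampled once per second:

nvidia-smi --query-gpu=utilization.gpu,power.draw,clocks.sm,temperature.gpu --format=csv -l 1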
@udiram commented on GitHub (Jun 12, 2025):

==============NVSMI LOG==============

Timestamp : Thu Jun 12 10:19:25 2025
Driver Version : 566.14
CUDA Version : 12.7

Attached GPUs : 1
GPU 00000000:01:00.0
Product Name : NVIDIA GeForce RTX 4070 Laptop GPU
Product Brand : GeForce
Product Architecture : Ada Lovelace
Display Mode : Disabled
Display Active : Disabled
Persistence Mode : N/A
Addressing Mode : N/A
MIG Mode
Current : N/A
Pending : N/A
Accounting Mode : Disabled
Accounting Mode Buffer Size : 4000
Driver Model
Current : WDDM
Pending : WDDM
Serial Number : N/A
GPU UUID : GPU-72d4f421-7574-5561-561c-41c5df8cce38
Minor Number : N/A
VBIOS Version : 95.06.15.00.f7
MultiGPU Board : No
Board ID : 0x100
Board Part Number : N/A
GPU Part Number : 2860-775-A1
FRU Part Number : N/A
Platform Info
RACK Serial Number : N/A
Chassis Physical Slot Number : N/A
Compute Slot Index : N/A
Node Index : N/A
Peer Type : N/A
Module Id : 1
Inforom Version
Image Version : G002.0000.00.03
OEM Object : 2.0
ECC Object : N/A
Power Management Object : N/A
Inforom BBX Object Flush
Latest Timestamp : N/A
Latest Duration : N/A
GPU Operation Mode
Current : N/A
Pending : N/A
GPU C2C Mode : N/A
GPU Virtualization Mode
Virtualization Mode : None
Host VGPU Mode : N/A
vGPU Heterogeneous Mode : N/A
GPU Reset Status
Reset Required : No
Drain and Reset Recommended : No
GPU Recovery Action : None
GSP Firmware Version : N/A
IBMNPU
Relaxed Ordering Mode : N/A
PCI
Bus : 0x01
Device : 0x00
Domain : 0x0000
Device Id : 0x286010DE
Bus Id : 00000000:01:00.0
Sub System Id : 0x14731043
GPU Link Info
PCIe Generation
Max : 4
Current : 4
Device Current : 4
Device Max : 4
Host Max : 5
Link Width
Max : 8x
Current : 8x
Bridge Chip
Type : N/A
Firmware : N/A
Replays Since Reset : 0
Replay Number Rollovers : 0
Tx Throughput : 624050 KB/s
Rx Throughput : 10950 KB/s
Atomic Caps Outbound : N/A
Atomic Caps Inbound : N/A
Fan Speed : N/A
Performance State : P0
Clocks Event Reasons
Idle : Not Active
Applications Clocks Setting : Not Active
SW Power Cap : Not Active
HW Slowdown : Not Active
HW Thermal Slowdown : Not Active
HW Power Brake Slowdown : Not Active
Sync Boost : Not Active
SW Thermal Slowdown : Not Active
Display Clock Setting : Not Active
Sparse Operation Mode : N/A
FB Memory Usage
Total : 8188 MiB
Reserved : 240 MiB
Used : 4974 MiB
Free : 2975 MiB
BAR1 Memory Usage
Total : 8192 MiB
Used : 8164 MiB
Free : 28 MiB
Conf Compute Protected Memory Usage
Total : N/A
Used : N/A
Free : N/A
Compute Mode : Default
Utilization
Gpu : 95 %
Memory : 95 %
Encoder : 0 %
Decoder : 0 %
JPEG : 0 %
OFA : 0 %
Encoder Stats
Active Sessions : 0
Average FPS : 0
Average Latency : 0
FBC Stats
Active Sessions : 0
Average FPS : 0
Average Latency : 0
ECC Mode
Current : N/A
Pending : N/A
ECC Errors
Volatile
SRAM Correctable : N/A
SRAM Uncorrectable Parity : N/A
SRAM Uncorrectable SEC-DED : N/A
DRAM Correctable : N/A
DRAM Uncorrectable : N/A
Aggregate
SRAM Correctable : N/A
SRAM Uncorrectable Parity : N/A
SRAM Uncorrectable SEC-DED : N/A
DRAM Correctable : N/A
DRAM Uncorrectable : N/A
SRAM Threshold Exceeded : N/A
Aggregate Uncorrectable SRAM Sources
SRAM L2 : N/A
SRAM SM : N/A
SRAM Microcontroller : N/A
SRAM PCIE : N/A
SRAM Other : N/A
Retired Pages
Single Bit ECC : N/A
Double Bit ECC : N/A
Pending Page Blacklist : N/A
Remapped Rows
Correctable Error : 0
Uncorrectable Error : 0
Pending : No
Remapping Failure Occurred : No
Bank Remap Availability Histogram
Max : 64 bank(s)
High : 0 bank(s)
Partial : 0 bank(s)
Low : 0 bank(s)
None : 0 bank(s)
Temperature
GPU Current Temp : 59 C
GPU T.Limit Temp : 28 C
GPU Shutdown T.Limit Temp : -5 C
GPU Slowdown T.Limit Temp : -2 C
GPU Max Operating T.Limit Temp : 0 C
GPU Target Temperature : 87 C
Memory Current Temp : N/A
Memory Max Operating T.Limit Temp : N/A
GPU Power Readings
Power Draw : 25.06 W
Current Power Limit : 127.03 W
Requested Power Limit : N/A
Default Power Limit : 80.00 W
Min Power Limit : 5.00 W
Max Power Limit : 140.00 W
GPU Memory Power Readings
Power Draw : N/A
Module Power Readings
Power Draw : N/A
Current Power Limit : N/A
Requested Power Limit : N/A
Default Power Limit : N/A
Min Power Limit : N/A
Max Power Limit : N/A
Clocks
Graphics : 2490 MHz
SM : 2490 MHz
Memory : 8100 MHz
Video : 2115 MHz
Applications Clocks
Graphics : N/A
Memory : N/A
Default Applications Clocks
Graphics : N/A
Memory : N/A
Deferred Clocks
Memory : N/A
Max Clocks
Graphics : 3105 MHz
SM : 3105 MHz
Memory : 8001 MHz
Video : 2415 MHz
Max Customer Boost Clocks
Graphics : N/A
Clock Policy
Auto Boost : N/A
Auto Boost Default : N/A
Voltage
Graphics : 985.000 mV
Fabric
State : N/A
Status : N/A
CliqueId : N/A
ClusterUUID : N/A
Health
Bandwidth : N/A
Processes
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 508
Type : C+G
Name : C:\Program Files\WindowsApps\Microsoft.6365217CE6EB4_102.2504.16004.0_x64__8wekyb3d8bbwe\MicrosoftSecurityApp\MicrosoftSecurityApp.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 3648
Type : C+G
Name : C:\Program Files\WindowsApps\Microsoft.YourPhone_1.25042.96.0_x64__8wekyb3d8bbwe\PhoneExperienceHost.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 4628
Type : C+G
Name : C:\Program Files (x86)\Microsoft\EdgeWebView\Application\137.0.3296.68\msedgewebview2.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 8452
Type : C+G
Name : C:\Windows\SystemApps\Microsoft.Windows.StartMenuExperienceHost_cw5n1h2txyewy\StartMenuExperienceHost.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 10276
Type : C+G
Name : C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 10732
Type : C+G
Name : C:\Program Files (x86)\Epic Games\Launcher\Portal\Binaries\Win64\EpicGamesLauncher.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 10804
Type : C+G
Name : C:\Program Files\Dell\Dell Peripheral Manager\DPM.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 12636
Type : C+G
Name : C:\Program Files\WindowsApps\Microsoft.Edge.GameAssist_1.0.3336.0_x64__8wekyb3d8bbwe\EdgeGameAssist.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 13008
Type : C+G
Name : C:\Program Files\WindowsApps\Microsoft.WindowsTerminal_1.22.11141.0_x64__8wekyb3d8bbwe\WindowsTerminal.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 13644
Type : C+G
Name : C:\Program Files (x86)\Microsoft\EdgeWebView\Application\137.0.3296.68\msedgewebview2.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 14180
Type : C+G
Name : C:\Windows\SystemApps\MicrosoftWindows.Client.CBS_cw5n1h2txyewy\TextInputHost.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 15436
Type : C+G
Name : C:\Program Files (x86)\Microsoft\EdgeWebView\Application\137.0.3296.68\msedgewebview2.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 16048
Type : C+G
Name : C:\Windows\explorer.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 17188
Type : C+G
Name : C:\Windows\ImmersiveControlPanel\SystemSettings.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 17580
Type : C+G
Name :
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 17904
Type : C+G
Name : C:\Windows\System32\ApplicationFrameHost.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 18744
Type : C+G
Name : C:\Windows\System32\ShellHost.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 19348
Type : C+G
Name : C:\Users\udbha\AppData\Roaming\Zoom\bin\Zoom.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 19580
Type : C
Name : C:\Users\udbha\AppData\Local\Programs\Ollama\ollama.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 20044
Type : C+G
Name : C:\Program Files\WindowsApps\SpotifyAB.SpotifyMusic_1.265.255.0_x64__zpdnekdrzrea0\XboxGameBarSpotify.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 21116
Type : C+G
Name : C:\Program Files (x86)\Microsoft\EdgeWebView\Application\137.0.3296.68\msedgewebview2.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 24024
Type : C+G
Name : C:\Program Files (x86)\Citrix\ICA Client\SelfServicePlugin\SelfService.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 26312
Type : C+G
Name : C:\Program Files\WindowsApps\5319275A.WhatsAppDesktop_2.2523.1.0_x64__cv1g1gvanyjgm\WhatsApp.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 27060
Type : C+G
Name : C:\Users\udbha\AppData\Roaming\Zoom\bin\Zoom.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 27560
Type : C+G
Name : C:\Windows\SystemApps\MicrosoftWindows.Client.CBS_cw5n1h2txyewy\SearchHost.exe
Used GPU Memory : Not available in WDDM driver model
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 27916
Type : C+G
Name : C:\Program Files (x86)\Microsoft\EdgeWebView\Application\137.0.3296.68\msedgewebview2.exe
Used GPU Memory : Not available in WDDM driver model
Capabilities
EGM : disabled

@rick-github commented on GitHub (Jun 12, 2025):

Power draw is too low:

Utilization
  Gpu : 95 %
GPU Power Readings
  Power Draw : 25.06 W

The GPU is at 95% utilization but only drawing 25W; my 4070 draws 200W at that utilization level. Do you have power-saver settings enabled on your machine? That would explain the slowness, but your initial post said 0% GPU utilization. Where did that figure come from?
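A quick way to check for power capping during inference is to sample power and SM utilization once a second with nvidia-smi's device monitor; the -s selectors p (power/temperature) and u (utilization) cover the relevant columns:

nvidia-smi dmon -s pu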

@udiram commented on GitHub (Jun 12, 2025):

Task Manager reports no GPU usage at all; I should have checked nvidia-smi, which DOES show the usage.

I'm on 'turbo' mode and plugged into a wall socket, so the power draw shouldn't be an issue. I'm wondering why Task Manager doesn't pick up on the process, and why the tok/s are still so slow. Are there any good LLM output benchmarkers I could try to confirm whether the bug is with Ollama?

![Image](https://github.com/user-attachments/assets/cd3342a3-b79f-4884-8b29-b60be946bea7)

![Image](https://github.com/user-attachments/assets/8efa89b6-1c1e-4563-8f90-950815326fe3)
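One likely explanation for the Task Manager discrepancy: Task Manager's GPU graphs default to the "3D" engine, while CUDA inference runs on the compute engine, so it can show ~0% even when nvidia-smi reports high utilization. Switching one of the engine graphs to "Cuda" (or "Compute") should reveal the load. The underlying counters can also be sampled directly; the counter path below assumes the standard Windows "GPU Engine" performance counters and wildcard instance matching:

typeperf "\GPU Engine(*engtype_Compute)\Utilization Percentage" -si 1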

@rick-github commented on GitHub (Jun 12, 2025):

You could try [LMStudio](https://lmstudio.ai/); it's an easy install, and you can check whether it also has problems utilizing the GPU. I don't know of any LLM benchmarks, but [OCCT](https://www.ocbase.com/occt/personal) has a GPU test which might be useful.
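For a raw tokens/s number without extra tools, Ollama itself reports timings when run with the verbose flag, e.g.:

ollama run qwen3:4b --verbose

which prints prompt eval and eval rates (tokens/s) after each response.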

@udiram commented on GitHub (Jun 12, 2025):

I've just run it through Geekbench and got 120723 on the OpenCL backend. (According to Geekbench, the 4070 Laptop GPU scores around 109218, so this seems in line: https://browser.geekbench.com/gpus/nvidia-geforce-rtx-4070-laptop)

Still no GPU utilization in Task Manager, so it must be a Task Manager issue; we can close this one, unless you think it's worth a PR to the FAQ or something, since I've seen other issues reporting the same thing. Happy to make that PR if you think it's worth it.

Cheers

@rick-github commented on GitHub (Jun 12, 2025):

I'll add it to the FAQ notes. Your last nvidia-smi screenshot shows 108W of power draw, so apart from the Task Manager reporting quirk this all looks normal; I'll close the ticket.

Reference: github-starred/ollama#33041