[GH-ISSUE #10911] There is still a VRAM estimation issue in version 0.9.0 #7176

Closed
opened 2026-04-12 19:10:24 -05:00 by GiteaMirror · 7 comments

Originally created by @konn-submarine-bu on GitHub (May 30, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10911

Originally assigned to: @jessegross on GitHub.

What is the issue?

(Screenshot from the original issue: https://github.com/user-attachments/assets/e6d66f24-5acd-4110-80ee-8f15b3f7e8b3)

Relevant log output

time=2025-05-30T15:42:11.816+08:00 level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0187921 runner.size="42.6 GiB" runner.vram="42.6 GiB" runner.parallel=10 runner.pid=7948 runner.model=C:\Users\dru1szh\.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312
time=2025-05-30T15:42:11.929+08:00 level=INFO source=server.go:135 msg="system memory" total="127.7 GiB" free="77.9 GiB" free_swap="96.1 GiB"
time=2025-05-30T15:42:11.931+08:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=31 layers.split="" memory.available="[46.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="66.1 GiB" memory.required.partial="46.0 GiB" memory.required.kv="20.0 GiB" memory.required.allocations="[46.0 GiB]" memory.weights.total="18.4 GiB" memory.weights.repeating="17.8 GiB" memory.weights.nonrepeating="608.6 MiB" memory.graph.full="26.7 GiB" memory.graph.partial="26.7 GiB"
llama_model_loader: loaded meta data with 27 key-value pairs and 707 tensors from C:\Users\dru1szh\.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 32B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 32B
llama_model_loader: - kv   5:                          qwen3.block_count u32              = 64
llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 25600
llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 64
llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
time=2025-05-30T15:42:12.066+08:00 level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2690338 runner.size="42.6 GiB" runner.vram="42.6 GiB" runner.parallel=10 runner.pid=7948 runner.model=C:\Users\dru1szh\.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - kv  26:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  257 tensors
llama_model_loader: - type  f16:   64 tensors
llama_model_loader: - type q4_K:  353 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 18.81 GiB (4.93 BPW) 
load: special tokens cache size = 26
time=2025-05-30T15:42:12.316+08:00 level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5189133 runner.size="42.6 GiB" runner.vram="42.6 GiB" runner.parallel=10 runner.pid=7948 runner.model=C:\Users\dru1szh\.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 32.76 B
print_info: general.name     = Qwen3 32B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-05-30T15:42:12.364+08:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\SHV4SZH\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\dru1szh\\.ollama\\models\\blobs\\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312 --ctx-size 81920 --batch-size 512 --n-gpu-layers 31 --threads 40 --no-mmap --parallel 10 --port 53441"
time=2025-05-30T15:42:13.019+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-05-30T15:42:13.019+08:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-05-30T15:42:13.021+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-05-30T15:42:13.365+08:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\SHV4SZH\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-skylakex.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA RTX A6000, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\SHV4SZH\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-05-30T15:42:13.743+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-05-30T15:42:13.748+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:53441"
time=2025-05-30T15:42:13.780+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA RTX A6000) - 47545 MiB free
llama_model_loader: loaded meta data with 27 key-value pairs and 707 tensors from C:\Users\dru1szh\.ollama\models\blobs\sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 32B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 32B
llama_model_loader: - kv   5:                          qwen3.block_count u32              = 64
llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 25600
llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 64
llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - kv  26:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  257 tensors
llama_model_loader: - type  f16:   64 tensors
llama_model_loader: - type q4_K:  353 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 18.81 GiB (4.93 BPW) 
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 40960
print_info: n_embd           = 5120
print_info: n_layer          = 64
print_info: n_head           = 64
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 25600
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 40960
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 32B
print_info: model params     = 32.76 B
print_info: general.name     = Qwen3 32B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 31 repeating layers to GPU
load_tensors: offloaded 31/65 layers to GPU
load_tensors:    CUDA_Host model buffer size =  9994.29 MiB
load_tensors:        CUDA0 model buffer size =  8848.12 MiB
load_tensors:          CPU model buffer size =   417.30 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 10
llama_context: n_ctx         = 81920
llama_context: n_ctx_per_seq = 8192
llama_context: n_batch       = 5120
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (8192) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     5.99 MiB
llama_kv_cache_unified: kv_size = 81920, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1, padding = 32
llama_kv_cache_unified:      CUDA0 KV buffer size =  9920.00 MiB
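
For reference, the KV-cache numbers in this log are internally consistent, which points at the graph estimate (memory.graph.full="26.7 GiB") as the likely source of the overestimate. A quick sketch checking the arithmetic with only values printed above (f16 cache = 2 bytes per element; kv_size = 81920 because --parallel 10 multiplies the 8192 per-sequence context; n_embd_k_gqa = n_embd_v_gqa = 1024; 64 layers, 31 of them offloaded):

$ echo "$(( 81920 * (1024 + 1024) * 2 / 1024 / 1024 )) MiB per layer"   # 320 MiB
$ echo "$(( 320 * 64 / 1024 )) GiB for all 64 layers"                   # 20 GiB = memory.required.kv
$ echo "$(( 320 * 31 )) MiB for the 31 offloaded layers"                # 9920 MiB = the CUDA0 KV buffer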

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.9.0

GiteaMirror added the bug label 2026-04-12 19:10:24 -05:00

@johnnysn commented on GitHub (May 31, 2025):

Ollama seems to be overestimating memory use, especially for qwen3. With two RTX 3090s, I can only run qwen3:32b-q8_0 with a context size of up to 12K tokens before Ollama starts pushing layers to system RAM.

$ ollama -v
ollama version is 0.9.0

ollama ps shows a VRAM use of 47 GB:

$ ollama ps
NAME                ID              SIZE     PROCESSOR    UNTIL
qwen3:lg-or-x1.5    ed491e0bbed5    47 GB    100% GPU     4 minutes from now

But there is actually plenty of VRAM available on the system:

$ nvidia-smi
Sat May 31 12:09:09 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.77                 Driver Version: 565.77         CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090        Off |   00000000:01:00.0 Off |                  N/A |
| 30%   36C    P0            182W /  200W |   18500MiB /  24576MiB |     45%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce RTX 3090        Off |   00000000:21:00.0 Off |                  N/A |
|  0%   41C    P0            181W /  200W |   18653MiB /  24576MiB |     50%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      2022      G   /usr/lib/xorg/Xorg                              9MiB |
|    0   N/A  N/A      3026      G   /usr/bin/gnome-shell                            8MiB |
|    0   N/A  N/A     17892      C   /usr/local/bin/ollama                       18454MiB |
|    1   N/A  N/A      2022      G   /usr/lib/xorg/Xorg                              4MiB |
|    1   N/A  N/A     17892      C   /usr/local/bin/ollama                       18630MiB |
+-----------------------------------------------------------------------------------------+

Is there anything specific to this model architecture that might be causing this behavior? I can fit a much larger context window in the GPUs with qwen2.5:32b-instruct-q8_0, which is about the same size.
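
For anyone comparing the two numbers themselves, a quick sketch for summing the VRAM the ollama process actually holds (the query fields come from nvidia-smi's --query-compute-apps option; the awk match on the process name is an assumption about your setup):

$ nvidia-smi --query-compute-apps=pid,process_name,used_memory \
             --format=csv,noheader,nounits \
    | awk -F', ' '/ollama/ {sum += $3} END {print sum " MiB actually allocated"}'

On the output above, that gives 18454 + 18630 = 37084 MiB, versus the 47 GB that ollama ps reports.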


@johnnysn commented on GitHub (May 31, 2025):

I just tested the default quantized version of qwen3:32b, which is much smaller than the q8 version. The VRAM overestimation looks significantly worse: for a context window size of just 27.5K, Ollama reports 48 GB of VRAM usage, while the system is actually consuming less than 28 GB.

$ ollama ps
NAME              ID              SIZE     PROCESSOR    UNTIL
qwen3:lg-df-x4    7e3b54845a5b    48 GB    100% GPU     5 minutes from now
$ nvidia-smi
Sat May 31 14:52:05 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.77                 Driver Version: 565.77         CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090        Off |   00000000:01:00.0 Off |                  N/A |
|  0%   35C    P0            182W /  200W |   13658MiB /  24576MiB |     48%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce RTX 3090        Off |   00000000:21:00.0 Off |                  N/A |
|  0%   37C    P0            187W /  200W |   13749MiB /  24576MiB |     49%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      1999      G   /usr/lib/xorg/Xorg                              9MiB |
|    0   N/A  N/A      3075      G   /usr/bin/gnome-shell                            8MiB |
|    0   N/A  N/A    199197      C   /usr/local/bin/ollama                       13612MiB |
|    1   N/A  N/A      1999      G   /usr/lib/xorg/Xorg                              4MiB |
|    1   N/A  N/A    199197      C   /usr/local/bin/ollama                       13726MiB |
+-----------------------------------------------------------------------------------------+

It looks like the problem scales with the context size and the number of available GPUs (#10740).
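
One thing to keep in mind when reading these sizes: the scheduler plans the KV cache for all parallel slots at once, so the allocated context is num_parallel × num_ctx. A toy check with the values from the log at the top of this issue (where --parallel 10 turns an 8K per-sequence context into --ctx-size 81920):

$ num_ctx=8192; num_parallel=10
$ echo $(( num_ctx * num_parallel ))   # 81920, the --ctx-size the runner is launched with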


@ccebelenski commented on GitHub (Jun 1, 2025):

The estimation (and the memory use reported through ollama ps) is really off. For example:

NAME               ID              SIZE     PROCESSOR          UNTIL
deepseek-r1:32b    edba8017331d    80 GB    39%/61% CPU/GPU    57 minutes from now

That's not even physically possible on that particular machine, and when the model is running it doesn't touch the CPU at all, so the offload percentage is wrong as well. Worse, the model is spread across 4 cards, and somehow one card isn't being used at all: NVTOP reports almost no memory usage on it.

Doing a quick calculation, actual total VRAM usage is around 42 GB, or slightly more than half of what is reported. I can provide logs if needed, but they're not substantially different from the example above.

Flash attention is set, with a quantized model (Q4) and a quantized KV cache (Q8). num_gpu is 65 (the total for this model).
Context is 128K.
The GPUs are 4x 4090 Ti 16 GB cards.
(Just FYI, generation speed is actually not bad: around 14 TPS.)
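
As a rough cross-check of why 80 GB is implausible here (a sketch, assuming deepseek-r1:32b shares Qwen2.5-32B's geometry of 64 layers and 8 KV heads of dimension 128, i.e. 1024 K plus 1024 V elements per token per layer, and that q8_0 stores 32 elements in 34 bytes):

$ echo "$(( 131072 * (1024 + 1024) * 34 / 32 * 64 / 1024 / 1024 / 1024 )) GiB KV cache"   # 17 GiB at 128K context

Add roughly 19 GiB of Q4 weights plus graph overhead, and the observed ~42 GB is in the right ballpark; 80 GB is not.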


@Master-Pr0grammer commented on GitHub (Jun 4, 2025):

Yes, I am also having this issue. I have a GTX 1080 Ti with 12 GB of VRAM, and Ollama is only able to use 7 GB (~55%) before offloading to the CPU. No matter what I try, it will not allocate more than 7 GB.

It is very frustrating: models I used to be able to run, I can no longer run. I used to run 14B models entirely on the GPU, but now I can't even run Gemma 12B without 65% CPU offload.

There also seem to be many bugs with multi-GPU support. I also have a GTX 1050 Ti with 4 GB of VRAM, but when I use it with some models like Gemma, Ollama just crashes at inference time; other times it works with both GPUs.
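
Not a fix for the estimation itself, but for the multi-GPU crashes it may be worth testing with the smaller card hidden, so the scheduler only plans against the 1080 Ti (the device index here is an assumption; check nvidia-smi -L for yours):

$ CUDA_VISIBLE_DEVICES=0 ollama serve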


@jessegross commented on GitHub (Jun 16, 2025):

There is an early preview of Ollama's new memory management, with the goal of comprehensively fixing these issues. It is still in development; however, if you want to compile from source and try it out, you can find it here: https://github.com/ollama/ollama/pull/11090

Please leave any feedback on that PR.
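
A rough sketch of checking out and building that PR locally (the local branch name is arbitrary, and a plain go build produces a CPU-only binary; see the repository's development docs for GPU-enabled builds):

$ git clone https://github.com/ollama/ollama.git && cd ollama
$ git fetch origin pull/11090/head:memory-preview
$ git checkout memory-preview
$ go build .
$ ./ollama serve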


@jessegross commented on GitHub (Sep 24, 2025):

I'm going to go ahead and close this now that the new memory management logic is on by default. If you continue to see problems, please file a new issue.


@ivanbaldo commented on GitHub (Sep 24, 2025):

Thanks a lot for your work on this @jessegross, well done!!!

Reference: github-starred/ollama#7176