[GH-ISSUE #11896] GPU memory usage after update to 0.11.4 #54411

Closed
opened 2026-04-29 05:54:23 -05:00 by GiteaMirror · 9 comments

Originally created by @mcr-ksh on GitHub (Aug 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11896

What is the issue?

Hello,

version: 0.11.4

I have an issue after updating Ollama to the latest version (to get gpt-oss support).

Before, I used to run the same model with a ctx_len of 12288 and all layers on the GPU. Now I either get only a fraction of the layers on the GPU, or I have to reduce the ctx_len to around ~2600 to have all layers running on the GPU.

GPU: RTX 2080 Ti
Model: hf.co/mradermacher/Lamarck-14B-v0.6-GGUF:Q4_K_M 00327444d2e4 9.0 GB

What is the problem here?

time=2025-08-13T23:42:55.075Z level=INFO source=server.go:135 msg="system memory" total="48.0 GiB" free="38.7 GiB" free_swap="41.8 GiB"
time=2025-08-13T23:42:55.076Z level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=48 layers.split="" memory.available="[9.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.0 GiB" memory.required.partial="9.4 GiB" memory.required.kv="534.0 MiB" memory.required.allocations="[9.4 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="607.5 MiB" memory.graph.full="306.2 MiB" memory.graph.partial="913.7 MiB"
llama_model_loader: loaded meta data with 63 key-value pairs and 579 tensors from O:\models\blobs\sha256-657cef1d1ae3ab8b8c58622a8b01b12e364ebfc6777d23964edaa1b20ac979b8 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Lamarck 14b Converge Della Linear
llama_model_loader: - kv   3:                            general.version str              = v0.6
llama_model_loader: - kv   4:                       general.organization str              = Sometimesanotion
llama_model_loader: - kv   5:                           general.finetune str              = converge-della-linear
llama_model_loader: - kv   6:                           general.basename str              = lamarck
llama_model_loader: - kv   7:                         general.size_label str              = 14B
llama_model_loader: - kv   8:                            general.license str              = apache-2.0
llama_model_loader: - kv   9:                   general.base_model.count u32              = 7
llama_model_loader: - kv  10:                  general.base_model.0.name str              = Qwen2.5 14B Vimarckoso v3
llama_model_loader: - kv  11:               general.base_model.0.version str              = v3
llama_model_loader: - kv  12:          general.base_model.0.organization str              = Sometimesanotion
llama_model_loader: - kv  13:              general.base_model.0.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  14:                  general.base_model.1.name str              = Lamarck 14B v0.3
llama_model_loader: - kv  15:               general.base_model.1.version str              = v0.3
llama_model_loader: - kv  16:          general.base_model.1.organization str              = Sometimesanotion
llama_model_loader: - kv  17:              general.base_model.1.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  18:                  general.base_model.2.name str              = Qwenvergence 14B v3 Prose
llama_model_loader: - kv  19:               general.base_model.2.version str              = v3
llama_model_loader: - kv  20:          general.base_model.2.organization str              = Sometimesanotion
llama_model_loader: - kv  21:              general.base_model.2.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  22:                  general.base_model.3.name str              = DRT O1 14B
llama_model_loader: - kv  23:          general.base_model.3.organization str              = Krystalan
llama_model_loader: - kv  24:              general.base_model.3.repo_url str              = https://huggingface.co/Krystalan/DRT-...
llama_model_loader: - kv  25:                  general.base_model.4.name str              = Medius Erebus Magnum 14b
llama_model_loader: - kv  26:          general.base_model.4.organization str              = Underwoods
llama_model_loader: - kv  27:              general.base_model.4.repo_url str              = https://huggingface.co/underwoods/med...
llama_model_loader: - kv  28:                  general.base_model.5.name str              = Abliterate Qwenvergence
llama_model_loader: - kv  29:          general.base_model.5.organization str              = Sometimesanotion
llama_model_loader: - kv  30:              general.base_model.5.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  31:                  general.base_model.6.name str              = Qwen2.5 14B Instruct Abliterated v2
llama_model_loader: - kv  32:               general.base_model.6.version str              = v2
llama_model_loader: - kv  33:          general.base_model.6.organization str              = Huihui Ai
llama_model_loader: - kv  34:              general.base_model.6.repo_url str              = https://huggingface.co/huihui-ai/Qwen...
llama_model_loader: - kv  35:                               general.tags arr[str,3]       = ["mergekit", "merge", "text-generation"]
llama_model_loader: - kv  36:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  37:                          qwen2.block_count u32              = 48
llama_model_loader: - kv  38:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv  39:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv  40:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv  41:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  42:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  43:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  44:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  45:                          general.file_type u32              = 15
llama_model_loader: - kv  46:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  47:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  48:                      tokenizer.ggml.tokens arr[str,151665]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  49:                  tokenizer.ggml.token_type arr[i32,151665]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  50:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  51:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  52:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  53:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  54:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  55:               general.quantization_version u32              = 2
llama_model_loader: - kv  56:                                general.url str              = https://huggingface.co/mradermacher/L...
llama_model_loader: - kv  57:              mradermacher.quantize_version str              = 2
llama_model_loader: - kv  58:                  mradermacher.quantized_by str              = mradermacher
llama_model_loader: - kv  59:                  mradermacher.quantized_at str              = 2025-01-06T19:00:03+01:00
llama_model_loader: - kv  60:                  mradermacher.quantized_on str              = leia
llama_model_loader: - kv  61:                         general.source.url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  62:                  mradermacher.convert_type str              = hf
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.36 GiB (4.86 BPW) 
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 14.77 B
print_info: general.name     = Lamarck 14b Converge Della Linear
print_info: vocab type       = BPE
print_info: n_vocab          = 151665
print_info: n_merges         = 151387
print_info: BOS token        = 11 ','
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-08-13T23:42:55.282Z level=INFO source=server.go:438 msg="starting llama server" cmd="C:\\Users\\ksh.IRONSOFTWARE\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model O:\\models\\blobs\\sha256-657cef1d1ae3ab8b8c58622a8b01b12e364ebfc6777d23964edaa1b20ac979b8 --ctx-size 2848 --batch-size 512 --n-gpu-layers 48 --threads 32 --no-mmap --parallel 1 --port 49206"
time=2025-08-13T23:42:55.286Z level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-08-13T23:42:55.286Z level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-08-13T23:42:55.286Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
time=2025-08-13T23:42:55.361Z level=INFO source=runner.go:815 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 2080 Ti, compute capability 7.5, VMM: yes
load_backend: loaded CUDA backend from C:\Users\ksh.IRONSOFTWARE\AppData\Local\Programs\Ollama\lib\ollama\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\ksh.IRONSOFTWARE\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-08-13T23:42:55.545Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-08-13T23:42:55.547Z level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:49206"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2080 Ti) - 10107 MiB free
llama_model_loader: loaded meta data with 63 key-value pairs and 579 tensors from O:\models\blobs\sha256-657cef1d1ae3ab8b8c58622a8b01b12e364ebfc6777d23964edaa1b20ac979b8 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Lamarck 14b Converge Della Linear
llama_model_loader: - kv   3:                            general.version str              = v0.6
llama_model_loader: - kv   4:                       general.organization str              = Sometimesanotion
llama_model_loader: - kv   5:                           general.finetune str              = converge-della-linear
llama_model_loader: - kv   6:                           general.basename str              = lamarck
llama_model_loader: - kv   7:                         general.size_label str              = 14B
llama_model_loader: - kv   8:                            general.license str              = apache-2.0
llama_model_loader: - kv   9:                   general.base_model.count u32              = 7
llama_model_loader: - kv  10:                  general.base_model.0.name str              = Qwen2.5 14B Vimarckoso v3
llama_model_loader: - kv  11:               general.base_model.0.version str              = v3
llama_model_loader: - kv  12:          general.base_model.0.organization str              = Sometimesanotion
llama_model_loader: - kv  13:              general.base_model.0.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  14:                  general.base_model.1.name str              = Lamarck 14B v0.3
llama_model_loader: - kv  15:               general.base_model.1.version str              = v0.3
llama_model_loader: - kv  16:          general.base_model.1.organization str              = Sometimesanotion
llama_model_loader: - kv  17:              general.base_model.1.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  18:                  general.base_model.2.name str              = Qwenvergence 14B v3 Prose
llama_model_loader: - kv  19:               general.base_model.2.version str              = v3
llama_model_loader: - kv  20:          general.base_model.2.organization str              = Sometimesanotion
llama_model_loader: - kv  21:              general.base_model.2.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  22:                  general.base_model.3.name str              = DRT O1 14B
llama_model_loader: - kv  23:          general.base_model.3.organization str              = Krystalan
llama_model_loader: - kv  24:              general.base_model.3.repo_url str              = https://huggingface.co/Krystalan/DRT-...
llama_model_loader: - kv  25:                  general.base_model.4.name str              = Medius Erebus Magnum 14b
llama_model_loader: - kv  26:          general.base_model.4.organization str              = Underwoods
llama_model_loader: - kv  27:              general.base_model.4.repo_url str              = https://huggingface.co/underwoods/med...
llama_model_loader: - kv  28:                  general.base_model.5.name str              = Abliterate Qwenvergence
llama_model_loader: - kv  29:          general.base_model.5.organization str              = Sometimesanotion
llama_model_loader: - kv  30:              general.base_model.5.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  31:                  general.base_model.6.name str              = Qwen2.5 14B Instruct Abliterated v2
llama_model_loader: - kv  32:               general.base_model.6.version str              = v2
llama_model_loader: - kv  33:          general.base_model.6.organization str              = Huihui Ai
llama_model_loader: - kv  34:              general.base_model.6.repo_url str              = https://huggingface.co/huihui-ai/Qwen...
llama_model_loader: - kv  35:                               general.tags arr[str,3]       = ["mergekit", "merge", "text-generation"]
llama_model_loader: - kv  36:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  37:                          qwen2.block_count u32              = 48
llama_model_loader: - kv  38:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv  39:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv  40:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv  41:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  42:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  43:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  44:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  45:                          general.file_type u32              = 15
llama_model_loader: - kv  46:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  47:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  48:                      tokenizer.ggml.tokens arr[str,151665]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  49:                  tokenizer.ggml.token_type arr[i32,151665]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  50:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  51:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  52:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  53:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  54:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  55:               general.quantization_version u32              = 2
llama_model_loader: - kv  56:                                general.url str              = https://huggingface.co/mradermacher/L...
llama_model_loader: - kv  57:              mradermacher.quantize_version str              = 2
llama_model_loader: - kv  58:                  mradermacher.quantized_by str              = mradermacher
llama_model_loader: - kv  59:                  mradermacher.quantized_at str              = 2025-01-06T19:00:03+01:00
llama_model_loader: - kv  60:                  mradermacher.quantized_on str              = leia
llama_model_loader: - kv  61:                         general.source.url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  62:                  mradermacher.convert_type str              = hf
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.36 GiB (4.86 BPW) 
time=2025-08-13T23:42:55.789Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 5120
print_info: n_layer          = 48
print_info: n_head           = 40
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 5
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 13824
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = -1
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 14B
print_info: model params     = 14.77 B
print_info: general.name     = Lamarck 14b Converge Della Linear
print_info: vocab type       = BPE
print_info: n_vocab          = 151665
print_info: n_merges         = 151387
print_info: BOS token        = 11 ','
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 48 repeating layers to GPU
load_tensors: offloaded 48/49 layers to GPU
load_tensors:    CUDA_Host model buffer size =   607.50 MiB
load_tensors:        CUDA0 model buffer size =  7539.28 MiB
load_tensors:          CPU model buffer size =   416.56 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 2848
llama_context: n_ctx_per_seq = 2848
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (2848) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     0.60 MiB
llama_kv_cache_unified: kv_size = 2848, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1, padding = 32
llama_kv_cache_unified:      CUDA0 KV buffer size =   534.00 MiB
llama_kv_cache_unified: KV self size  =  534.00 MiB, K (f16):  267.00 MiB, V (f16):  267.00 MiB
llama_context:      CUDA0 compute buffer size =   913.70 MiB
llama_context:  CUDA_Host compute buffer size =    15.57 MiB
llama_context: graph nodes  = 1782
llama_context: graph splits = 4 (with bs=512), 3 (with bs=1)
time=2025-08-13T23:43:06.575Z level=INFO source=server.go:637 msg="llama runner started in 11.29 seconds"
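
As a side note, the KV-cache numbers in this log follow directly from the hyperparameters printed above (n_layer = 48, n_embd_k_gqa = n_embd_v_gqa = 1024, f16 cache). A small sketch, assuming the usual llama.cpp unified-cache sizing of two f16 tensors of n_layer × kv_size × n_embd_gqa elements each, reproduces the 534 MiB figure and shows what the original 12288-token context would need:

```python
# Sketch of the KV-cache arithmetic implied by the log above.
# Assumes two cache tensors (K and V), each n_layer * kv_size * n_embd_gqa
# elements at 2 bytes per element (f16).
def kv_cache_mib(n_layer: int, kv_size: int, n_embd_gqa: int, elem_bytes: int = 2) -> float:
    return 2 * n_layer * kv_size * n_embd_gqa * elem_bytes / 2**20

print(kv_cache_mib(48, 2848, 1024))   # 534.0  -> matches "KV self size = 534.00 MiB"
print(kv_cache_mib(48, 12288, 1024))  # 2304.0 -> ~2.25 GiB at the original context
```

At 12288 tokens the KV cache alone grows by roughly 1.7 GiB, which helps explain why the estimator no longer fits all 49 layers in the ~9.9 GiB of available VRAM.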

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-29 05:54:23 -05:00

@rick-github commented on GitHub (Aug 14, 2025):

time=2025-08-13T23:42:55.076Z level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=49
 layers.offload=48 layers.split="" memory.available="[9.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.0 GiB"
 memory.required.partial="9.4 GiB" memory.required.kv="534.0 MiB" memory.required.allocations="[9.4 GiB]"
 memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="607.5 MiB"
 memory.graph.full="306.2 MiB" memory.graph.partial="913.7 MiB"

The memory estimation logic is undergoing some work to improve accuracy, for example to reduce the possibility of an OOM. In this case it has made the server more conservative: it calculated that it can fit 48 of 49 layers on the GPU, using 9.4G of the available 9.9G VRAM. You can force more layers onto the GPU by setting num_gpu, as described in https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650. This may increase the possibility of an OOM or cause performance issues. There are more changes to the memory estimation logic in the pipeline which may improve layer allocations in future releases.
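
For illustration, here is a minimal sketch of forcing the layer count through the REST API (num_gpu and num_ctx are standard Ollama request options; the model name and values below simply mirror this issue and, as noted above, may OOM):

```python
# Hedged sketch: ask Ollama to offload all 49 layers at the original context.
# num_gpu/num_ctx are per-request options; the values here come from this issue.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "hf.co/mradermacher/Lamarck-14B-v0.6-GGUF:Q4_K_M",
        "prompt": "Hello",
        "stream": False,
        "options": {"num_gpu": 49, "num_ctx": 12288},
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

The same options can also be baked into a model via a Modelfile (PARAMETER num_gpu 49), per the linked comment.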


@ka-admin commented on GitHub (Aug 14, 2025):

I think the problem is the same as mine:
https://github.com/ollama/ollama/issues/11744


@swapnilshah10 commented on GitHub (Aug 14, 2025):

For me, if I use the Ollama app (the UI), the GPU is not used for qwen: GPU usage is 0-1% and the model is too slow. But when I use the command line, the GPU is used. Am I missing some configuration in the app?


@mcr-ksh commented on GitHub (Aug 14, 2025):

Thanks for the fast response. I will definitely try the num_gpu parameter, but regarding memory consumption, the performance monitor shows that the entire GPU VRAM is claimed, so I'm not sure that will help. In my case, the 48/49 I described is already with the reduced ctx_len of ~2600. If I switch to the original length of 28k, it's less than 20 layers, where it used to be the full 49 on the GPU. It's a small model that is supposed to fit entirely on the GPU.
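
One way to cross-check the performance monitor is to ask the driver directly what is allocated. A small sketch, assuming the nvidia-ml-py (pynvml) package is installed:

```python
# Query the driver for actual VRAM usage on GPU 0 (the RTX 2080 Ti here),
# independent of Ollama's own memory estimate.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"used {mem.used / 2**20:.0f} MiB of {mem.total / 2**20:.0f} MiB")
pynvml.nvmlShutdown()
```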


@maglat commented on GitHub (Aug 15, 2025):

The new 0.11.5-rc2 (pre-release) finally fixed the memory allocation. Try this version.


@alienatedsec commented on GitHub (Aug 15, 2025):

> The new 0.11.5-rc2 (pre-release) finally fixed the memory allocation. Try this version.

Don't forget the OLLAMA_NEW_ESTIMATES=1 env variable too.


@mcr-ksh commented on GitHub (Aug 18, 2025):

I've tried 0.11.5-rc2 with the new env variable and there is no change for me:

OLLAMA_MODELS=O:\models
OLLAMA_MAX_LOADED_MODELS=1
OLLAMA_HOST=0.0.0.0:11434
OLLAMA_NEW_ESTIMATES=1
OLLAMA_LOAD_TIMEOUT=30m
OLLAMA_DEBUG=0
OLLAMA_KEEP_ALIVE=24h
OLLAMA_FLASH_ATTENTION=1

llama_model_loader: loaded meta data with 63 key-value pairs and 579 tensors from O:\models\blobs\sha256-657cef1d1ae3ab8b8c58622a8b01b12e364ebfc6777d23964edaa1b20ac979b8 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Lamarck 14b Converge Della Linear
llama_model_loader: - kv   3:                            general.version str              = v0.6
llama_model_loader: - kv   4:                       general.organization str              = Sometimesanotion
llama_model_loader: - kv   5:                           general.finetune str              = converge-della-linear
llama_model_loader: - kv   6:                           general.basename str              = lamarck
llama_model_loader: - kv   7:                         general.size_label str              = 14B
llama_model_loader: - kv   8:                            general.license str              = apache-2.0
llama_model_loader: - kv   9:                   general.base_model.count u32              = 7
llama_model_loader: - kv  10:                  general.base_model.0.name str              = Qwen2.5 14B Vimarckoso v3
llama_model_loader: - kv  11:               general.base_model.0.version str              = v3
llama_model_loader: - kv  12:          general.base_model.0.organization str              = Sometimesanotion
llama_model_loader: - kv  13:              general.base_model.0.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  14:                  general.base_model.1.name str              = Lamarck 14B v0.3
llama_model_loader: - kv  15:               general.base_model.1.version str              = v0.3
llama_model_loader: - kv  16:          general.base_model.1.organization str              = Sometimesanotion
llama_model_loader: - kv  17:              general.base_model.1.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  18:                  general.base_model.2.name str              = Qwenvergence 14B v3 Prose
llama_model_loader: - kv  19:               general.base_model.2.version str              = v3
llama_model_loader: - kv  20:          general.base_model.2.organization str              = Sometimesanotion
llama_model_loader: - kv  21:              general.base_model.2.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  22:                  general.base_model.3.name str              = DRT O1 14B
llama_model_loader: - kv  23:          general.base_model.3.organization str              = Krystalan
llama_model_loader: - kv  24:              general.base_model.3.repo_url str              = https://huggingface.co/Krystalan/DRT-...
llama_model_loader: - kv  25:                  general.base_model.4.name str              = Medius Erebus Magnum 14b
llama_model_loader: - kv  26:          general.base_model.4.organization str              = Underwoods
llama_model_loader: - kv  27:              general.base_model.4.repo_url str              = https://huggingface.co/underwoods/med...
llama_model_loader: - kv  28:                  general.base_model.5.name str              = Abliterate Qwenvergence
llama_model_loader: - kv  29:          general.base_model.5.organization str              = Sometimesanotion
llama_model_loader: - kv  30:              general.base_model.5.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  31:                  general.base_model.6.name str              = Qwen2.5 14B Instruct Abliterated v2
llama_model_loader: - kv  32:               general.base_model.6.version str              = v2
llama_model_loader: - kv  33:          general.base_model.6.organization str              = Huihui Ai
llama_model_loader: - kv  34:              general.base_model.6.repo_url str              = https://huggingface.co/huihui-ai/Qwen...
llama_model_loader: - kv  35:                               general.tags arr[str,3]       = ["mergekit", "merge", "text-generation"]
llama_model_loader: - kv  36:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  37:                          qwen2.block_count u32              = 48
llama_model_loader: - kv  38:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv  39:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv  40:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv  41:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  42:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  43:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  44:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  45:                          general.file_type u32              = 15
llama_model_loader: - kv  46:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  47:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  48:                      tokenizer.ggml.tokens arr[str,151665]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  49:                  tokenizer.ggml.token_type arr[i32,151665]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  50:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  51:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  52:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  53:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  54:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  55:               general.quantization_version u32              = 2
llama_model_loader: - kv  56:                                general.url str              = https://huggingface.co/mradermacher/L...
llama_model_loader: - kv  57:              mradermacher.quantize_version str              = 2
llama_model_loader: - kv  58:                  mradermacher.quantized_by str              = mradermacher
llama_model_loader: - kv  59:                  mradermacher.quantized_at str              = 2025-01-06T19:00:03+01:00
llama_model_loader: - kv  60:                  mradermacher.quantized_on str              = leia
llama_model_loader: - kv  61:                         general.source.url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  62:                  mradermacher.convert_type str              = hf
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.36 GiB (4.86 BPW) 
load: printing all EOG tokens:
load:   - 151643 ('<|endoftext|>')
load:   - 151645 ('<|im_end|>')
load:   - 151662 ('<|fim_pad|>')
load:   - 151663 ('<|repo_name|>')
load:   - 151664 ('<|file_sep|>')
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 14.77 B
print_info: general.name     = Lamarck 14b Converge Della Linear
print_info: vocab type       = BPE
print_info: n_vocab          = 151665
print_info: n_merges         = 151387
print_info: BOS token        = 11 ','
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-08-18T10:16:55.862Z level=INFO source=server.go:211 msg="enabling flash attention"
time=2025-08-18T10:16:55.862Z level=WARN source=server.go:219 msg="kv cache type not supported by model" type=""
time=2025-08-18T10:16:55.879Z level=INFO source=server.go:383 msg="starting runner" cmd="C:\\Users\\ksh.IRONSOFTWARE\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model O:\\models\\blobs\\sha256-657cef1d1ae3ab8b8c58622a8b01b12e364ebfc6777d23964edaa1b20ac979b8 --port 63215"
time=2025-08-18T10:16:55.891Z level=INFO source=server.go:488 msg="system memory" total="48.0 GiB" free="39.1 GiB" free_swap="42.4 GiB"
time=2025-08-18T10:16:55.912Z level=INFO source=server.go:531 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=40 layers.split=[40] memory.available="[9.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="12.1 GiB" memory.required.partial="9.8 GiB" memory.required.kv="2.2 GiB" memory.required.allocations="[9.8 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="607.5 MiB" memory.graph.full="997.0 MiB" memory.graph.partial="1.2 GiB"
time=2025-08-18T10:16:55.956Z level=INFO source=runner.go:864 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 2080 Ti, compute capability 7.5, VMM: yes, ID: GPU-620f3fa0-47a8-a31e-eff8-c87476f589db
load_backend: loaded CUDA backend from C:\Users\ksh.IRONSOFTWARE\AppData\Local\Programs\Ollama\lib\ollama\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\ksh.IRONSOFTWARE\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-08-18T10:16:56.187Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-08-18T10:16:56.188Z level=INFO source=runner.go:900 msg="Server listening on 127.0.0.1:63215"
time=2025-08-18T10:16:56.197Z level=INFO source=runner.go:799 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:12200 KvCacheType: NumThreads:32 GPULayers:40[ID:GPU-620f3fa0-47a8-a31e-eff8-c87476f589db Layers:40(8..47)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2080 Ti) - 10107 MiB free
time=2025-08-18T10:16:56.368Z level=INFO source=server.go:1232 msg="waiting for llama runner to start responding"
time=2025-08-18T10:16:56.369Z level=INFO source=server.go:1266 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 63 key-value pairs and 579 tensors from O:\models\blobs\sha256-657cef1d1ae3ab8b8c58622a8b01b12e364ebfc6777d23964edaa1b20ac979b8 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Lamarck 14b Converge Della Linear
llama_model_loader: - kv   3:                            general.version str              = v0.6
llama_model_loader: - kv   4:                       general.organization str              = Sometimesanotion
llama_model_loader: - kv   5:                           general.finetune str              = converge-della-linear
llama_model_loader: - kv   6:                           general.basename str              = lamarck
llama_model_loader: - kv   7:                         general.size_label str              = 14B
llama_model_loader: - kv   8:                            general.license str              = apache-2.0
llama_model_loader: - kv   9:                   general.base_model.count u32              = 7
llama_model_loader: - kv  10:                  general.base_model.0.name str              = Qwen2.5 14B Vimarckoso v3
llama_model_loader: - kv  11:               general.base_model.0.version str              = v3
llama_model_loader: - kv  12:          general.base_model.0.organization str              = Sometimesanotion
llama_model_loader: - kv  13:              general.base_model.0.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  14:                  general.base_model.1.name str              = Lamarck 14B v0.3
llama_model_loader: - kv  15:               general.base_model.1.version str              = v0.3
llama_model_loader: - kv  16:          general.base_model.1.organization str              = Sometimesanotion
llama_model_loader: - kv  17:              general.base_model.1.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  18:                  general.base_model.2.name str              = Qwenvergence 14B v3 Prose
llama_model_loader: - kv  19:               general.base_model.2.version str              = v3
llama_model_loader: - kv  20:          general.base_model.2.organization str              = Sometimesanotion
llama_model_loader: - kv  21:              general.base_model.2.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  22:                  general.base_model.3.name str              = DRT O1 14B
llama_model_loader: - kv  23:          general.base_model.3.organization str              = Krystalan
llama_model_loader: - kv  24:              general.base_model.3.repo_url str              = https://huggingface.co/Krystalan/DRT-...
llama_model_loader: - kv  25:                  general.base_model.4.name str              = Medius Erebus Magnum 14b
llama_model_loader: - kv  26:          general.base_model.4.organization str              = Underwoods
llama_model_loader: - kv  27:              general.base_model.4.repo_url str              = https://huggingface.co/underwoods/med...
llama_model_loader: - kv  28:                  general.base_model.5.name str              = Abliterate Qwenvergence
llama_model_loader: - kv  29:          general.base_model.5.organization str              = Sometimesanotion
llama_model_loader: - kv  30:              general.base_model.5.repo_url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  31:                  general.base_model.6.name str              = Qwen2.5 14B Instruct Abliterated v2
llama_model_loader: - kv  32:               general.base_model.6.version str              = v2
llama_model_loader: - kv  33:          general.base_model.6.organization str              = Huihui Ai
llama_model_loader: - kv  34:              general.base_model.6.repo_url str              = https://huggingface.co/huihui-ai/Qwen...
llama_model_loader: - kv  35:                               general.tags arr[str,3]       = ["mergekit", "merge", "text-generation"]
llama_model_loader: - kv  36:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  37:                          qwen2.block_count u32              = 48
llama_model_loader: - kv  38:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv  39:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv  40:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv  41:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  42:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  43:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  44:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  45:                          general.file_type u32              = 15
llama_model_loader: - kv  46:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  47:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  48:                      tokenizer.ggml.tokens arr[str,151665]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  49:                  tokenizer.ggml.token_type arr[i32,151665]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  50:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  51:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  52:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  53:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  54:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  55:               general.quantization_version u32              = 2
llama_model_loader: - kv  56:                                general.url str              = https://huggingface.co/mradermacher/L...
llama_model_loader: - kv  57:              mradermacher.quantize_version str              = 2
llama_model_loader: - kv  58:                  mradermacher.quantized_by str              = mradermacher
llama_model_loader: - kv  59:                  mradermacher.quantized_at str              = 2025-01-06T19:00:03+01:00
llama_model_loader: - kv  60:                  mradermacher.quantized_on str              = leia
llama_model_loader: - kv  61:                         general.source.url str              = https://huggingface.co/sometimesanoti...
llama_model_loader: - kv  62:                  mradermacher.convert_type str              = hf
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.36 GiB (4.86 BPW) 
load: printing all EOG tokens:
load:   - 151643 ('<|endoftext|>')
load:   - 151645 ('<|im_end|>')
load:   - 151662 ('<|fim_pad|>')
load:   - 151663 ('<|repo_name|>')
load:   - 151664 ('<|file_sep|>')
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 5120
print_info: n_layer          = 48
print_info: n_head           = 40
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 5
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 13824
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = -1
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: model type       = 14B
print_info: model params     = 14.77 B
print_info: general.name     = Lamarck 14b Converge Della Linear
print_info: vocab type       = BPE
print_info: n_vocab          = 151665
print_info: n_merges         = 151387
print_info: BOS token        = 11 ','
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 40 repeating layers to GPU
load_tensors: offloaded 40/49 layers to GPU
load_tensors:    CUDA_Host model buffer size =  1901.43 MiB
load_tensors:        CUDA0 model buffer size =  6245.35 MiB
load_tensors:          CPU model buffer size =   416.56 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 12200
llama_context: n_ctx_per_seq = 12200
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 1
llama_context: kv_unified    = false
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (12200) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     0.60 MiB
llama_kv_cache_unified:      CUDA0 KV buffer size =  1920.00 MiB
llama_kv_cache_unified:        CPU KV buffer size =   384.00 MiB
llama_kv_cache_unified: size = 2304.00 MiB ( 12288 cells,  48 layers,  1/1 seqs), K (f16): 1152.00 MiB, V (f16): 1152.00 MiB
llama_context:      CUDA0 compute buffer size =   913.70 MiB
llama_context:  CUDA_Host compute buffer size =    34.01 MiB
llama_context: graph nodes  = 1639
llama_context: graph splits = 116 (with bs=512), 3 (with bs=1)
time=2025-08-18T10:18:17.580Z level=INFO source=server.go:1270 msg="llama runner started in 81.70 seconds"
time=2025-08-18T10:18:17.580Z level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-08-18T10:18:17.580Z level=INFO source=server.go:1232 msg="waiting for llama runner to start responding"
time=2025-08-18T10:18:17.580Z level=INFO source=server.go:1270 msg="llama runner started in 81.70 seconds"
init: embeddings required but some input tokens were not marked as outputs -> overriding
output_reserve: reallocating output buffer from size 0.60 MiB to 306.22 MiB
init: embeddings required but some input tokens were not marked as outputs -> overriding
init: embeddings required but some input tokens were not marked as outputs -> overriding
init: embeddings required but some input tokens were not marked as outputs -> overriding
init: embeddings required but some input tokens were not marked as outputs -> overriding
init: embeddings required but some input tokens were not marked as outputs -> overriding
init: embeddings required but some input tokens were not marked as outputs -> overriding
init: embeddings required but some input tokens were not marked as outputs -> overriding
init: embeddings required but some input tokens were not marked as outputs -> overriding
init: embeddings required but some input tokens were not marked as outputs -> overriding
init: embeddings required but some input tokens were not marked as outputs -> overriding
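An aside on reading the `msg=offload` line near the top of this log: the numbers alone explain the partial offload. Below is a minimal back-of-the-envelope sketch, not ollama's actual scheduler code (the real estimator also reserves allocation-dependent overheads, which is why it settles on 40 layers rather than the ~43 this simplification yields):

```python
# Rough per-layer VRAM arithmetic using values from the msg=offload log line.
# Hypothetical simplification of ollama's estimator; all figures in MiB.

GIB = 1024.0

available     = 9.9 * GIB   # memory.available
weights_rep   = 7.4 * GIB   # memory.weights.repeating (48 repeating layers)
kv_required   = 2.2 * GIB   # memory.required.kv at the 12200-token context
graph_partial = 1.2 * GIB   # memory.graph.partial, reserved up front
n_repeating   = 48

per_layer = (weights_rep + kv_required) / n_repeating  # ~205 MiB per layer
budget = available - graph_partial
print(f"fits ~{int(budget // per_layer)} layers")      # ~43; the estimator
                                                       # picks 40 with margins
```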
@rick-github commented on GitHub (Aug 18, 2025):

```
time=2025-08-18T10:16:55.912Z level=INFO source=server.go:531 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=40 layers.split=[40] memory.available="[9.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="12.1 GiB" memory.required.partial="9.8 GiB" memory.required.kv="2.2 GiB" memory.required.allocations="[9.8 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="607.5 MiB" memory.graph.full="997.0 MiB" memory.graph.partial="1.2 GiB"
```

The size of the context window was increased from 2848 to 12200, which caused an increase in the size of the memory graph and the KV cache. As a result, ollama estimates that it can fit only 40 of the 49 layers in VRAM.

You can reduce the memory footprint by setting `OLLAMA_KV_CACHE_TYPE` to q8_0 or q4_0 to reduce the size of the KV cache, and by setting `OLLAMA_NEW_ENGINE=1` so that the new estimation logic (from `OLLAMA_NEW_ESTIMATES=1`) is used.
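For concreteness, the KV-cache figures in the logs can be reproduced from the model's shape. A sketch, assuming the f16 K/V layout llama.cpp reports for this model (48 layers, n_embd_k_gqa = n_embd_v_gqa = 1024, and the requested 12200-token context padded to 12288 cache cells, as the `llama_kv_cache_unified` line shows):

```python
# f16 KV-cache size, matching "llama_kv_cache_unified: size = 2304.00 MiB".
n_cells   = 12288   # KvSize 12200, padded to 12288 cells in the log
n_layer   = 48
n_embd_kv = 1024    # n_embd_k_gqa == n_embd_v_gqa
bytes_f16 = 2

kv_mib = 2 * n_cells * n_layer * n_embd_kv * bytes_f16 / 2**20  # K + V
print(kv_mib)  # -> 2304.0 MiB (~2.2 GiB); the same cache at the old
               #    2848-cell context works out to ~534 MiB
```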

@mcr-ksh commented on GitHub (Aug 18, 2025):

@rick-github Correct. I've increased the size because before the bug the ctx_len was at 12288, which used to fit the entire model on the GPU. `OLLAMA_NEW_ENGINE=1` did the trick. I've also set `OLLAMA_KV_CACHE_TYPE=q4_0`. Now all layers are back on the GPU, as it used to be:

```
time=2025-08-18T13:37:30.204Z level=INFO source=ggml.go:130 msg="" architecture=qwen2 file_type=Q4_K_M name="Lamarck 14b Converge Della Linear" description="" num_tensors=579 num_key_values=64
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 2080 Ti, compute capability 7.5, VMM: yes, ID: GPU-620f3fa0-47a8-a31e-eff8-c87476f589db
load_backend: loaded CUDA backend from C:\Users\ksh.IRONSOFTWARE\AppData\Local\Programs\Ollama\lib\ollama\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\ksh.IRONSOFTWARE\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-08-18T13:37:30.373Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-08-18T13:37:30.555Z level=INFO source=runner.go:925 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:12200 KvCacheType:q4_0 NumThreads:32 GPULayers:49[ID:GPU-620f3fa0-47a8-a31e-eff8-c87476f589db Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-08-18T13:37:32.075Z level=INFO source=runner.go:925 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:12200 KvCacheType:q4_0 NumThreads:32 GPULayers:49[ID:GPU-620f3fa0-47a8-a31e-eff8-c87476f589db Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-08-18T13:37:32.076Z level=INFO source=backend.go:310 msg="model weights" device=CUDA0 size="8.0 GiB"
time=2025-08-18T13:37:32.076Z level=INFO source=backend.go:315 msg="model weights" device=CPU size="416.6 MiB"
time=2025-08-18T13:37:32.076Z level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="648.0 MiB"
time=2025-08-18T13:37:32.076Z level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="86.0 MiB"
time=2025-08-18T13:37:32.076Z level=INFO source=backend.go:337 msg="compute graph" device=CPU size="10.0 MiB"
time=2025-08-18T13:37:32.076Z level=INFO source=backend.go:342 msg="total memory" size="9.1 GiB"
time=2025-08-18T13:37:32.076Z level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-08-18T13:37:32.076Z level=INFO source=server.go:1232 msg="waiting for llama runner to start responding"
time=2025-08-18T13:37:32.076Z level=INFO source=server.go:1266 msg="waiting for server to become available" status="llm server loading model"
time=2025-08-18T13:37:32.077Z level=INFO source=ggml.go:486 msg="offloading 48 repeating layers to GPU"
time=2025-08-18T13:37:32.077Z level=INFO source=ggml.go:492 msg="offloading output layer to GPU"
time=2025-08-18T13:37:32.077Z level=INFO source=ggml.go:497 msg="offloaded 49/49 layers to GPU"
```
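The 648 MiB KV-cache figure in this log is consistent with the q4_0 block layout. A sketch, assuming ggml's q4_0 format of 32 values per 18-byte block (i.e. 0.5625 bytes per value):

```python
# q4_0 packs 32 values into 18 bytes (16 bytes of 4-bit quants + f16 scale).
n_cells, n_layer, n_embd_kv = 12288, 48, 1024
bytes_per_value = 18 / 32  # 0.5625

kv_mib = 2 * n_cells * n_layer * n_embd_kv * bytes_per_value / 2**20  # K + V
print(kv_mib)  # -> 648.0 MiB, matching the 'kv cache' device=CUDA0 line
```

That is roughly a 3.6x reduction over the 2304 MiB f16 cache, which is what freed enough VRAM for all 49 layers to fit.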
<!-- gh-comment-id:3196967240 --> @mcr-ksh commented on GitHub (Aug 18, 2025): @rick-github correct. I've increased the size because before the bug the ctx_len was at 12288, which used to fit the entire model into the GPU. `OLLAMA_NEW_ENGINE=1` did the trick. i've also set `OLLAMA_KV_CACHE_TYPE=q4_0`. Now all layers are back on the GPU as it used to be: ``` time=2025-08-18T13:37:30.204Z level=INFO source=ggml.go:130 msg="" architecture=qwen2 file_type=Q4_K_M name="Lamarck 14b Converge Della Linear" description="" num_tensors=579 num_key_values=64 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 2080 Ti, compute capability 7.5, VMM: yes, ID: GPU-620f3fa0-47a8-a31e-eff8-c87476f589db load_backend: loaded CUDA backend from C:\Users\ksh.IRONSOFTWARE\AppData\Local\Programs\Ollama\lib\ollama\ggml-cuda.dll load_backend: loaded CPU backend from C:\Users\ksh.IRONSOFTWARE\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll time=2025-08-18T13:37:30.373Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang) time=2025-08-18T13:37:30.555Z level=INFO source=runner.go:925 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:12200 KvCacheType:q4_0 NumThreads:32 GPULayers:49[ID:GPU-620f3fa0-47a8-a31e-eff8-c87476f589db Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-08-18T13:37:32.075Z level=INFO source=runner.go:925 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:12200 KvCacheType:q4_0 NumThreads:32 GPULayers:49[ID:GPU-620f3fa0-47a8-a31e-eff8-c87476f589db Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-08-18T13:37:32.076Z level=INFO source=backend.go:310 msg="model weights" device=CUDA0 size="8.0 GiB" time=2025-08-18T13:37:32.076Z level=INFO source=backend.go:315 msg="model weights" device=CPU size="416.6 MiB" time=2025-08-18T13:37:32.076Z level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="648.0 MiB" time=2025-08-18T13:37:32.076Z level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="86.0 MiB" time=2025-08-18T13:37:32.076Z level=INFO source=backend.go:337 msg="compute graph" device=CPU size="10.0 MiB" time=2025-08-18T13:37:32.076Z level=INFO source=backend.go:342 msg="total memory" size="9.1 GiB" time=2025-08-18T13:37:32.076Z level=INFO source=sched.go:473 msg="loaded runners" count=1 time=2025-08-18T13:37:32.076Z level=INFO source=server.go:1232 msg="waiting for llama runner to start responding" time=2025-08-18T13:37:32.076Z level=INFO source=server.go:1266 msg="waiting for server to become available" status="llm server loading model" time=2025-08-18T13:37:32.077Z level=INFO source=ggml.go:486 msg="offloading 48 repeating layers to GPU" time=2025-08-18T13:37:32.077Z level=INFO source=ggml.go:492 msg="offloading output layer to GPU" time=2025-08-18T13:37:32.077Z level=INFO source=ggml.go:497 msg="offloaded 49/49 layers to GPU" ```
Reference: github-starred/ollama#54411