[GH-ISSUE #13025] Apertus-70B-Instruct-2509: full GPU layer allocation fails on multi-GPU setup; works only when at least one layer is offloaded #8627

Open
opened 2026-04-12 21:22:09 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @chrisoutwright on GitHub (Nov 9, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13025

What is the issue?

Summary:
When trying to load Apertus-70B-Instruct-2509 with all layers on GPU (dual GPU RTX 4090/3090 setup), Ollama consistently fails with
cudaMalloc failed: out of memory.

The issue is not VRAM-related, since it happens across multiple quantizations that fit easily within 48 GB total GPU memory.
As soon as I offload even a single layer to CPU, the model loads and runs perfectly (using a context below 10k for testing).
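
For reference, a minimal sketch of the two configurations being compared, assuming the standard Ollama REST API on the default port and its num_gpu / num_ctx request options (the model tag below is a placeholder for however the GGUF was imported):

```python
# Minimal repro sketch (assumptions: Ollama REST API on the default port 11434,
# model imported locally as "apertus-70b-instruct" -- adjust to your own tag).
import requests

def try_load(num_gpu: int) -> None:
    # num_gpu = number of layers Ollama offloads to the GPUs.
    # 81 = all layers on GPU (fails), 80 = one layer kept on CPU (works).
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "apertus-70b-instruct",  # placeholder model tag
            "prompt": "Hello",
            "stream": False,
            "options": {"num_gpu": num_gpu, "num_ctx": 1000},
        },
        timeout=600,
    )
    print(num_gpu, r.status_code, r.json().get("error", "ok"))

try_load(81)  # full GPU offload -> cudaMalloc failed: out of memory
try_load(80)  # one layer on CPU -> loads and runs
```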


Hardware & Environment

  • GPUs: RTX 4090/3090 (24 GB each)
  • RAM: 64 GB
  • OS: Windows 11
  • Ollama Version: latest (Nov 2025)

Models tested (GGUF V3):

Quantization                      VRAM Requirement  Result
Q3_K_XL                           ~36.6 GB          Fails (full GPU)
IQ4_XS                            ~38 GB            Fails (full GPU)
Q4_K_S                            ~40.4 GB          Fails (full GPU)
IQ4_NL                            ~40.1 GB          Fails (full GPU)
Q4_0                              ~40.2 GB          Fails (full GPU)
Any of the above (1 layer CPU)    -                 Works
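
Even the largest of these estimates leaves headroom against the nominal 48 GB of combined VRAM; a quick sanity check using only the numbers from the table above:

```python
# Sanity check: each tested quantization fits within the combined VRAM
# of the two cards (estimates taken from the table above).
total_vram_gb = 24 + 24  # RTX 4090 + RTX 3090
estimates_gb = {"Q3_K_XL": 36.6, "IQ4_XS": 38.0, "Q4_K_S": 40.4,
                "IQ4_NL": 40.1, "Q4_0": 40.2}
for quant, need in estimates_gb.items():
    print(f"{quant}: ~{need} GB needed, {total_vram_gb - need:.1f} GB headroom")
```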

Settings:

  • FlashAttention: enabled
  • Context window: 1,000 → 5,536 (no difference)

Relevant log output

time=2025-11-09T05:16:25.846+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\Chris\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 57207"
time=2025-11-09T05:16:26.114+01:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-09T05:16:26.115+01:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=8
llama_model_loader: loaded meta data with 43 key-value pairs and 804 tensors from D:\Ollama\models\blobs\sha256-391768201f80e7d337e67e2024cf0c4339529bb9d9938ec80f666bb160cf95e1 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = apertus
llama_model_loader: - kv   1:                              xielu.alpha_n arr[f32,80]      = [7.593750, 6.500000, 4.656250, 4.1250...
llama_model_loader: - kv   2:                              xielu.alpha_p arr[f32,80]      = [2.796875, 11.125000, 7.000000, 5.968...
llama_model_loader: - kv   3:                                 xielu.beta arr[f32,80]      = [0.500000, 0.500000, 0.500000, 0.5000...
llama_model_loader: - kv   4:                                  xielu.eps arr[f32,80]      = [-0.000001, -0.000001, -0.000001, -0....
llama_model_loader: - kv   5:                               general.type str              = model
llama_model_loader: - kv   6:                               general.name str              = Apertus-70B-Instruct-2509
llama_model_loader: - kv   7:                            general.version str              = 2509
llama_model_loader: - kv   8:                           general.finetune str              = Instruct
llama_model_loader: - kv   9:                           general.basename str              = Apertus-70B-Instruct-2509
llama_model_loader: - kv  10:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv  11:                         general.size_label str              = 70B
llama_model_loader: - kv  12:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv  13:                        apertus.block_count u32              = 80
llama_model_loader: - kv  14:                     apertus.context_length u32              = 65536
llama_model_loader: - kv  15:                   apertus.embedding_length u32              = 8192
llama_model_loader: - kv  16:                apertus.feed_forward_length u32              = 43008
llama_model_loader: - kv  17:               apertus.attention.head_count u32              = 64
llama_model_loader: - kv  18:            apertus.attention.head_count_kv u32              = 8
llama_model_loader: - kv  19:                     apertus.rope.freq_base f32              = 12000000.000000
llama_model_loader: - kv  20:   apertus.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  21:                         apertus.vocab_size u32              = 131072
llama_model_loader: - kv  22:               apertus.rope.dimension_count u32              = 128
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = tekken
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "<pad>", "[/...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,269443]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ �...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 68
llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 3
llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  33:               tokenizer.ggml.add_sep_token bool             = false
llama_model_loader: - kv  34:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {# Unsloth template fixes #}\n{%- macr...
llama_model_loader: - kv  36:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  37:               general.quantization_version u32              = 2
llama_model_loader: - kv  38:                          general.file_type u32              = 12
llama_model_loader: - kv  39:                      quantize.imatrix.file str              = Apertus-70B-Instruct-2509-GGUF/imatri...
llama_model_loader: - kv  40:                   quantize.imatrix.dataset str              = unsloth_calibration_Apertus-70B-Instr...
llama_model_loader: - kv  41:             quantize.imatrix.entries_count u32              = 480
llama_model_loader: - kv  42:              quantize.imatrix.chunks_count u32              = 143
llama_model_loader: - type  f32:  322 tensors
llama_model_loader: - type q3_K:  192 tensors
llama_model_loader: - type q4_K:  151 tensors
llama_model_loader: - type q5_K:    5 tensors
llama_model_loader: - type q6_K:   86 tensors
llama_model_loader: - type iq3_xxs:   16 tensors
llama_model_loader: - type iq3_s:    8 tensors
llama_model_loader: - type iq4_xs:   24 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q3_K - Medium
print_info: file size   = 34.10 GiB (4.15 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 31 ('<reponame>')
load:   - 68 ('<|assistant_end|>')
load: special tokens cache size = 1000
load: token to piece cache size = 0.8499 MB
print_info: arch             = apertus
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 70.60 B
print_info: general.name     = Apertus-70B-Instruct-2509
print_info: vocab type       = BPE
print_info: n_vocab          = 131072
print_info: n_merges         = 269443
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 68 '<|assistant_end|>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 3 '<pad>'
print_info: LF token         = 1010 'Ċ'
print_info: FIM REP token    = 31 '<reponame>'
print_info: EOG token        = 31 '<reponame>'
print_info: EOG token        = 68 '<|assistant_end|>'
print_info: max token length = 150
llama_model_load: vocab only - skipping tensors
time=2025-11-09T05:16:26.443+01:00 level=INFO source=server.go:215 msg="enabling flash attention"
time=2025-11-09T05:16:26.444+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\Chris\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model D:\\Ollama\\models\\blobs\\sha256-391768201f80e7d337e67e2024cf0c4339529bb9d9938ec80f666bb160cf95e1 --port 57211"
time=2025-11-09T05:16:26.446+01:00 level=INFO source=server.go:470 msg="system memory" total="63.9 GiB" free="55.8 GiB" free_swap="71.5 GiB"
time=2025-11-09T05:16:26.447+01:00 level=INFO source=server.go:522 msg=offload library=CUDA layers.requested=89 layers.model=81 layers.offload=81 layers.split="[41 40]" memory.available="[23.6 GiB 22.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="36.0 GiB" memory.required.partial="36.0 GiB" memory.required.kv="156.2 MiB" memory.required.allocations="[18.4 GiB 17.6 GiB]" memory.weights.total="33.5 GiB" memory.weights.repeating="32.7 GiB" memory.weights.nonrepeating="840.0 MiB" memory.graph.full="208.3 MiB" memory.graph.partial="208.3 MiB"
time=2025-11-09T05:16:26.473+01:00 level=INFO source=runner.go:910 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\Chris\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-971b407f-ae20-75ed-99c8-42c696057b0e
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes, ID: GPU-3752f260-9f8c-48e9-780e-12430a037c53
load_backend: loaded CUDA backend from C:\Users\Chris\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-11-09T05:16:26.581+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-11-09T05:16:26.582+01:00 level=INFO source=runner.go:946 msg="Server listening on 127.0.0.1:57211"
time=2025-11-09T05:16:26.585+01:00 level=INFO source=runner.go:845 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:1000 KvCacheType:q8_0 NumThreads:8 GPULayers:81[ID:GPU-971b407f-ae20-75ed-99c8-42c696057b0e Layers:41(0..40) ID:GPU-3752f260-9f8c-48e9-780e-12430a037c53 Layers:40(41..80)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-09T05:16:26.585+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-11-09T05:16:26.585+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_device_get_memory device GPU-971b407f-ae20-75ed-99c8-42c696057b0e utilizing NVML memory reporting free: 25314721792 total: 25757220864
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) (0000:02:00.0) - 24142 MiB free
ggml_backend_cuda_device_get_memory device GPU-3752f260-9f8c-48e9-780e-12430a037c53 utilizing NVML memory reporting free: 24613924864 total: 25769803776
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 3090) (0000:01:00.0) - 23473 MiB free
llama_model_loader: loaded meta data with 43 key-value pairs and 804 tensors from D:\Ollama\models\blobs\sha256-391768201f80e7d337e67e2024cf0c4339529bb9d9938ec80f666bb160cf95e1 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = apertus
llama_model_loader: - kv   1:                              xielu.alpha_n arr[f32,80]      = [7.593750, 6.500000, 4.656250, 4.1250...
llama_model_loader: - kv   2:                              xielu.alpha_p arr[f32,80]      = [2.796875, 11.125000, 7.000000, 5.968...
llama_model_loader: - kv   3:                                 xielu.beta arr[f32,80]      = [0.500000, 0.500000, 0.500000, 0.5000...
llama_model_loader: - kv   4:                                  xielu.eps arr[f32,80]      = [-0.000001, -0.000001, -0.000001, -0....
llama_model_loader: - kv   5:                               general.type str              = model
llama_model_loader: - kv   6:                               general.name str              = Apertus-70B-Instruct-2509
llama_model_loader: - kv   7:                            general.version str              = 2509
llama_model_loader: - kv   8:                           general.finetune str              = Instruct
llama_model_loader: - kv   9:                           general.basename str              = Apertus-70B-Instruct-2509
llama_model_loader: - kv  10:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv  11:                         general.size_label str              = 70B
llama_model_loader: - kv  12:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv  13:                        apertus.block_count u32              = 80
llama_model_loader: - kv  14:                     apertus.context_length u32              = 65536
llama_model_loader: - kv  15:                   apertus.embedding_length u32              = 8192
llama_model_loader: - kv  16:                apertus.feed_forward_length u32              = 43008
llama_model_loader: - kv  17:               apertus.attention.head_count u32              = 64
llama_model_loader: - kv  18:            apertus.attention.head_count_kv u32              = 8
llama_model_loader: - kv  19:                     apertus.rope.freq_base f32              = 12000000.000000
llama_model_loader: - kv  20:   apertus.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  21:                         apertus.vocab_size u32              = 131072
llama_model_loader: - kv  22:               apertus.rope.dimension_count u32              = 128
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = tekken
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "<pad>", "[/...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,269443]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ �...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 68
llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 3
llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  33:               tokenizer.ggml.add_sep_token bool             = false
llama_model_loader: - kv  34:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {# Unsloth template fixes #}\n{%- macr...
llama_model_loader: - kv  36:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  37:               general.quantization_version u32              = 2
llama_model_loader: - kv  38:                          general.file_type u32              = 12
llama_model_loader: - kv  39:                      quantize.imatrix.file str              = Apertus-70B-Instruct-2509-GGUF/imatri...
llama_model_loader: - kv  40:                   quantize.imatrix.dataset str              = unsloth_calibration_Apertus-70B-Instr...
llama_model_loader: - kv  41:             quantize.imatrix.entries_count u32              = 480
llama_model_loader: - kv  42:              quantize.imatrix.chunks_count u32              = 143
llama_model_loader: - type  f32:  322 tensors
llama_model_loader: - type q3_K:  192 tensors
llama_model_loader: - type q4_K:  151 tensors
llama_model_loader: - type q5_K:    5 tensors
llama_model_loader: - type q6_K:   86 tensors
llama_model_loader: - type iq3_xxs:   16 tensors
llama_model_loader: - type iq3_s:    8 tensors
llama_model_loader: - type iq4_xs:   24 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q3_K - Medium
print_info: file size   = 34.10 GiB (4.15 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 31 ('<reponame>')
load:   - 68 ('<|assistant_end|>')
load: special tokens cache size = 1000
load: token to piece cache size = 0.8499 MB
print_info: arch             = apertus
print_info: vocab_only       = 0
print_info: n_ctx_train      = 65536
print_info: n_embd           = 8192
print_info: n_layer          = 80
print_info: n_head           = 64
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 43008
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 12000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 65536
print_info: rope_finetuned   = unknown
print_info: model type       = ?B
print_info: model params     = 70.60 B
print_info: general.name     = Apertus-70B-Instruct-2509
print_info: vocab type       = BPE
print_info: n_vocab          = 131072
print_info: n_merges         = 269443
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 68 '<|assistant_end|>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 3 '<pad>'
print_info: LF token         = 1010 'Ċ'
print_info: FIM REP token    = 31 '<reponame>'
print_info: EOG token        = 31 '<reponame>'
print_info: EOG token        = 68 '<|assistant_end|>'
print_info: max token length = 150
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 80 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 81/81 layers to GPU
load_tensors:        CUDA0 model buffer size = 17337.17 MiB
load_tensors:        CUDA1 model buffer size = 17005.57 MiB
load_tensors:          CPU model buffer size =   576.00 MiB
llama_init_from_model: model default pooling_type is [0], but [-1] was specified
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 1000
llama_context: n_ctx_per_seq = 1000
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = enabled
llama_context: kv_unified    = false
llama_context: freq_base     = 12000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (1000) < n_ctx_train (65536) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     0.53 MiB
llama_kv_cache:      CUDA0 KV buffer size =    87.13 MiB
llama_kv_cache:      CUDA1 KV buffer size =    82.88 MiB
llama_kv_cache: size =  170.00 MiB (  1024 cells,  80 layers,  1/1 seqs), K (q8_0):   85.00 MiB, V (q8_0):   85.00 MiB
llama_context: pipeline parallelism enabled (n_copies=4)
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 13981.04 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 14660182016
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 13508.05 MiB on device 1: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 14164213760
ggml_cuda_host_malloc: failed to allocate 26972.05 MiB of pinned memory: out of memory
graph_reserve: failed to allocate compute buffers
llama_init_from_model: failed to initialize the context: failed to allocate compute pp buffers
panic: unable to create llama context

goroutine 24 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc00030e6e0, {0x51, 0x0, 0x0, {0xc0000ff118, 0x2, 0x2}, 0xc00059e440, 0x0}, {0xc0000a8000, ...}, ...)
        github.com/ollama/ollama/runner/llamarunner/runner.go:799 +0x353
created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 22
        github.com/ollama/ollama/runner/llamarunner/runner.go:879 +0x7ce
time=2025-11-09T05:16:36.922+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server error"
time=2025-11-09T05:16:36.990+01:00 level=ERROR source=server.go:273 msg="llama runner terminated" error="exit status 2"
time=2025-11-09T05:16:37.174+01:00 level=INFO source=sched.go:453 msg="Load failed" model=D:\Ollama\models\blobs\sha256-391768201f80e7d337e67e2024cf0c4339529bb9d9938ec80f666bb160cf95e1 error="llama runner process has terminated: cudaMalloc failed: out of memory"
[GIN] 2025/11/09 - 05:16:37 | 500 |   11.4556928s |    192.168.1.88 | POST     "/api/chat"
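
Reading the failure numbers above together (a rough interpretation, using only values reported in the log): the weights and KV cache fit comfortably on both GPUs, but the compute buffers actually requested after pipeline parallelism is enabled (n_copies=4) are around 14 GB per GPU, far larger than the 208.3 MiB graph estimate in the scheduler log and more than what remains after the weights:

```python
# Rough arithmetic from the log above (all values in MiB, as reported).
free     = {"CUDA0": 24142,    "CUDA1": 23473}     # free VRAM at load time
weights  = {"CUDA0": 17337.17, "CUDA1": 17005.57}  # model buffer sizes
kv_cache = {"CUDA0": 87.13,    "CUDA1": 82.88}     # KV buffer sizes
compute  = {"CUDA0": 13981.04, "CUDA1": 13508.05}  # failed compute-buffer requests

for dev in free:
    left = free[dev] - weights[dev] - kv_cache[dev]
    print(f"{dev}: ~{left:.0f} MiB left after weights + KV cache, "
          f"but a {compute[dev]:.0f} MiB compute buffer was requested")
```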

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.12.10

GiteaMirror added the bug label 2026-04-12 21:22:09 -05:00
Author
Owner

@chrisoutwright commented on GitHub (Nov 9, 2025):

I can run similar-sized models, such as
hf.co/bartowski/huihui-ai_DeepSeek-R1-Distill-Llama-70B-abliterated-GGUF:IQ4_NL,
with contexts up to 20k.

Logs for comparison from a working run of the model above (DeepSeek-R1-Distill-Llama-70B):

time=2025-11-09T05:27:05.009+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\Chris\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 57447"
time=2025-11-09T05:27:05.280+01:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-09T05:27:05.280+01:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=8
llama_model_loader: loaded meta data with 41 key-value pairs and 724 tensors from D:\Ollama\models\blobs\sha256-0a389b67973b09bbc796b1eb4aaad5cea3b8b87f5fdc670d436b2f73ade2bc06 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Llama 70B Abliter...
llama_model_loader: - kv   3:                           general.finetune str              = abliterated
llama_model_loader: - kv   4:                           general.basename str              = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv   5:                         general.size_label str              = 70B
llama_model_loader: - kv   6:                   general.base_model.count u32              = 1
llama_model_loader: - kv   7:                  general.base_model.0.name str              = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv   8:          general.base_model.0.organization str              = Deepseek Ai
llama_model_loader: - kv   9:              general.base_model.0.repo_url str              = https://huggingface.co/deepseek-ai/De...
llama_model_loader: - kv  10:                               general.tags arr[str,2]       = ["abliterated", "uncensored"]
llama_model_loader: - kv  11:                          llama.block_count u32              = 80
llama_model_loader: - kv  12:                       llama.context_length u32              = 131072
llama_model_loader: - kv  13:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv  14:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv  15:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv  16:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  17:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  18:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  19:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  20:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  21:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  22:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  30:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  32:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  34:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  35:               general.quantization_version u32              = 2
llama_model_loader: - kv  36:                          general.file_type u32              = 25
llama_model_loader: - kv  37:                      quantize.imatrix.file str              = /models_out/DeepSeek-R1-Distill-Llama...
llama_model_loader: - kv  38:                   quantize.imatrix.dataset str              = /training_dir/calibration_datav3.txt
llama_model_loader: - kv  39:             quantize.imatrix.entries_count i32              = 560
llama_model_loader: - kv  40:              quantize.imatrix.chunks_count i32              = 125
llama_model_loader: - type  f32:  162 tensors
llama_model_loader: - type q5_K:   80 tensors
llama_model_loader: - type q6_K:    1 tensors
llama_model_loader: - type iq4_nl:  481 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = IQ4_NL - 4.5 bpw
print_info: file size   = 37.30 GiB (4.54 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 128001 ('<|end▁of▁sentence|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 70.55 B
print_info: general.name     = DeepSeek R1 Distill Llama 70B Abliterated
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token        = 128001 '<|end▁of▁sentence|>'
print_info: EOT token        = 128001 '<|end▁of▁sentence|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end▁of▁sentence|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-11-09T05:27:05.608+01:00 level=INFO source=server.go:215 msg="enabling flash attention"
time=2025-11-09T05:27:05.608+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\Chris\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model D:\\Ollama\\models\\blobs\\sha256-0a389b67973b09bbc796b1eb4aaad5cea3b8b87f5fdc670d436b2f73ade2bc06 --port 57451"
time=2025-11-09T05:27:05.610+01:00 level=INFO source=server.go:470 msg="system memory" total="63.9 GiB" free="55.8 GiB" free_swap="71.5 GiB"
time=2025-11-09T05:27:05.612+01:00 level=INFO source=server.go:522 msg=offload library=CUDA layers.requested=89 layers.model=81 layers.offload=77 layers.split="[39 38]" memory.available="[23.6 GiB 22.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="48.6 GiB" memory.required.partial="46.3 GiB" memory.required.kv="3.7 GiB" memory.required.allocations="[23.4 GiB 22.9 GiB]" memory.weights.total="36.7 GiB" memory.weights.repeating="35.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="3.1 GiB" memory.graph.partial="3.1 GiB"
time=2025-11-09T05:27:05.637+01:00 level=INFO source=runner.go:910 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\Chris\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-971b407f-ae20-75ed-99c8-42c696057b0e
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes, ID: GPU-3752f260-9f8c-48e9-780e-12430a037c53
load_backend: loaded CUDA backend from C:\Users\Chris\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-11-09T05:27:05.726+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-11-09T05:27:05.726+01:00 level=INFO source=runner.go:946 msg="Server listening on 127.0.0.1:57451"
time=2025-11-09T05:27:05.729+01:00 level=INFO source=runner.go:845 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:24000 KvCacheType:q8_0 NumThreads:8 GPULayers:81[ID:GPU-971b407f-ae20-75ed-99c8-42c696057b0e Layers:42(0..41) ID:GPU-3752f260-9f8c-48e9-780e-12430a037c53 Layers:39(42..80)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-09T05:27:05.729+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-11-09T05:27:05.729+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_device_get_memory device GPU-971b407f-ae20-75ed-99c8-42c696057b0e utilizing NVML memory reporting free: 25314721792 total: 25757220864
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) (0000:02:00.0) - 24142 MiB free
ggml_backend_cuda_device_get_memory device GPU-3752f260-9f8c-48e9-780e-12430a037c53 utilizing NVML memory reporting free: 24613928960 total: 25769803776
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 3090) (0000:01:00.0) - 23473 MiB free
llama_model_loader: loaded meta data with 41 key-value pairs and 724 tensors from D:\Ollama\models\blobs\sha256-0a389b67973b09bbc796b1eb4aaad5cea3b8b87f5fdc670d436b2f73ade2bc06 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Llama 70B Abliter...
llama_model_loader: - kv   3:                           general.finetune str              = abliterated
llama_model_loader: - kv   4:                           general.basename str              = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv   5:                         general.size_label str              = 70B
llama_model_loader: - kv   6:                   general.base_model.count u32              = 1
llama_model_loader: - kv   7:                  general.base_model.0.name str              = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv   8:          general.base_model.0.organization str              = Deepseek Ai
llama_model_loader: - kv   9:              general.base_model.0.repo_url str              = https://huggingface.co/deepseek-ai/De...
llama_model_loader: - kv  10:                               general.tags arr[str,2]       = ["abliterated", "uncensored"]
llama_model_loader: - kv  11:                          llama.block_count u32              = 80
llama_model_loader: - kv  12:                       llama.context_length u32              = 131072
llama_model_loader: - kv  13:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv  14:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv  15:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv  16:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  17:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  18:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  19:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  20:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  21:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  22:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  30:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  32:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  34:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  35:               general.quantization_version u32              = 2
llama_model_loader: - kv  36:                          general.file_type u32              = 25
llama_model_loader: - kv  37:                      quantize.imatrix.file str              = /models_out/DeepSeek-R1-Distill-Llama...
llama_model_loader: - kv  38:                   quantize.imatrix.dataset str              = /training_dir/calibration_datav3.txt
llama_model_loader: - kv  39:             quantize.imatrix.entries_count i32              = 560
llama_model_loader: - kv  40:              quantize.imatrix.chunks_count i32              = 125
llama_model_loader: - type  f32:  162 tensors
llama_model_loader: - type q5_K:   80 tensors
llama_model_loader: - type q6_K:    1 tensors
llama_model_loader: - type iq4_nl:  481 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = IQ4_NL - 4.5 bpw
print_info: file size   = 37.30 GiB (4.54 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 128001 ('<|end▁of▁sentence|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 8192
print_info: n_layer          = 80
print_info: n_head           = 64
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 28672
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: model type       = 70B
print_info: model params     = 70.55 B
print_info: general.name     = DeepSeek R1 Distill Llama 70B Abliterated
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token        = 128001 '<|end▁of▁sentence|>'
print_info: EOT token        = 128001 '<|end▁of▁sentence|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end▁of▁sentence|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 80 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 81/81 layers to GPU
load_tensors:        CUDA0 model buffer size = 19322.63 MiB
load_tensors:        CUDA1 model buffer size = 18304.36 MiB
load_tensors:          CPU model buffer size =   563.63 MiB
llama_init_from_model: model default pooling_type is [0], but [-1] was specified
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 24000
llama_context: n_ctx_per_seq = 24000
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = enabled
llama_context: kv_unified    = false
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (24000) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     0.52 MiB
llama_kv_cache:      CUDA0 KV buffer size =  2097.38 MiB
llama_kv_cache:      CUDA1 KV buffer size =  1897.63 MiB
llama_kv_cache: size = 3995.00 MiB ( 24064 cells,  80 layers,  1/1 seqs), K (q8_0): 1997.50 MiB, V (q8_0): 1997.50 MiB
llama_context: pipeline parallelism enabled (n_copies=4)
llama_context:      CUDA0 compute buffer size =   475.54 MiB
llama_context:      CUDA1 compute buffer size =   488.55 MiB
llama_context:  CUDA_Host compute buffer size =   204.05 MiB
llama_context: graph nodes  = 2487
llama_context: graph splits = 3
time=2025-11-09T05:27:26.278+01:00 level=INFO source=server.go:1289 msg="llama runner started in 20.67 seconds"
time=2025-11-09T05:27:26.278+01:00 level=INFO source=sched.go:500 msg="loaded runners" count=1
time=2025-11-09T05:27:26.278+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-11-09T05:27:26.278+01:00 level=INFO source=server.go:1289 msg="llama runner started in 20.67 seconds"
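
As a cross-check, the KV-cache figures in the DeepSeek log above follow directly from the model geometry. A minimal sketch in Go, assuming the usual sizing of n_layer × KV cells × (n_head_kv × head_dim) elements per K and per V tensor, with q8_0 stored as 34 bytes per 32-element block (both assumptions, but they reproduce the logged numbers exactly):

```go
package main

import "fmt"

func main() {
	// Values taken from the log above (DeepSeek-R1-Distill-Llama-70B, n_ctx = 24000).
	const (
		nLayer      = 80    // n_layer
		nCtxCells   = 24064 // KV cells actually allocated (padded up from 24000)
		nHeadKV     = 8     // n_head_kv (GQA)
		headDim     = 128   // n_embd_head_k / n_embd_head_v
		q8_0Bytes   = 34.0  // bytes per 32-element q8_0 block
		q8_0PerElem = q8_0Bytes / 32.0
	)

	nEmbdGQA := nHeadKV * headDim // 1024, matches n_embd_k_gqa in the log
	elems := float64(nLayer) * float64(nCtxCells) * float64(nEmbdGQA)
	perCacheMiB := elems * q8_0PerElem / (1024 * 1024)

	fmt.Printf("K cache: %.2f MiB\n", perCacheMiB)   // ~1997.50 MiB, matches the log
	fmt.Printf("K+V:     %.2f MiB\n", 2*perCacheMiB) // ~3995.00 MiB, matches the log
}
```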


@chrisoutwright commented on GitHub (Nov 9, 2025):

My Comparison so far:

| Category | **Apertus (fails)** | **DeepSeek (works)** | **Observation** |
|-----------|---------------------|----------------------|-----------------|
| **Architecture** | `apertus` | `llama` | Custom architecture — not fully supported by Ollama’s CUDA backend, or something off? |
| **Feed-forward size (`n_ff`)** | `43008` | `28672` | Apertus layers are ~50% larger, leading to higher compute memory usage? |
| **RoPE frequency base** | `12,000,000` | `500,000` | Extremely high base frequency in Apertus may cause nonstandard tensor shapes or scaling? |
| **Context length (train)** | `65,536` | `131,072` | DeepSeek supports a longer context but still loads fine — not the root cause. |
| **Pinned host memory allocation** | `~27 GiB` | `~0.2 GiB` | Apertus massively over-allocates pinned memory during graph reservation. |
| **CUDA compute buffer allocation** | `cudaMalloc failed: out of memory` during `graph_reserve` | Successfully allocates (`CUDA0 compute buffer size = ~475.5 MiB`) | Failure occurs before context initialization — the compute graph is never built. |
| **Feed-forward block count** | `80` | `80` | Same number of layers; the issue lies in per-layer memory, not count. |
| **KV buffer sizes** | `~85 MiB per GPU` | `~2 GiB per GPU` | DeepSeek allocates more KV memory yet succeeds — this is not simple VRAM exhaustion. |
| **Backend log output** | `failed to allocate compute pp buffers` | `compute buffer size = ~480 MiB` | Apertus crashes before graph buffer setup completes. |
| **Quantization mix** | `f32, q3_K, q4_K, q5_K, q6_K, iq3_xxs, iq3_s, iq4_xs` (mixed) | Mostly `iq4_nl` (uniform) | Mixed quantization may break CUDA graph reuse or increase duplicate allocations? |
| **Pipeline parallelism** | `enabled (n_copies=4)` | `enabled (n_copies=2)` | Possible over-allocation of compute buffers due to graphs duplicated across GPUs? |

Summary

Apertus triggers a compute graph memory over-allocation bug during graph_reserve, causing massive pinned-memory requests (~27 GiB) and cudaMalloc failures.
DeepSeek works fine with longer context and more KV memory. Is the issue in Apertus’s custom architecture and/or mixed quantization handling within Ollama’s CUDA backend?
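
To put rough numbers on the `n_ff` row above: with the default n_ubatch of 512, one f32 activation of the FFN intermediate is n_ubatch × n_ff × 4 bytes, and pipeline parallelism keeps several copies of some buffers in flight. A back-of-envelope sketch in Go (my own speculation, not a measurement of what ggml actually reserves for Apertus):

```go
package main

import "fmt"

// ffnActivationMiB returns the size of one f32 activation of the FFN
// intermediate for a single micro-batch: n_ubatch x n_ff x 4 bytes.
func ffnActivationMiB(nUbatch, nFF int) float64 {
	return float64(nUbatch) * float64(nFF) * 4 / (1024 * 1024)
}

func main() {
	const (
		nUbatch = 512 // n_ubatch from both logs
		nCopies = 4   // pipeline-parallel copies reported for the Apertus run
	)

	apertus := ffnActivationMiB(nUbatch, 43008)  // n_ff for Apertus
	deepseek := ffnActivationMiB(nUbatch, 28672) // n_ff for DeepSeek

	fmt.Printf("Apertus  FFN activation: %.0f MiB per tensor, %.0f MiB across %d pipeline copies\n",
		apertus, apertus*float64(nCopies), nCopies)
	fmt.Printf("DeepSeek FFN activation: %.0f MiB per tensor\n", deepseek)
	// Even multiplied out, these are hundreds of MiB, not the ~27 GiB of pinned
	// host memory observed, so the pinned-memory request looks like the real anomaly.
}
```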


@rick-github commented on GitHub (Nov 9, 2025):

```
time=2025-11-09T05:16:26.447+01:00 level=INFO source=server.go:522 msg=offload library=CUDA layers.requested=89
 layers.model=81 layers.offload=81 layers.split="[41 40]" memory.available="[23.6 GiB 22.9 GiB]" memory.gpu_overhead="0 B"
 memory.required.full="36.0 GiB" memory.required.partial="36.0 GiB" memory.required.kv="156.2 MiB"
 memory.required.allocations="[18.4 GiB 17.6 GiB]" memory.weights.total="33.5 GiB" memory.weights.repeating="32.7 GiB"
 memory.weights.nonrepeating="840.0 MiB" memory.graph.full="208.3 MiB" memory.graph.partial="208.3 MiB"
```

The Apertus model currently runs on the old engine, which is known to be somewhat inaccurate in its memory estimation. There is [work in progress](https://github.com/ollama/ollama/pull/12607) to run the model in the new engine, which is expected to improve memory estimation. In the meantime, mitigations for OOM can be found [here](https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288).

Note that the current implementation is more expensive in terms of memory per unit of context than other models:

![Image](https://github.com/user-attachments/assets/e6fe450c-107a-4381-9723-4029a708bc5c)
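
Until the new-engine work lands, common OOM mitigations of this kind are to offload fewer layers and/or shrink the context. A minimal sketch against the local Ollama API, assuming the documented `num_gpu` and `num_ctx` request options; the model tag and concrete values are illustrative, not taken from the linked comment:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Request body for Ollama's /api/generate endpoint. num_gpu caps how many
	// layers are offloaded to the GPUs (81 would be everything for this model),
	// and num_ctx caps the context so the KV cache and compute buffers stay small.
	body, _ := json.Marshal(map[string]any{
		"model":  "apertus-70b-instruct:iq4_nl", // illustrative tag
		"prompt": "Hello",
		"options": map[string]any{
			"num_gpu": 80,   // keep one layer on the CPU
			"num_ctx": 8192, // modest context window
		},
	})

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```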
Reference: github-starred/ollama#8627