[GH-ISSUE #14365] Poor heterogeneous GPU VRAM usage with large models. #35094

Closed
opened 2026-04-22 19:18:26 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @cobrafast on GitHub (Feb 22, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14365

What is the issue?

I'm running an RTX 4080 (CUDA0) and an RTX 3050 6 GB (CUDA1), and Ollama seems very reluctant to offload anything to the 3050, especially with larger models.

As you can see in the log, there are some 3.5 GiB available on CUDA1, but they go completely unused.
Even CUDA0 has some 3.7 GiB left unused after everything is loaded and running.

(Setting `num_gpu` doesn't help, because it seems to consider CUDA0 exclusively and fails to load the llama runner once CUDA0 no longer has enough memory to hold all of the requested layers. It's a shame that no override for the split across GPUs is provided.)
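(For reference, a minimal sketch of the usual ways `num_gpu` is applied; the model tag and layer count below are placeholders, not values taken from this report.)

```shell
# Placeholder model tag and layer count, for illustration only.
# Per session, in the interactive REPL:
ollama run some-large-model
>>> /set parameter num_gpu 13

# Or persisted in a Modelfile:
#   FROM some-large-model
#   PARAMETER num_gpu 13
ollama create some-large-model-gpu13 -f Modelfile
```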

Is this expected and correct behavior or something that can be improved?

Relevant log output

time=2026-02-23T00:27:33.592+01:00 level=INFO source=sched.go:491 msg="system memory" total="95.6 GiB" free="63.6 GiB" free_swap="70.5 GiB"
time=2026-02-23T00:27:33.592+01:00 level=INFO source=sched.go:498 msg="gpu memory" id=GPU-3392b891-9899-c4e1-5fff-f56fe0c463c5 library=CUDA available="14.7 GiB" free="15.1 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-23T00:27:33.592+01:00 level=INFO source=sched.go:498 msg="gpu memory" id=GPU-fb84db4c-9e7b-edf5-503a-f36b56145c4e library=CUDA available="3.0 GiB" free="3.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-23T00:27:33.592+01:00 level=INFO source=server.go:498 msg="loading model" "model layers"=89 requested=-1
time=2026-02-23T00:27:33.593+01:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="8.9 GiB"
time=2026-02-23T00:27:33.593+01:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="51.8 GiB"
time=2026-02-23T00:27:33.593+01:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="936.0 MiB"
time=2026-02-23T00:27:33.593+01:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="5.3 GiB"
time=2026-02-23T00:27:33.593+01:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="3.6 GiB"
time=2026-02-23T00:27:33.593+01:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="3.6 GiB"
time=2026-02-23T00:27:33.593+01:00 level=INFO source=device.go:272 msg="total memory" size="74.1 GiB"
time=2026-02-23T00:27:33.626+01:00 level=INFO source=runner.go:965 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\Cobra_Fast\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4080, compute capability 8.9, VMM: yes, ID: GPU-3392b891-9899-c4e1-5fff-f56fe0c463c5
  Device 1: NVIDIA GeForce RTX 3050, compute capability 8.6, VMM: yes, ID: GPU-fb84db4c-9e7b-edf5-503a-f36b56145c4e
load_backend: loaded CUDA backend from C:\Users\Cobra_Fast\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2026-02-23T00:27:33.730+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-02-23T00:27:33.731+01:00 level=INFO source=runner.go:1001 msg="Server listening on 127.0.0.1:63989"
time=2026-02-23T00:27:33.732+01:00 level=INFO source=runner.go:895 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Auto KvSize:18432 KvCacheType: NumThreads:16 GPULayers:13[ID:GPU-3392b891-9899-c4e1-5fff-f56fe0c463c5 Layers:13(75..87) ] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-23T00:27:33.732+01:00 level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-02-23T00:27:33.733+01:00 level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_device_get_memory device GPU-3392b891-9899-c4e1-5fff-f56fe0c463c5 utilizing NVML memory reporting free: 16215449600 total: 17171480576
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4080) (0000:01:00.0) - 15464 MiB free
ggml_backend_cuda_device_get_memory device GPU-fb84db4c-9e7b-edf5-503a-f36b56145c4e utilizing NVML memory reporting free: 3685982208 total: 6442450944
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 3050) (0000:03:00.0) - 3515 MiB free
llama_model_loader: loaded meta data with 45 key-value pairs and 795 tensors from F:\ai\models\llama\ollama\models\blobs\sha256-e05140f0ce4f662ca5a4c0b4638d0265ca176d08ad8beebb46e3fccf84d8a25b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Behemoth R1 123B v2
llama_model_loader: - kv   3:                            general.version str              = v2
llama_model_loader: - kv   4:                           general.basename str              = Behemoth-R1
llama_model_loader: - kv   5:                         general.size_label str              = 123B
llama_model_loader: - kv   6:                   general.base_model.count u32              = 1
llama_model_loader: - kv   7:                  general.base_model.0.name str              = Mistral Large Instruct 2411
llama_model_loader: - kv   8:               general.base_model.0.version str              = 2411
llama_model_loader: - kv   9:          general.base_model.0.organization str              = Mistralai
llama_model_loader: - kv  10:              general.base_model.0.repo_url str              = https://huggingface.co/mistralai/Mist...
llama_model_loader: - kv  11:                          llama.block_count u32              = 88
llama_model_loader: - kv  12:                       llama.context_length u32              = 131072
llama_model_loader: - kv  13:                     llama.embedding_length u32              = 12288
llama_model_loader: - kv  14:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv  15:                 llama.attention.head_count u32              = 96
llama_model_loader: - kv  16:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  17:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  18:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  19:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  20:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  21:                           llama.vocab_size u32              = 32768
llama_model_loader: - kv  22:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,32768]   = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv  26:                      tokenizer.ggml.scores arr[f32,32768]   = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  27:                  tokenizer.ggml.token_type arr[i32,32768]   = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  32:               tokenizer.ggml.add_sep_token bool             = false
llama_model_loader: - kv  33:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  34:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  35:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  36:               general.quantization_version u32              = 2
llama_model_loader: - kv  37:                          general.file_type u32              = 30
llama_model_loader: - kv  38:                      quantize.imatrix.file str              = /models_out/Behemoth-R1-123B-v2-GGUF/...
llama_model_loader: - kv  39:                   quantize.imatrix.dataset str              = /training_dir/calibration_datav5.txt
llama_model_loader: - kv  40:             quantize.imatrix.entries_count u32              = 616
llama_model_loader: - kv  41:              quantize.imatrix.chunks_count u32              = 942
llama_model_loader: - kv  42:                                   split.no u16              = 0
llama_model_loader: - kv  43:                        split.tensors.count i32              = 795
llama_model_loader: - kv  44:                                split.count u16              = 0
llama_model_loader: - type  f32:  177 tensors
llama_model_loader: - type q5_K:   88 tensors
llama_model_loader: - type q6_K:    1 tensors
llama_model_loader: - type iq4_xs:  529 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = IQ4_XS - 4.25 bpw
print_info: file size   = 60.94 GiB (4.27 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 2 ('</s>')
load: special tokens cache size = 771
load: token to piece cache size = 0.1732 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: no_alloc         = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 12288
print_info: n_embd_inp       = 12288
print_info: n_layer          = 88
print_info: n_head           = 96
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 12
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 28672
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: n_expert_groups  = 0
print_info: n_group_used     = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_yarn_log_mul= 0.0000
print_info: rope_finetuned   = unknown
print_info: model type       = ?B
print_info: model params     = 122.61 B
print_info: general.name     = Behemoth R1 123B v2
print_info: vocab type       = SPM
print_info: n_vocab          = 32768
print_info: n_merges         = 0
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: LF token         = 781 '<0x0A>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = false)
ggml_cuda_host_malloc: failed to allocate 53056.45 MiB of pinned memory: out of memory
load_tensors: offloading 13 repeating layers to GPU
load_tensors: offloaded 13/89 layers to GPU
load_tensors:          CPU model buffer size =   204.00 MiB
load_tensors:        CUDA0 model buffer size =  9141.84 MiB
load_tensors:          CPU model buffer size = 53056.45 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 18432
llama_context: n_ctx_seq     = 18432
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = auto
llama_context: kv_unified    = false
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_seq (18432) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     0.17 MiB
llama_kv_cache:        CPU KV buffer size =  5400.00 MiB
llama_kv_cache:      CUDA0 KV buffer size =   936.00 MiB
llama_kv_cache: size = 6336.00 MiB ( 18432 cells,  88 layers,  1/1 seqs), K (f16): 3168.00 MiB, V (f16): 3168.00 MiB
llama_context: Flash Attention was auto, set to enabled
llama_context:      CUDA0 compute buffer size =   429.00 MiB
llama_context:  CUDA_Host compute buffer size =    60.01 MiB
llama_context: graph nodes  = 2735
llama_context: graph splits = 829 (with bs=512), 3 (with bs=1)
time=2026-02-23T00:28:03.798+01:00 level=INFO source=server.go:1388 msg="llama runner started in 30.21 seconds"
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 591.74                 Driver Version: 591.74         CUDA Version: 13.1     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4080      WDDM  |   00000000:01:00.0  On |                  N/A |
|  0%   35C    P2             27W /  355W |   11460MiB /  16376MiB |      3%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce RTX 3050      WDDM  |   00000000:03:00.0  On |                  N/A |
|  0%   54C    P3             20W /   46W |    2408MiB /   6144MiB |     21%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.16.3

GiteaMirror added the bug label 2026-04-22 19:18:26 -05:00

@rick-github commented on GitHub (Feb 23, 2026):

A device must have a minimum amount of space available to be used for inference. That minimum must be large enough to hold the compute graph, at least one layer, and some ancillary data structures.

time=2026-02-23T00:27:33.592+01:00 level=INFO source=sched.go:498 msg="gpu memory"
 id=GPU-fb84db4c-9e7b-edf5-503a-f36b56145c4e library=CUDA available="3.0 GiB" free="3.4 GiB"
 minimum="457.0 MiB" overhead="0 B"

time=2026-02-23T00:27:33.593+01:00 level=INFO source=device.go:262 msg="compute graph"
 device=CUDA1 size="3.6 GiB"

The 3050 has 3.0 GiB available, but the compute graph alone needs 3.6 GiB, so the device cannot be used to host any layers.

The memory footprint of the model can be reduced by using a smaller context, or enabling flash attention (on by default for this model) and [setting KV cache quantization](https://github.com/ollama/ollama/blob/main/docs/faq.mdx#how-can-i-set-the-quantization-type-for-the-kv-cache).
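A rough sketch of how those knobs are typically set (not part of the original comment; the values are illustrative, and it assumes the standard `OLLAMA_FLASH_ATTENTION` / `OLLAMA_KV_CACHE_TYPE` environment variables, which are read by the server at startup):

```shell
# Illustrative values only; set before starting the server.
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE=q8_0
ollama serve

# A smaller context window also shrinks the KV cache and compute graph:
ollama run some-large-model
>>> /set parameter num_ctx 8192
```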


@cobrafast commented on GitHub (Feb 23, 2026):

Thank you; in that case it seems this is expected behavior.

Reference: github-starred/ollama#35094