[GH-ISSUE #10835] Large Model Not Splitting Between SXM Cards #7116

Closed
opened 2026-04-12 19:07:00 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @sempervictus on GitHub (May 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10835

What is the issue?

Loading `command-a` on a 4x32 SXM2 V100 setup somehow fails to recognize the overall available capacity of VRAM and loads everything into host memory. The same Ollama versions load `command-r` fine.

Relevant log output

time=2025-05-23T15:08:22.645Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-05-23T15:08:22.645Z level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:45491"
llama_model_loader: loaded meta data with 36 key-value pairs and 514 tensors from /root/.ollama/models/blobs/sha256-ffd0081a97182da52ef3c58dcafde851cbd436ce82f71fc5ed9973828bf78a8f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = cohere2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = A Model
llama_model_loader: - kv   3:                         general.size_label str              = 111B
llama_model_loader: - kv   4:                        cohere2.block_count u32              = 64
llama_model_loader: - kv   5:                     cohere2.context_length u32              = 16384
llama_model_loader: - kv   6:                   cohere2.embedding_length u32              = 12288
llama_model_loader: - kv   7:                cohere2.feed_forward_length u32              = 36864
llama_model_loader: - kv   8:               cohere2.attention.head_count u32              = 96
llama_model_loader: - kv   9:            cohere2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:                     cohere2.rope.freq_base f32              = 50000.000000
llama_model_loader: - kv  11:       cohere2.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  12:               cohere2.attention.key_length u32              = 128
llama_model_loader: - kv  13:             cohere2.attention.value_length u32              = 128
llama_model_loader: - kv  14:                        cohere2.logit_scale f32              = 0.250000
llama_model_loader: - kv  15:           cohere2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  16:                         cohere2.vocab_size u32              = 256000
llama_model_loader: - kv  17:               cohere2.rope.dimension_count u32              = 128
llama_model_loader: - kv  18:                  cohere2.rope.scaling.type str              = none
llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = command-r
llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,256000]  = ["<PAD>", "<UNK>", "<CLS>", "<SEP>", ...
llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, ...
llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,253333]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ a...
llama_model_loader: - kv  24:                tokenizer.ggml.bos_token_id u32              = 5
llama_model_loader: - kv  25:                tokenizer.ggml.eos_token_id u32              = 255001
llama_model_loader: - kv  26:            tokenizer.ggml.unknown_token_id u32              = 1
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  28:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  29:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  30:           tokenizer.chat_template.tool_use str              = {%- macro document_turn(documents) -%...
llama_model_loader: - kv  31:                tokenizer.chat_template.rag str              = {% set tools = [] %}\n{%- macro docume...
llama_model_loader: - kv  32:                   tokenizer.chat_templates arr[str,2]       = ["tool_use", "rag"]
llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {% if documents %}\n{% set tools = [] ...
llama_model_loader: - kv  34:               general.quantization_version u32              = 2
llama_model_loader: - kv  35:                          general.file_type u32              = 15
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  384 tensors
llama_model_loader: - type q6_K:   65 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 62.51 GiB (4.84 BPW) 
time=2025-05-23T15:08:22.837Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 41
load: token to piece cache size = 1.8428 MB
print_info: arch             = cohere2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 16384
print_info: n_embd           = 12288
print_info: n_layer          = 64
print_info: n_head           = 96
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 4096
print_info: n_swa_pattern    = 4
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 12
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 1.0e-05
print_info: f_norm_rms_eps   = 0.0e+00
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 2.5e-01
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 36864
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = none
print_info: freq_base_train  = 50000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 16384
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = ?B
print_info: model params     = 111.06 B
print_info: general.name     = A Model
print_info: vocab type       = BPE
print_info: n_vocab          = 256000
print_info: n_merges         = 253333
print_info: BOS token        = 5 '<BOS_TOKEN>'
print_info: EOS token        = 255001 '<|END_OF_TURN_TOKEN|>'
print_info: UNK token        = 1 '<UNK>'
print_info: PAD token        = 0 '<PAD>'
print_info: LF token         = 206 'Ċ'
print_info: FIM PAD token    = 0 '<PAD>'
print_info: EOG token        = 0 '<PAD>'
print_info: EOG token        = 255001 '<|END_OF_TURN_TOKEN|>'
print_info: max token length = 1024
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors:          CPU model buffer size = 64014.98 MiB
time=2025-05-23T15:09:00.386Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 131072
llama_context: n_ctx_per_seq = 131072
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 50000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (131072) > n_ctx_train (16384) -- possible training context overflow
llama_context:        CPU  output buffer size =     1.02 MiB
llama_kv_cache_unified: kv_size = 131072, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1, padding = 32
time=2025-05-23T15:09:15.966Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_kv_cache_unified:        CPU KV buffer size = 32768.00 MiB
llama_kv_cache_unified: KV self size  = 32768.00 MiB, K (f16): 16384.00 MiB, V (f16): 16384.00 MiB
llama_context:        CPU compute buffer size = 25184.01 MiB
llama_context: graph nodes  = 2024
llama_context: graph splits = 1
time=2025-05-23T15:09:26.493Z level=INFO source=server.go:630 msg="llama runner started in 63.91 seconds"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.7.1-rc2

GiteaMirror added the bug label 2026-04-12 19:07:00 -05:00

@rick-github commented on GitHub (May 23, 2025):

Need a full log, particularly the memory estimation logic.

@sempervictus commented on GitHub (May 23, 2025):

Roger, all the preceding lines - did this on a clean start:

time=2025-05-23T15:07:34.828Z level=INFO source=types.go:130 msg="inference compute" id=GPU-W library=cuda variant=v12 compute=7.0 driver=12.8 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-05-23T15:07:34.828Z level=INFO source=types.go:130 msg="inference compute" id=GPU-X library=cuda variant=v12 compute=7.0 driver=12.8 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-05-23T15:07:34.828Z level=INFO source=types.go:130 msg="inference compute" id=GPU-Y library=cuda variant=v12 compute=7.0 driver=12.8 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-05-23T15:07:34.828Z level=INFO source=types.go:130 msg="inference compute" id=GPU-Z library=cuda variant=v12 compute=7.0 driver=12.8 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
[GIN] 2025/05/23 - 15:08:13 | 200 |     638.537µs |     172.18.0.19 | GET      "/api/tags"
[GIN] 2025/05/23 - 15:08:15 | 200 |      82.343µs |     172.18.0.19 | GET      "/api/version"
time=2025-05-23T15:08:21.951Z level=INFO source=server.go:135 msg="system memory" total="754.6 GiB" free="730.3 GiB" free_swap="0 B"
time=2025-05-23T15:08:21.951Z level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=0 layers.split="" memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="94.5 GiB" memory.required.partial="0 B" memory.required.kv="32.0 GiB" memory.required.allocations="[0 B 0 B 0 B 0 B]" memory.weights.total="62.5 GiB" memory.weights.repeating="60.1 GiB" memory.weights.nonrepeating="2.4 GiB" memory.graph.full="64.0 GiB" memory.graph.partial="64.0 GiB"
llama_model_loader: loaded meta data with 36 key-value pairs and 514 tensors from /root/.ollama/models/blobs/sha256-ffd0081a97182da52ef3c58dcafde851cbd436ce82f71fc5ed9973828bf78a8f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = cohere2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = A Model
llama_model_loader: - kv   3:                         general.size_label str              = 111B
llama_model_loader: - kv   4:                        cohere2.block_count u32              = 64
llama_model_loader: - kv   5:                     cohere2.context_length u32              = 16384
llama_model_loader: - kv   6:                   cohere2.embedding_length u32              = 12288
llama_model_loader: - kv   7:                cohere2.feed_forward_length u32              = 36864
llama_model_loader: - kv   8:               cohere2.attention.head_count u32              = 96
llama_model_loader: - kv   9:            cohere2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:                     cohere2.rope.freq_base f32              = 50000.000000
llama_model_loader: - kv  11:       cohere2.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  12:               cohere2.attention.key_length u32              = 128
llama_model_loader: - kv  13:             cohere2.attention.value_length u32              = 128
llama_model_loader: - kv  14:                        cohere2.logit_scale f32              = 0.250000
llama_model_loader: - kv  15:           cohere2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  16:                         cohere2.vocab_size u32              = 256000
llama_model_loader: - kv  17:               cohere2.rope.dimension_count u32              = 128
llama_model_loader: - kv  18:                  cohere2.rope.scaling.type str              = none
llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = command-r
llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,256000]  = ["<PAD>", "<UNK>", "<CLS>", "<SEP>", ...
llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, ...
llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,253333]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ a...
llama_model_loader: - kv  24:                tokenizer.ggml.bos_token_id u32              = 5
llama_model_loader: - kv  25:                tokenizer.ggml.eos_token_id u32              = 255001
llama_model_loader: - kv  26:            tokenizer.ggml.unknown_token_id u32              = 1
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  28:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  29:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  30:           tokenizer.chat_template.tool_use str              = {%- macro document_turn(documents) -%...
llama_model_loader: - kv  31:                tokenizer.chat_template.rag str              = {% set tools = [] %}\n{%- macro docume...
llama_model_loader: - kv  32:                   tokenizer.chat_templates arr[str,2]       = ["tool_use", "rag"]
llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {% if documents %}\n{% set tools = [] ...
llama_model_loader: - kv  34:               general.quantization_version u32              = 2
llama_model_loader: - kv  35:                          general.file_type u32              = 15
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  384 tensors
llama_model_loader: - type q6_K:   65 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 62.51 GiB (4.84 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 41
load: token to piece cache size = 1.8428 MB
print_info: arch             = cohere2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 111.06 B
print_info: general.name     = A Model
print_info: vocab type       = BPE
print_info: n_vocab          = 256000
print_info: n_merges         = 253333
print_info: BOS token        = 5 '<BOS_TOKEN>'
print_info: EOS token        = 255001 '<|END_OF_TURN_TOKEN|>'
print_info: UNK token        = 1 '<UNK>'
print_info: PAD token        = 0 '<PAD>'
print_info: LF token         = 206 'Ċ'
print_info: FIM PAD token    = 0 '<PAD>'
print_info: EOG token        = 0 '<PAD>'
print_info: EOG token        = 255001 '<|END_OF_TURN_TOKEN|>'
print_info: max token length = 1024
llama_model_load: vocab only - skipping tensors
time=2025-05-23T15:08:22.585Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-ffd0081a97182da52ef3c58dcafde851cbd436ce82f71fc5ed9973828bf78a8f --ctx-size 131072 --batch-size 512 --threads 40 --no-mmap --parallel 1 --multiuser-cache --port 45491"
time=2025-05-23T15:08:22.585Z level=INFO source=sched.go:472 msg="loaded runners" count=1
time=2025-05-23T15:08:22.585Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-05-23T15:08:22.586Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-23T15:08:22.600Z level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-skylakex.so
time=2025-05-23T15:08:22.645Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-05-23T15:08:22.645Z level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:45491"

@rick-github commented on GitHub (May 23, 2025):

time=2025-05-23T15:08:21.951Z level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=65
 layers.offload=0 layers.split="" memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="0 B"
 memory.required.full="94.5 GiB" memory.required.partial="0 B" memory.required.kv="32.0 GiB"
 memory.required.allocations="[0 B 0 B 0 B 0 B]" memory.weights.total="62.5 GiB" memory.weights.repeating="60.1 GiB"
 memory.weights.nonrepeating="2.4 GiB" memory.graph.full="64.0 GiB" memory.graph.partial="64.0 GiB"

Context size of 131072 tokens has increased the size of the memory graph (64G) to where it will not fit on a device (31.4G). As a result, the model is run in system RAM.
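To make those numbers concrete, here is a rough reading of the offload line above (assuming, as described in this comment, that the full compute graph has to fit on a single device for any layers to be offloaded):

```
weights (Q4_K_M)              62.5 GiB   (memory.weights.total)
KV cache (f16, 131072 ctx)    32.0 GiB   (memory.required.kv)
                              --------
full load requirement         94.5 GiB   (memory.required.full)

compute graph alone           64.0 GiB   (memory.graph.full / .partial)
VRAM available per GPU        31.4 GiB   (each of the 4 V100s)
```

Since the 64.0 GiB graph alone exceeds the 31.4 GiB available on any single GPU, the estimator reports layers.offload=0 and the whole model stays in system RAM.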

@sempervictus commented on GitHub (May 23, 2025):

@rick-github - meaning that ollama cannot actually utilize multiple GPUs to load different layers of a single model?

@rick-github commented on GitHub (May 23, 2025):

ollama can utilize multiple GPUs to load different layers of a single model, but there are constraints on resources. If a GPU cannot fit the memory graph, the KV cache and at least one layer on a GPU, the model is run in system RAM. If you want to use GPUs for this model, you need to reduce its VRAM footprint. There are only two knobs available for tweaking the size of the graph - context and batch sizes (`num_ctx` and `num_batch`). Reducing either of these will reduce required resources. Other things to try are [flash attention](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-enable-flash-attention) and [K/V cache quantization](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-set-the-quantization-type-for-the-kv-cache).
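A minimal sketch of applying those two knobs through a Modelfile (the values are only illustrative, and the `command-a` tag is assumed):

```
# Sketch only: shrink context and batch so the graph estimate can fit in per-GPU VRAM.
cat > Modelfile.small <<'EOF'
FROM command-a
PARAMETER num_ctx 16384
PARAMETER num_batch 256
EOF
ollama create command-a-small -f Modelfile.small
ollama run command-a-small
```

The same `num_ctx` and `num_batch` options can also be passed per request via the API `options` field instead of baking them into a derived model.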

@sempervictus commented on GitHub (May 23, 2025):

Thank you - digging into those adjustments

@sempervictus commented on GitHub (May 23, 2025):

Unfortunately it seems this doesn't help much with V100s. I halved the ctx size, dropped the KV to q4, and tried both with flash-attention enabled and disabled... to unfortunately still get:

time=2025-05-23T16:31:31.581Z level=INFO source=server.go:135 msg="system memory" total="754.6 GiB" free="731.2 GiB" free_swap="0 B"
time=2025-05-23T16:31:31.581Z level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=0 layers.split="" memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="78.5 GiB" memory.required.partial="0 B" memory.required.kv="16.0 GiB" memory.required.allocations="[0 B 0 B 0 B 0 B]" memory.weights.total="62.5 GiB" memory.weights.repeating="60.1 GiB" memory.weights.nonrepeating="2.4 GiB" memory.graph.full="32.0 GiB" memory.graph.partial="32.0 GiB"
time=2025-05-23T16:31:31.581Z level=WARN source=server.go:222 msg="quantized kv cache requested but flash attention disabled" type=q4_0

even at 32k context size and q4 KV.

Is there any way to break up the context memory between GPUs to localize with the layers of model being placed there (reaching here, clearly 😄)?

@rick-github commented on GitHub (May 23, 2025):

The problem is the graph, just that alone is more than the available VRAM. Perversely, the graph is smaller if it can fit on one device - you would think that spreading the model over multiple devices would make it easier, but it in fact uses more VRAM than just on a single device. AFAIK, there's no way to partition it further to make it fit, other than lowering the values already discussed. Flash attention would help but it looks like it's either not supported for the model or not supported on the hardware.

@sempervictus commented on GitHub (May 23, 2025):

AFAIK flash attention comes in v8 and volta is v7 :-\. Thank you for the explanation. Logs looked like flash attention is needed for KV cache quantization as well; is that correct?

@rick-github commented on GitHub (May 23, 2025):

> Logs looked like flash attention is needed for KV cache quantization as well; is that correct?

Correct.
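For completeness, a minimal sketch of setting the two together on the server side (this only helps where flash attention is actually supported, which per the discussion above may not be the case on Volta):

```
# Sketch: the q4_0 KV cache type is only honored when flash attention is enabled,
# matching the WARN line "quantized kv cache requested but flash attention disabled".
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q4_0 ollama serve
```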

@sempervictus commented on GitHub (May 27, 2025):

@rick-github: #10859 may offer a solution

Reference: github-starred/ollama#7116