[GH-ISSUE #7047] Uneven split across GPUs #66529

Open
opened 2026-05-04 07:17:16 -05:00 by GiteaMirror · 6 comments

Originally created by @KMouratidis on GitHub (Sep 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7047

Originally assigned to: @dhiltgen on GitHub.

When loading a model across 2 GPUs, the layers are split evenly, but the GPU memory usage is quite a bit higher on the first GPU:

|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090        On  |   00000000:01:00.0 Off |                  N/A |
| 55%   58C    P0            188W /  275W |   23747MiB /  24576MiB |     27%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce RTX 3090        On  |   00000000:02:00.0 Off |                  N/A |
| 53%   49C    P0            179W /  275W |   22519MiB /  24576MiB |     26%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

This seems to be due to the `CUDA0 compute buffer size` being ~1 GB higher than the CUDA1 equivalent. When using text-generation-webui & llama.cpp I'm able to specify a `50,51` split, which results in the second GPU getting a layer or two more, thus balancing the memory usage and allowing bigger models to run (or more layers to be offloaded). Does this exist? If not, is it possible?
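For context, here is a hypothetical sketch of how a split ratio such as `50,51` can translate into per-GPU layer counts, assuming floor-based cumulative proportions (llama.cpp's exact rounding may differ; `splitLayers` is illustrative, not code from either project):

```
// Hypothetical: map --tensor-split weights to per-GPU layer counts using
// floor-based cumulative proportions. Not actual llama.cpp or Ollama code.
package main

import "fmt"

func splitLayers(nLayers int, weights []float64) []int {
	var total float64
	for _, w := range weights {
		total += w
	}
	counts := make([]int, len(weights))
	acc, assigned := 0.0, 0
	for i, w := range weights {
		acc += w
		// Cumulative floor keeps the overall total equal to nLayers.
		upto := int(float64(nLayers) * acc / total)
		counts[i] = upto - assigned
		assigned = upto
	}
	return counts
}

func main() {
	fmt.Println(splitLayers(80, []float64{35, 35})) // [40 40] -- even split
	fmt.Println(splitLayers(80, []float64{50, 51})) // [39 41] -- GPU1 gets two more
}
```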

Server log
Sep 30 15:00:25 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:25.780Z level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=78 layers.model=81 layers.offload=70 layers.split=35,35 memory.available="[23.3 GiB 23.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="52.4 GiB" memory.required.partial="45.9 GiB" memory.required.kv="5.0 GiB" memory.required.allocations="[23.0 GiB 22.9 GiB]" memory.weights.total="44.3 GiB" memory.weights.repeating="43.3 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="2.6 GiB" memory.graph.partial="2.6 GiB"
Sep 30 15:00:25 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:25.783Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-c9ff230988a3c90f5beec5da2ebbd8b77d953389b587cb7398c6abd671b7562f --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 78 --flash-attn --parallel 1 --tensor-split 35,35 --port 37883"
Sep 30 15:00:25 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:25.783Z level=INFO source=sched.go:449 msg="loaded runners" count=1
Sep 30 15:00:25 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:25.783Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
Sep 30 15:00:25 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:25.783Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
Sep 30 15:00:25 dying-love 6f929011afab[1234]: INFO [main] build info | build=10 commit="3f6ec33" tid="139699938238464" timestamp=1727708425
Sep 30 15:00:25 dying-love 6f929011afab[1234]: INFO [main] system info | n_threads=16 n_threads_batch=16 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139699938238464" timestamp=1727708425 total_threads=32
Sep 30 15:00:25 dying-love 6f929011afab[1234]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="37883" tid="139699938238464" timestamp=1727708425
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: loaded meta data with 35 key-value pairs and 963 tensors from /root/.ollama/models/blobs/sha256-c9ff230988a3c90f5beec5da2ebbd8b77d953389b587cb7398c6abd671b7562f (version GGUF V3 (latest))
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv   0:                       general.architecture str              = qwen2
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv   1:                               general.type str              = model
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 72B Instruct
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv   5:                         general.size_label str              = 72B
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv   6:                            general.license str              = other
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv   7:                       general.license.name str              = qwen
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv   8:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-7...
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv   9:                   general.base_model.count u32              = 1
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  10:                  general.base_model.0.name str              = Qwen2.5 72B
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  11:          general.base_model.0.organization str              = Qwen
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  12:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-72B
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  13:                               general.tags arr[str,2]       = ["chat", "text-generation"]
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  14:                          general.languages arr[str,1]       = ["en"]
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  15:                          qwen2.block_count u32              = 80
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  16:                       qwen2.context_length u32              = 32768
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  17:                     qwen2.embedding_length u32              = 8192
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  18:                  qwen2.feed_forward_length u32              = 29568
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  19:                 qwen2.attention.head_count u32              = 64
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  20:              qwen2.attention.head_count_kv u32              = 8
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  21:                       qwen2.rope.freq_base f32              = 1000000.000000
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  22:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  23:                          general.file_type u32              = 14
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  24:                       tokenizer.ggml.model str              = gpt2
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  25:                         tokenizer.ggml.pre str              = qwen2
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  26:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  27:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  28:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 151645
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  30:            tokenizer.ggml.padding_token_id u32              = 151643
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  31:                tokenizer.ggml.bos_token_id u32              = 151643
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = false
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv  34:               general.quantization_version u32              = 2
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - type  f32:  401 tensors
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - type q5_0:   70 tensors
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - type q5_1:   10 tensors
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - type q4_K:  401 tensors
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - type q5_K:   80 tensors
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - type q6_K:    1 tensors
Sep 30 15:00:26 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:26.034Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_vocab: special tokens cache size = 22
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_vocab: token to piece cache size = 0.9310 MB
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: format           = GGUF V3 (latest)
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: arch             = qwen2
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: vocab type       = BPE
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_vocab          = 152064
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_merges         = 151387
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: vocab_only       = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_ctx_train      = 32768
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd           = 8192
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_layer          = 80
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_head           = 64
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_head_kv        = 8
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_rot            = 128
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_swa            = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_head_k    = 128
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_head_v    = 128
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_gqa            = 8
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_k_gqa     = 1024
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_v_gqa     = 1024
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_ff             = 29568
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_expert         = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_expert_used    = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: causal attn      = 1
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: pooling type     = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: rope type        = 2
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: rope scaling     = linear
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: freq_base_train  = 1000000.0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: freq_scale_train = 1
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_ctx_orig_yarn  = 32768
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: rope_finetuned   = unknown
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_d_conv       = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_d_inner      = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_d_state      = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_dt_rank      = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: model type       = 70B
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: model ftype      = Q4_K - Small
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: model params     = 72.71 B
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: model size       = 40.87 GiB (4.83 BPW)
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: general.name     = Qwen2.5 72B Instruct
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: LF token         = 148848 'ÄĬ'
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: max token length = 256
Sep 30 15:00:26 dying-love 6f929011afab[1234]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Sep 30 15:00:26 dying-love 6f929011afab[1234]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Sep 30 15:00:26 dying-love 6f929011afab[1234]: ggml_cuda_init: found 2 CUDA devices:
Sep 30 15:00:26 dying-love 6f929011afab[1234]:   Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Sep 30 15:00:26 dying-love 6f929011afab[1234]:   Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_tensors: ggml ctx size =    1.27 MiB
Sep 30 15:00:27 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:27.490Z level=INFO source=server.go:621 msg="waiting for server to become available" s>
Sep 30 15:00:29 dying-love 6f929011afab[1234]: llm_load_tensors: offloading 78 repeating layers to GPU
Sep 30 15:00:29 dying-love 6f929011afab[1234]: llm_load_tensors: offloaded 78/81 layers to GPU
Sep 30 15:00:29 dying-love 6f929011afab[1234]: llm_load_tensors:        CPU buffer size = 41850.31 MiB
Sep 30 15:00:29 dying-love 6f929011afab[1234]: llm_load_tensors:      CUDA0 buffer size = 19646.28 MiB
Sep 30 15:00:29 dying-love 6f929011afab[1234]: llm_load_tensors:      CUDA1 buffer size = 19530.78 MiB
Sep 30 15:00:29 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:29.545Z level=INFO source=server.go:621 msg="waiting for server to become available" s>
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: n_ctx      = 16384
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: n_batch    = 512
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: n_ubatch   = 512
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: flash_attn = 1
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: freq_base  = 1000000.0
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: freq_scale = 1
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_kv_cache_init:  CUDA_Host KV buffer size =   128.00 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_kv_cache_init:      CUDA0 KV buffer size =  2496.00 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_kv_cache_init:      CUDA1 KV buffer size =  2496.00 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: KV self size  = 5120.00 MiB, K (f16): 2560.00 MiB, V (f16): 2560.00 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model:  CUDA_Host  output buffer size =     0.61 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model:      CUDA0 compute buffer size =  1287.53 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model:      CUDA1 compute buffer size =   163.50 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model:  CUDA_Host compute buffer size =    48.01 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: graph nodes  = 2487
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: graph splits = 33
GiteaMirror added the bug and memory labels 2026-05-04 07:17:16 -05:00

@dhiltgen commented on GitHub (Sep 30, 2024):

You're correct that GPU0 gets a set of allocations that are singletons. The intent of our algorithm is to maximize the number of layers taking that into consideration, but from what you're describing it sounds like we may be able to squeeze one or two more layers onto the second GPU. Could you share the log line from just a little before what you shared, showing our memory prediction? `offload to cuda`...
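A minimal sketch of the idea in this comment (not Ollama's actual memory.go logic): estimate per-GPU layer capacity while charging the singleton allocations to GPU 0. All sizes are rough values read off the log above, not real constants:

```
// Rough per-GPU layer capacity, charging singleton buffers to GPU 0 only.
// Sizes approximated from the log: ~554 MiB weights + ~71 MiB KV per layer,
// and a 1287.53 - 163.50 MiB compute-buffer gap between CUDA0 and CUDA1.
package main

import "fmt"

const mib = uint64(1 << 20)

// layerCapacity returns how many layers fit in free bytes after
// reserving overhead bytes for per-GPU singleton allocations.
func layerCapacity(free, layerSize, overhead uint64) int {
	if free <= overhead {
		return 0
	}
	return int((free - overhead) / layerSize)
}

func main() {
	free := 23859 * mib    // ~23.3 GiB available per 3090
	layer := 625 * mib     // approximate weights + KV per offloaded layer
	overhead := 1124 * mib // CUDA0 vs CUDA1 compute buffer gap
	fmt.Println("GPU0:", layerCapacity(free, layer, overhead)) // 36 layers
	fmt.Println("GPU1:", layerCapacity(free, layer, 0))        // 38 layers
	// GPU1 can hold a layer or two more than GPU0, so an even 35,35
	// tensor split leaves headroom on the second card.
}
```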


@KMouratidis commented on GitHub (Sep 30, 2024):

Sure! I've edited the logs in my previous comment. And yes, using this with llama3.1-70B (q4_k_m, I think), I was able to fit the final layer only with llama.cpp.

Since posting this, I discovered yet another use for this feature (and potentially a bug?): when a model can fully fit on the first GPU it gets allocated there, but then it might end up crashing with an OOM due to context size, even though there is plenty of room left on the second GPU. Here are two examples where being able to define a split across GPUs could help:

phi3.5-3.8b-mini-instruct-q8 @ 128K context fails
Sep 30 17:43:03 dying-love 6f929011afab[1234]: [GIN] 2024/09/30 - 17:43:03 | 200 |    1.542621ms |    192.168.1.55 | GET      "/api/tags"
Sep 30 17:43:06 dying-love 6f929011afab[1234]: [GIN] 2024/09/30 - 17:43:06 | 200 |      32.356µs |    192.168.1.55 | GET      "/api/version"
Sep 30 17:43:13 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:13.531Z level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-43c5178f-58cc-bc56-98c7-a070fe538d85 library=cuda total="23.6 GiB" available="524.8 MiB"
Sep 30 17:43:13 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:13.531Z level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-69a26f7a-5e2a-4349-b872-1b672c7fc845 library=cuda total="23.6 GiB" available="23.3 GiB"
Sep 30 17:43:15 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:15.223Z level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-43c5178f-58cc-bc56-98c7-a070fe538d85 library=cuda total="23.6 GiB" available="23.3 GiB"
Sep 30 17:43:15 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:15.223Z level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-69a26f7a-5e2a-4349-b872-1b672c7fc845 library=cuda total="23.6 GiB" available="23.3 GiB"
Sep 30 17:43:15 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:15.643Z level=INFO source=server.go:103 msg="system memory" total="124.9 GiB" free="122.1 GiB" free_swap="0 B"
Sep 30 17:43:15 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:15.643Z level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=32 layers.model=33 layers.offload=16 layers.split=8,8 memory.available="[23.3 GiB 23.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="71.8 GiB" memory.required.partial="45.9 GiB" memory.required.kv="48.0 GiB" memory.required.allocations="[23.0 GiB 23.0 GiB]" memory.weights.total="51.6 GiB" memory.weights.repeating="51.5 GiB" memory.weights.nonrepeating="99.8 MiB" memory.graph.full="8.0 GiB" memory.graph.partial="8.0 GiB"
Sep 30 17:43:15 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:15.645Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-81fd0b829a6ff15b3df3cc792de3f3e75a9fb2f1f55c0059af55a698b1173150 --ctx-size 131072 --batch-size 512 --embedding --log-disable --n-gpu-layers 32 --flash-attn --parallel 1 --tensor-split 8,8 --port 45151"
Sep 30 17:43:15 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:15.647Z level=INFO source=sched.go:449 msg="loaded runners" count=1
Sep 30 17:43:15 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:15.647Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
Sep 30 17:43:15 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:15.647Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
Sep 30 17:43:15 dying-love 6f929011afab[1234]: INFO [main] build info | build=10 commit="3f6ec33" tid="139813476728832" timestamp=1727718195
Sep 30 17:43:15 dying-love 6f929011afab[1234]: INFO [main] system info | n_threads=16 n_threads_batch=16 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139813476728832" timestamp=1727718195 total_threads=32
Sep 30 17:43:15 dying-love 6f929011afab[1234]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="45151" tid="139813476728832" timestamp=1727718195
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: loaded meta data with 36 key-value pairs and 197 tensors from /root/.ollama/models/blobs/sha256-81fd0b829a6ff15b3df3cc792de3f3e75a9fb2f1f55c0059af55a698b1173150 (version GGUF V3 (latest))
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv   0:                       general.architecture str              = phi3
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv   1:                               general.type str              = model
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv   2:                               general.name str              = Phi 3.5 Mini Instruct
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv   3:                           general.finetune str              = instruct
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv   4:                           general.basename str              = Phi-3.5
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv   5:                         general.size_label str              = mini
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv   6:                            general.license str              = mit
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/microsoft/Phi-...
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv   8:                               general.tags arr[str,3]       = ["nlp", "code", "text-generation"]
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv   9:                          general.languages arr[str,1]       = ["multilingual"]
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  10:                        phi3.context_length u32              = 131072
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  11:  phi3.rope.scaling.original_context_length u32              = 4096
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  12:                      phi3.embedding_length u32              = 3072
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  13:                   phi3.feed_forward_length u32              = 8192
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  14:                           phi3.block_count u32              = 32
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  15:                  phi3.attention.head_count u32              = 32
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  16:               phi3.attention.head_count_kv u32              = 32
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  17:      phi3.attention.layer_norm_rms_epsilon f32              = 0.000010
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  18:                  phi3.rope.dimension_count u32              = 96
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  19:                        phi3.rope.freq_base f32              = 10000.000000
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  20:                          general.file_type u32              = 7
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  21:              phi3.attention.sliding_window u32              = 262144
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  22:              phi3.rope.scaling.attn_factor f32              = 1.190238
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = llama
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = default
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,32064]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  26:                      tokenizer.ggml.scores arr[f32,32064]   = [-1000.000000, -1000.000000, -1000.00...
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  27:                  tokenizer.ggml.token_type arr[i32,32064]   = [3, 3, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 1
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 32000
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 32000
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = false
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  33:               tokenizer.ggml.add_eos_token bool             = false
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  34:                    tokenizer.chat_template str              = {% for message in messages %}{% if me...
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - kv  35:               general.quantization_version u32              = 2
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - type  f32:   67 tensors
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llama_model_loader: - type q8_0:  130 tensors
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_vocab: special tokens cache size = 14
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_vocab: token to piece cache size = 0.1685 MB
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: format           = GGUF V3 (latest)
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: arch             = phi3
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: vocab type       = SPM
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_vocab          = 32064
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_merges         = 0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: vocab_only       = 0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_ctx_train      = 131072
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd           = 3072
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_layer          = 32
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_head           = 32
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_head_kv        = 32
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_rot            = 96
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_swa            = 262144
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_head_k    = 96
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_head_v    = 96
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_gqa            = 1
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_k_gqa     = 3072
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_v_gqa     = 3072
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_ff             = 8192
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_expert         = 0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_expert_used    = 0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: causal attn      = 1
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: pooling type     = 0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: rope type        = 2
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: rope scaling     = linear
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: freq_base_train  = 10000.0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: freq_scale_train = 1
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: n_ctx_orig_yarn  = 4096
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: rope_finetuned   = unknown
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_d_conv       = 0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_d_inner      = 0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_d_state      = 0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_dt_rank      = 0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: model type       = 3B
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: model ftype      = Q8_0
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: model params     = 3.82 B
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: model size       = 3.78 GiB (8.50 BPW)
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: general.name     = Phi 3.5 Mini Instruct
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: BOS token        = 1 '<s>'
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: EOS token        = 32000 '<|endoftext|>'
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: UNK token        = 0 '<unk>'
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: PAD token        = 32000 '<|endoftext|>'
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: LF token         = 13 '<0x0A>'
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: EOT token        = 32007 '<|end|>'
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_print_meta: max token length = 48
Sep 30 17:43:15 dying-love 6f929011afab[1234]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Sep 30 17:43:15 dying-love 6f929011afab[1234]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Sep 30 17:43:15 dying-love 6f929011afab[1234]: ggml_cuda_init: found 2 CUDA devices:
Sep 30 17:43:15 dying-love 6f929011afab[1234]:   Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Sep 30 17:43:15 dying-love 6f929011afab[1234]:   Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Sep 30 17:43:15 dying-love 6f929011afab[1234]: llm_load_tensors: ggml ctx size =    0.31 MiB
Sep 30 17:43:15 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:15.898Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llm_load_tensors: offloading 32 repeating layers to GPU
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llm_load_tensors: offloaded 32/33 layers to GPU
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llm_load_tensors:        CPU buffer size =  3872.38 MiB
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llm_load_tensors:      CUDA0 buffer size =  1836.38 MiB
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llm_load_tensors:      CUDA1 buffer size =  1836.38 MiB
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llama_new_context_with_model: n_ctx      = 131072
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llama_new_context_with_model: n_batch    = 512
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llama_new_context_with_model: n_ubatch   = 512
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llama_new_context_with_model: flash_attn = 1
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llama_new_context_with_model: freq_base  = 10000.0
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llama_new_context_with_model: freq_scale = 1
Sep 30 17:43:17 dying-love 6f929011afab[1234]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 24576.00 MiB on device 0: cudaMalloc failed: out of memory
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llama_kv_cache_init: failed to allocate buffer for kv cache
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llama_new_context_with_model: llama_kv_cache_init() failed for self-attention cache
Sep 30 17:43:17 dying-love 6f929011afab[1234]: llama_init_from_gpt_params: error: failed to create context with model '/root/.ollama/models/blobs/sha256-81fd0b829a6ff15b3df3cc792de3f3e75a9fb2f1f55c0059af55a698b1173150'
Sep 30 17:43:18 dying-love 6f929011afab[1234]: ERROR [load_model] unable to load model | model="/root/.ollama/models/blobs/sha256-81fd0b829a6ff15b3df3cc792de3f3e75a9fb2f1f55c0059af55a698b1173150" tid="139813476728832" timestamp=1727718198
Sep 30 17:43:18 dying-love 6f929011afab[1234]: terminate called without an active exception
Sep 30 17:43:18 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:18.607Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
Sep 30 17:43:18 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:18.858Z level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: error:failed to create context with model '/root/.ollama/models/blobs/sha256-81fd0b829a6ff15b3df3cc792de3f3e75a9fb2f1f55c0059af55a698b1173150'"
Sep 30 17:43:18 dying-love 6f929011afab[1234]: [GIN] 2024/09/30 - 17:43:18 | 500 |  5.721814198s |    192.168.1.55 | POST     "/api/chat"
Sep 30 17:43:24 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:24.127Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.269005881 model=/root/.ollama/models/blobs/sha256-81fd0b829a6ff15b3df3cc792de3f3e75a9fb2f1f55c0059af55a698b1173150
Sep 30 17:43:24 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:24.442Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.584453607 model=/root/.ollama/models/blobs/sha256-81fd0b829a6ff15b3df3cc792de3f3e75a9fb2f1f55c0059af55a698b1173150
Sep 30 17:43:24 dying-love 6f929011afab[1234]: time=2024-09-30T17:43:24.763Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.904900211 model=/root/.ollama/models/blobs/sha256-81fd0b829a6ff15b3df3cc792de3f3e75a9fb2f1f55c0059af55a698b1173150
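As a back-of-envelope check of the failing load above, a small sketch computing the f16 KV-cache size that a 131072-token context needs for this model (constants are read off the log; the program is illustrative, not Ollama's actual accounting):

```
// Rough f16 KV-cache size for the failing phi3.5 load at 131072 context.
package main

import "fmt"

func main() {
	const (
		nLayer   = 32     // phi3.block_count
		nCtx     = 131072 // --ctx-size
		nEmbdKV  = 3072   // n_embd_k_gqa / n_embd_v_gqa (32 KV heads, no GQA saving)
		f16Bytes = 2
	)
	// K and V each hold nLayer * nCtx * nEmbdKV elements.
	kvBytes := int64(2) * nLayer * nCtx * nEmbdKV * f16Bytes
	fmt.Printf("KV cache: %.1f GiB\n", float64(kvBytes)/(1<<30))
	// Prints 48.0 GiB, matching memory.required.kv in the log. Half of the
	// 32 offloaded layers sit on device 0, so its share is 24 GiB -- the
	// 24576.00 MiB cudaMalloc that fails on the 24 GiB card.
}
```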
phi3-14b-medium-instruct-q8 @ 128K context succeeds!
Sep 30 17:45:46 dying-love 6f929011afab[1234]: time=2024-09-30T17:45:46.875Z level=INFO source=server.go:103 msg="system memory" total="124.9 GiB" free="122.1 GiB" free_swap="0 B"
Sep 30 17:45:46 dying-love 6f929011afab[1234]: time=2024-09-30T17:45:46.875Z level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=40 layers.model=41 layers.offload=10 layers.split=5,5 memory.available="[23.3 GiB 23.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="74.8 GiB" memory.required.partial="45.8 GiB" memory.required.kv="25.0 GiB" memory.required.allocations="[22.9 GiB 22.9 GiB]" memory.weights.total="38.5 GiB" memory.weights.repeating="38.3 GiB" memory.weights.nonrepeating="166.4 MiB" memory.graph.full="16.7 GiB" memory.graph.partial="16.7 GiB"
Sep 30 17:45:46 dying-love 6f929011afab[1234]: time=2024-09-30T17:45:46.877Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-f0eb4f9afc3b0a295aa719a627ef8e221a378174b0c467b8daf239ddb3f58e15 --ctx-size 131072 --batch-size 512 --embedding --log-disable --n-gpu-layers 40 --flash-attn --parallel 1 --tensor-split 5,5 --port 37277"
Sep 30 17:45:46 dying-love 6f929011afab[1234]: time=2024-09-30T17:45:46.878Z level=INFO source=sched.go:449 msg="loaded runners" count=1
Sep 30 17:45:46 dying-love 6f929011afab[1234]: time=2024-09-30T17:45:46.878Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
Sep 30 17:45:46 dying-love 6f929011afab[1234]: time=2024-09-30T17:45:46.878Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
Sep 30 17:45:46 dying-love 6f929011afab[1234]: INFO [main] build info | build=10 commit="3f6ec33" tid="139677525848064" timestamp=1727718346
Sep 30 17:45:46 dying-love 6f929011afab[1234]: INFO [main] system info | n_threads=16 n_threads_batch=16 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139677525848064" timestamp=1727718346 total_threads=32
Sep 30 17:45:46 dying-love 6f929011afab[1234]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="37277" tid="139677525848064" timestamp=1727718346
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: loaded meta data with 36 key-value pairs and 245 tensors from /root/.ollama/models/blobs/sha256-f0eb4f9afc3b0a295aa719a627ef8e221a378174b0c467b8daf239ddb3f58e15 (version GGUF V3 (latest))
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv   0:                       general.architecture str              = phi3
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv   1:                               general.type str              = model
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv   2:                               general.name str              = Phi 3 Medium 128k Instruct
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv   3:                           general.finetune str              = 128k-instruct
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv   4:                           general.basename str              = Phi-3
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv   5:                         general.size_label str              = medium
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv   6:                            general.license str              = mit
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/microsoft/Phi-...
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv   8:                               general.tags arr[str,3]       = ["nlp", "code", "text-generation"]
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv   9:                          general.languages arr[str,1]       = ["multilingual"]
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  10:                        phi3.context_length u32              = 131072
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  11:  phi3.rope.scaling.original_context_length u32              = 4096
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  12:                      phi3.embedding_length u32              = 5120
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  13:                   phi3.feed_forward_length u32              = 17920
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  14:                           phi3.block_count u32              = 40
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  15:                  phi3.attention.head_count u32              = 40
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  16:               phi3.attention.head_count_kv u32              = 10
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  17:      phi3.attention.layer_norm_rms_epsilon f32              = 0.000010
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  18:                  phi3.rope.dimension_count u32              = 128
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  19:                        phi3.rope.freq_base f32              = 10000.000000
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  20:                          general.file_type u32              = 7
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  21:              phi3.attention.sliding_window u32              = 131072
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  22:              phi3.rope.scaling.attn_factor f32              = 1.190238
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = llama
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = default
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,32064]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  26:                      tokenizer.ggml.scores arr[f32,32064]   = [-1000.000000, -1000.000000, -1000.00...
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  27:                  tokenizer.ggml.token_type arr[i32,32064]   = [3, 3, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 1
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 32000
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 32000
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = false
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  33:               tokenizer.ggml.add_eos_token bool             = false
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  34:                    tokenizer.chat_template str              = {% for message in messages %}{% if (m...
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - kv  35:               general.quantization_version u32              = 2
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - type  f32:   83 tensors
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llama_model_loader: - type q8_0:  162 tensors
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_vocab: special tokens cache size = 14
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_vocab: token to piece cache size = 0.1685 MB
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: format           = GGUF V3 (latest)
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: arch             = phi3
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: vocab type       = SPM
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_vocab          = 32064
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_merges         = 0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: vocab_only       = 0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_ctx_train      = 131072
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd           = 5120
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_layer          = 40
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_head           = 40
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_head_kv        = 10
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_rot            = 128
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_swa            = 131072
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_head_k    = 128
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_head_v    = 128
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_gqa            = 4
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_k_gqa     = 1280
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_v_gqa     = 1280
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_ff             = 17920
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_expert         = 0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_expert_used    = 0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: causal attn      = 1
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: pooling type     = 0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: rope type        = 2
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: rope scaling     = linear
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: freq_base_train  = 10000.0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: freq_scale_train = 1
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: n_ctx_orig_yarn  = 4096
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: rope_finetuned   = unknown
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_d_conv       = 0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_d_inner      = 0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_d_state      = 0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_dt_rank      = 0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: model type       = 14B
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: model ftype      = Q8_0
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: model params     = 13.96 B
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: model size       = 13.82 GiB (8.50 BPW)
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: general.name     = Phi 3 Medium 128k Instruct
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: BOS token        = 1 '<s>'
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: EOS token        = 32000 '<|endoftext|>'
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: UNK token        = 0 '<unk>'
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: PAD token        = 32000 '<|endoftext|>'
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: LF token         = 13 '<0x0A>'
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: EOT token        = 32007 '<|end|>'
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_print_meta: max token length = 48
Sep 30 17:45:46 dying-love 6f929011afab[1234]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Sep 30 17:45:46 dying-love 6f929011afab[1234]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Sep 30 17:45:46 dying-love 6f929011afab[1234]: ggml_cuda_init: found 2 CUDA devices:
Sep 30 17:45:46 dying-love 6f929011afab[1234]:   Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Sep 30 17:45:46 dying-love 6f929011afab[1234]:   Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Sep 30 17:45:46 dying-love 6f929011afab[1234]: llm_load_tensors: ggml ctx size =    0.39 MiB
Sep 30 17:45:47 dying-love 6f929011afab[1234]: time=2024-09-30T17:45:47.129Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Sep 30 17:45:52 dying-love 6f929011afab[1234]: llm_load_tensors: offloading 40 repeating layers to GPU
Sep 30 17:45:52 dying-love 6f929011afab[1234]: llm_load_tensors: offloaded 40/41 layers to GPU
Sep 30 17:45:52 dying-love 6f929011afab[1234]: llm_load_tensors:        CPU buffer size = 14146.78 MiB
Sep 30 17:45:52 dying-love 6f929011afab[1234]: llm_load_tensors:      CUDA0 buffer size =  6907.04 MiB
Sep 30 17:45:52 dying-love 6f929011afab[1234]: llm_load_tensors:      CUDA1 buffer size =  6907.04 MiB
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model: n_ctx      = 131072
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model: n_batch    = 512
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model: n_ubatch   = 512
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model: flash_attn = 1
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model: freq_base  = 10000.0
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model: freq_scale = 1
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_kv_cache_init:      CUDA0 KV buffer size = 12800.00 MiB
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_kv_cache_init:      CUDA1 KV buffer size = 12800.00 MiB
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model: KV self size  = 25600.00 MiB, K (f16): 12800.00 MiB, V (f16): 12800.00 MiB
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model:  CUDA_Host  output buffer size =     0.14 MiB
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model:      CUDA0 compute buffer size =   429.00 MiB
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model:      CUDA1 compute buffer size =   298.00 MiB
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model:  CUDA_Host compute buffer size =   266.01 MiB
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model: graph nodes  = 1447
Sep 30 17:45:54 dying-love 6f929011afab[1234]: llama_new_context_with_model: graph splits = 5
Sep 30 17:45:54 dying-love 6f929011afab[1234]: INFO [main] model loaded | tid="139677525848064" timestamp=1727718354
Sep 30 17:45:54 dying-love 6f929011afab[1234]: time=2024-09-30T17:45:54.655Z level=INFO source=server.go:626 msg="llama runner started in 7.78 seconds"

@dhiltgen commented on GitHub (Oct 17, 2024):

In the case of crashing, until we fix the memory prediction logic, you can leverage OLLAMA_GPU_OVERHEAD as a workaround to set aside some VRAM so we allocate fewer layers.
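
A minimal sketch of that workaround, assuming `OLLAMA_GPU_OVERHEAD` takes a byte count reserved per GPU (consistent with the `memory.gpu_overhead="0 B"` field in the offload logs above); the reservation size is a hypothetical starting point to tune, not a recommended value:

```
# Reserve ~1.5 GiB of VRAM per GPU (1.5 GiB = 1610612736 bytes) so the
# scheduler offloads fewer layers, leaving headroom for the KV cache
# and compute buffers. Hypothetical value; tune it to your setup.
OLLAMA_GPU_OVERHEAD=1610612736 ollama serve

# Or, for a Docker deployment like the one in the logs above, pass it
# as an environment variable to the container:
docker run -d --gpus=all -e OLLAMA_GPU_OVERHEAD=1610612736 \
  -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```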


@chrisoutwright commented on GitHub (Sep 28, 2025):

After a while, I noticed a very uneven split with llama-3.3-nemotron-super-v1.5-q4km:49b:

![Image](https://github.com/user-attachments/assets/c3c91f6b-d2ff-42f0-be66-501d681c25c7)
![Image](https://github.com/user-attachments/assets/ca48660a-2f8a-41a6-8aae-8c4a518d776e)

logs:

llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) - 22994 MiB free
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23306 MiB free
llama_model_loader: loaded meta data with 36 key-value pairs and 569 tensors from D:\Ollama\models\blobs\sha256-af76b98a909c86431d499ce2ec7d9bd9108f1719d6ab27809ca7494ac5b156f6 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deci
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama_Nemotron_Super_V1_5
llama_model_loader: - kv   3:                           general.finetune str              = 3_3-Nemotron-Super-v1_5
llama_model_loader: - kv   4:                           general.basename str              = Llama
llama_model_loader: - kv   5:                         general.size_label str              = 49B
llama_model_loader: - kv   6:                            general.license str              = other
llama_model_loader: - kv   7:                       general.license.name str              = nvidia-open-model-license
llama_model_loader: - kv   8:                       general.license.link str              = https://www.nvidia.com/en-us/agreemen...
llama_model_loader: - kv   9:                               general.tags arr[str,4]       = ["nvidia", "llama-3", "pytorch", "tex...
llama_model_loader: - kv  10:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  11:                        deci.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  12:               deci.attention.head_count_kv arr[i32,80]      = [8, 8, 8, 8, 8, 8, 0, 0, 8, 8, 8, 0, ...
llama_model_loader: - kv  13:                  deci.attention.head_count arr[i32,80]      = [64, 64, 64, 64, 64, 64, 0, 0, 64, 64...
llama_model_loader: - kv  14:                   deci.feed_forward_length arr[i32,80]      = [14336, 28672, 28672, 28672, 28672, 2...
llama_model_loader: - kv  15:                           deci.block_count u32              = 80
llama_model_loader: - kv  16:                        deci.context_length u32              = 131072
llama_model_loader: - kv  17:                      deci.embedding_length u32              = 8192
llama_model_loader: - kv  18:      deci.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  19:                  deci.attention.key_length u32              = 128
llama_model_loader: - kv  20:                deci.attention.value_length u32              = 128
llama_model_loader: - kv  21:                            deci.vocab_size u32              = 128256
llama_model_loader: - kv  22:                  deci.rope.dimension_count u32              = 128
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  30:            tokenizer.ggml.padding_token_id u32              = 128009
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  32:               tokenizer.ggml.add_sep_token bool             = false
llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {% set bos = "<|begin_of_text|>" %}{%...
llama_model_loader: - kv  34:               general.quantization_version u32              = 2
llama_model_loader: - kv  35:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  131 tensors
llama_model_loader: - type q4_K:  348 tensors
llama_model_loader: - type q5_K:   24 tensors
llama_model_loader: - type q6_K:   66 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 28.13 GiB (4.85 BPW)
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = deci
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 8192
print_info: n_layer          = 80
print_info: n_head           = [64, 64, 64, 64, 64, 64, 0, 0, 64, 64, 64, 0, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 64, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 64, 64, 64, 64, 64, 64, 64, 64, 64]
print_info: n_head_kv        = [8, 8, 8, 8, 8, 8, 0, 0, 8, 8, 8, 0, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 8, 8, 8, 8, 8, 8, 8, 8]
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = [8, 8, 8, 8, 8, 8, 0, 0, 8, 8, 8, 0, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 8, 8, 8, 8, 8, 8, 8, 8]
print_info: n_embd_k_gqa     = [1024, 1024, 1024, 1024, 1024, 1024, 0, 0, 1024, 1024, 1024, 0, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1024, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024]
print_info: n_embd_v_gqa     = [1024, 1024, 1024, 1024, 1024, 1024, 0, 0, 1024, 1024, 1024, 0, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1024, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024]
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = [14336, 28672, 28672, 28672, 28672, 28672, 14336, 14336, 28672, 28672, 28672, 17920, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 7168, 14336, 14336, 7168, 28672, 7168, 14336, 7168, 7168, 7168, 28672, 7168, 5632, 5632, 7168, 5632, 5632, 5632, 7168, 7168, 2816, 2816, 5632, 5632, 2816, 2816, 5632, 2816, 2816, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672, 28672]
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: model type       = 70B
print_info: model params     = 49.87 B
print_info: general.name     = Llama_Nemotron_Super_V1_5
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128009 '<|eot_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 80 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 81/81 layers to GPU
load_tensors:        CUDA0 model buffer size = 18993.09 MiB
load_tensors:        CUDA1 model buffer size =  9251.61 MiB
load_tensors:          CPU model buffer size =   563.62 MiB
[GIN] 2025/09/28 - 04:10:25 | 200 |     10.3678ms |    192.168.1.88 | GET      "/api/tags"
[GIN] 2025/09/28 - 04:10:25 | 200 |            0s |    192.168.1.88 | GET      "/api/ps"
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 20072
llama_context: n_ctx_per_seq = 20072
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 1
llama_context: kv_unified    = false
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (20072) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     0.52 MiB
llama_kv_cache_unified:      CUDA0 KV buffer size =  1594.81 MiB
llama_kv_cache_unified:      CUDA1 KV buffer size =   461.66 MiB
llama_kv_cache_unified: size = 2056.47 MiB ( 20224 cells,  80 layers,  1/1 seqs), K (q8_0): 1028.23 MiB, V (q8_0): 1028.23 MiB
llama_context: pipeline parallelism enabled (n_copies=4)
llama_context:      CUDA0 compute buffer size =   457.79 MiB
llama_context:      CUDA1 compute buffer size =   409.55 MiB
llama_context:  CUDA_Host compute buffer size =   174.05 MiB
llama_context: graph nodes  = 1743
llama_context: graph splits = 3
time=2025-09-28T04:10:38.771+02:00 level=INFO source=server.go:1289 msg="llama runner started in 15.67 seconds"
time=2025-09-28T04:10:38.771+02:00 level=INFO source=sched.go:470 msg="loaded runners" count=1
time=2025-09-28T04:10:38.772+02:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-09-28T04:10:38.772+02:00 level=INFO source=server.go:1289 msg="llama runner started in 15.67 seconds"
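Summing the per-GPU allocations in the log above makes the imbalance concrete: CUDA0 holds 18993.09 MiB (model) + 1594.81 MiB (KV) + 457.79 MiB (compute) ≈ 20.6 GiB, while CUDA1 holds 9251.61 + 461.66 + 409.55 ≈ 9.9 GiB. That is roughly a 2:1 split even though both cards report ~23 GiB free.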

Can something be done about this to split the model more evenly? I cannot increase the context size because of this uneven split.
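For comparison, llama.cpp exposes this control directly, and device order can be flipped at the driver level. A minimal sketch, assuming a two-GPU machine; the model path and layer count are placeholders:

```
# llama.cpp's server accepts an explicit per-GPU ratio, so the second
# GPU can be given the larger share:
llama-server -m ./model.gguf -ngl 81 --tensor-split 45,55

# Ollama has no equivalent knob as of this thread; one workaround is to
# reorder the devices so that CUDA0 (which receives the larger buffers)
# maps to the GPU with more free VRAM:
CUDA_VISIBLE_DEVICES=1,0 ollama serve
```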


@jessegross commented on GitHub (Sep 29, 2025):

@chrisoutwright Models that run in the Ollama engine have better memory allocation, especially with multi-GPU setups. However, the model you are trying to use is not currently supported in the Ollama engine. You could try another model, such as gpt-oss.
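For example, something like the following could verify the suggestion; the gpt-oss tag is assumed to be available in the Ollama library:

```
# Run a model that uses the new Ollama engine, then check how the
# scheduler placed it across GPUs:
ollama run gpt-oss:20b
ollama ps
```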

@gordan-bobic commented on GitHub (Nov 19, 2025):

Here is a patch that implements a manual tensor-split override; tested against 0.12.11.
https://github.com/shatteredsilicon/ollama-noavx/blob/master/rpmbuild/SOURCES/support-overriding-tensor-split.patch
The patch also includes test cases to help ensure the feature doesn't accidentally rot.
Documentation details are still to be discussed.

A colleague of mine is currently setting up a PR to upstream this.
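As an illustration only: if the override lands as an environment variable, usage might look like the sketch below. The variable name `OLLAMA_TENSOR_SPLIT` is hypothetical here; the real interface is whatever the patch and the upcoming PR define.

```
# Hypothetical interface, for illustration only; the actual variable
# name and value format are defined by the patch:
OLLAMA_TENSOR_SPLIT="45,55" ollama serve
```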
