[GH-ISSUE #9311] llama3.2-vision really slow when already in VRAM - high load duration #6076

Closed
opened 2026-04-12 17:24:18 -05:00 by GiteaMirror · 7 comments

Originally created by @ribbles on GitHub (Feb 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9311

What is the issue?

When chatting with llama3.2-vision, subsequent requests are not as fast as I would expect, given that the model is already 100% loaded into VRAM.

I don't know if it's relevant, but "load_duration" is really high on calls made after the model is already loaded, when I would expect it to be zero.
I also see the logs reporting 8 cores, but the GPU has 3072 cores - no idea if that's relevant.

First request:

{'total_duration': 57086256700, 'load_duration': 54109228500, 'prompt_eval_count': 383, 'prompt_eval_duration': 15000000, 'eval_count': 65, 'eval_duration': 2958000000}

Second request:

{'total_duration': 91264439700, 'load_duration': 87957377600, 'prompt_eval_count': 469, 'prompt_eval_duration': 234000000, 'eval_count': 64, 'eval_duration': 3063000000}
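
Not part of the original report: a quick conversion sketch of the metrics above. Ollama reports these duration fields in nanoseconds, so dividing by 1e9 shows that load_duration dominates total_duration in both requests:

```python
# Durations copied from the two responses above (nanoseconds).
requests = {
    "first":  {"total_duration": 57086256700, "load_duration": 54109228500},
    "second": {"total_duration": 91264439700, "load_duration": 87957377600},
}

for name, r in requests.items():
    total_s = r["total_duration"] / 1e9
    load_s = r["load_duration"] / 1e9
    # Even on the second request, almost all of the time is spent "loading".
    print(f"{name}: total={total_s:.1f}s load={load_s:.1f}s "
          f"({load_s / total_s:.0%} of the request)")
```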

Relevant log output

2025/02/23 22:16:30 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\User\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-23T22:16:30.279-08:00 level=INFO source=images.go:432 msg="total blobs: 19"
time=2025-02-23T22:16:30.286-08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-23T22:16:30.293-08:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11434 (version 0.5.11)"
time=2025-02-23T22:16:30.294-08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-23T22:16:30.294-08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2025-02-23T22:16:30.294-08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-02-23T22:16:30.294-08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=8 efficiency=0 threads=16
time=2025-02-23T22:16:30.723-08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-538d5233-15ee-c83a-d8e6-e7881e199a8b library=cuda compute=5.2 driver=12.7 name="Tesla M40 24GB" overhead="716.4 MiB"
time=2025-02-23T22:16:30.729-08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-538d5233-15ee-c83a-d8e6-e7881e199a8b library=cuda variant=v11 compute=5.2 driver=12.7 name="Tesla M40 24GB" total="22.5 GiB" available="21.6 GiB"
[GIN] 2025/02/23 - 22:16:32 | 200 |       538.1µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/23 - 22:21:20 | 200 |       157.8µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/23 - 22:21:20 | 200 |     75.5925ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/02/23 - 22:21:29 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/23 - 22:21:29 | 200 |    125.2651ms |       127.0.0.1 | POST     "/api/show"
time=2025-02-23T22:21:29.557-08:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
time=2025-02-23T22:21:29.758-08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\User\.ollama\models\blobs\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 gpu=GPU-538d5233-15ee-c83a-d8e6-e7881e199a8b parallel=1 available=23197925376 required="11.3 GiB"
time=2025-02-23T22:21:29.798-08:00 level=INFO source=server.go:100 msg="system memory" total="63.9 GiB" free="58.8 GiB" free_swap="68.0 GiB"
time=2025-02-23T22:21:29.810-08:00 level=INFO source=memory.go:356 msg="offload to cuda" projector.weights="1.8 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[21.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.3 GiB" memory.required.partial="11.3 GiB" memory.required.kv="656.2 MiB" memory.required.allocations="[11.3 GiB]" memory.weights.total="5.5 GiB" memory.weights.repeating="5.1 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="669.5 MiB"
time=2025-02-23T22:21:29.840-08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\\Users\\User\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\User\\.ollama\\models\\blobs\\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 --ctx-size 2048 --batch-size 512 --n-gpu-layers 41 --mmproj C:\\Users\\User\\.ollama\\models\\blobs\\sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f --threads 16 --no-mmap --parallel 1 --port 58093"
time=2025-02-23T22:21:29.851-08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-23T22:21:29.851-08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-02-23T22:21:29.853-08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-02-23T22:21:29.946-08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-02-23T22:21:29.948-08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(clang)" threads=16
time=2025-02-23T22:21:29.950-08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:58093"
time=2025-02-23T22:21:30.107-08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Tesla M40 24GB, compute capability 5.2, VMM: yes
load_backend: loaded CUDA backend from C:\Users\User\AppData\Local\Programs\Ollama\lib\ollama\cuda_v11\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\User\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-sandybridge.dll
llama_load_model_from_file: using device CUDA0 (Tesla M40 24GB) - 22081 MiB free
llama_model_loader: loaded meta data with 27 key-value pairs and 396 tensors from C:\Users\User\.ollama\models\blobs\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = mllama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Model
llama_model_loader: - kv   3:                         general.size_label str              = 10B
llama_model_loader: - kv   4:                         mllama.block_count u32              = 40
llama_model_loader: - kv   5:                      mllama.context_length u32              = 131072
llama_model_loader: - kv   6:                    mllama.embedding_length u32              = 4096
llama_model_loader: - kv   7:                 mllama.feed_forward_length u32              = 14336
llama_model_loader: - kv   8:                mllama.attention.head_count u32              = 32
llama_model_loader: - kv   9:             mllama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:                      mllama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  11:    mllama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                          general.file_type u32              = 15
llama_model_loader: - kv  13:                          mllama.vocab_size u32              = 128256
llama_model_loader: - kv  14:                mllama.rope.dimension_count u32              = 128
llama_model_loader: - kv  15:    mllama.attention.cross_attention_layers arr[i32,8]       = [3, 8, 13, 18, 23, 28, 33, 38]
llama_model_loader: - kv  16:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,128257]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,128257]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  23:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  24:            tokenizer.ggml.padding_token_id u32              = 128004
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  114 tensors
llama_model_loader: - type q4_K:  245 tensors
llama_model_loader: - type q6_K:   37 tensors
llm_load_vocab: special tokens cache size = 257
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = mllama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 40
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 11B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 9.78 B
llm_load_print_meta: model size       = 5.55 GiB (4.87 BPW) 
llm_load_print_meta: general.name     = Model
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token        = 128004 '<|finetune_right_pad_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab mismatch 128256 !- 128257 ...
llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 41/41 layers to GPU
llm_load_tensors:        CUDA0 model buffer size =  5397.50 MiB
llm_load_tensors:          CPU model buffer size =   281.83 MiB
llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 2048
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 512
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 500000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 40, can_shift = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   656.25 MiB
llama_new_context_with_model: KV self size  =  656.25 MiB, K (f16):  328.12 MiB, V (f16):  328.12 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.50 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   258.50 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    12.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
mllama_model_load: model name:   Llama-3.2-11B-Vision-Instruct
mllama_model_load: description:  vision encoder for Mllama
mllama_model_load: GGUF version: 3
mllama_model_load: alignment:    32
mllama_model_load: n_tensors:    512
mllama_model_load: n_kv:         17
mllama_model_load: ftype:        f16
mllama_model_load: 
mllama_model_load: mllama_model_load: using CUDA0 backend

mllama_model_load: compute allocated memory: 2853.34 MB
time=2025-02-23T22:21:42.170-08:00 level=INFO source=server.go:596 msg="llama runner started in 12.32 seconds"
[GIN] 2025/02/23 - 22:21:42 | 200 |   12.6969004s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/02/23 - 22:22:19 | 200 |       118.8µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/23 - 22:22:19 | 200 |     10.6673ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/02/23 - 22:22:21 | 200 |          86µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/23 - 22:22:21 | 200 |       629.9µs |       127.0.0.1 | GET      "/api/ps"
time=2025-02-23T22:23:38.035-08:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
[GIN] 2025/02/23 - 22:23:48 | 200 |          1m2s |      10.0.0.216 | POST     "/api/chat"

OS

Windows 10

GPU

Nvidia

CPU

Intel sandybridge

Ollama version

0.5.11

GiteaMirror added the bug label 2026-04-12 17:24:18 -05:00

@rick-github commented on GitHub (Feb 24, 2025):

> I also see the logs reporting 8 cores but the GPU has 3072 cores - no idea if relevant.

Those are CPU cores, not GPU cores.

{'total_duration': 57086256700, 'load_duration': 54109228500, 'prompt_eval_count': 383, 'prompt_eval_duration': 15000000, 'eval_count': 65, 'eval_duration': 2958000000}
{'total_duration': 91264439700, 'load_duration': 87957377600, 'prompt_eval_count': 469, 'prompt_eval_duration': 234000000, 'eval_count': 64, 'eval_duration': 3063000000}

There's no indication of wall-clock time; is it possible that the model was unloaded/reloaded between requests? See keep_alive.
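
For reference, a minimal sketch (not from the original comment) of passing keep_alive on the chat request so the model stays resident after it finishes; the endpoint is the standard /api/chat, and the model name and prompt here are illustrative:

```python
import requests

# Keep the model loaded for 20 minutes after this request completes.
# keep_alive accepts a duration string ("20m") or a number of seconds.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2-vision",
        "messages": [{"role": "user", "content": "describe this image"}],
        "stream": False,
        "keep_alive": "20m",
    },
)
print(resp.json().get("load_duration"))
```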


@ribbles commented on GitHub (Feb 24, 2025):

keep_alive is set to 20m (1200), and the requests are serial and immediate: when the first request completed, the next request was submitted.
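
A sketch (not part of the original reply) of how one could confirm between the two requests that the model is still resident, using the /api/ps endpoint that also appears in the server logs; the field names follow the documented response shape:

```python
import requests

# List currently loaded models: expires_at shows when keep_alive will evict
# the model, size_vram shows how much of it is resident on the GPU.
ps = requests.get("http://localhost:11434/api/ps").json()
for m in ps.get("models", []):
    print(m["name"], "expires_at:", m.get("expires_at"),
          "size_vram:", m.get("size_vram"))
```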


@rick-github commented on GitHub (Feb 24, 2025):

Can you post logs for two requests?


@ribbles commented on GitHub (Feb 24, 2025):

Requests:

2025-02-24 08:34:34.988 Chat request: {"model": "llama3.2-vision", "messages": [{"role": "user", "content": "truncated"}], "stream": false, "options": {"temperature": 1.0}, "keep_alive": 1200, "format": {"type": "object", "properties": {"message": {"type": "string"}, "tool_calls": {"type": "array", "items": {"type": "object", "properties": {"servo_id": {"type": "integer", "minimum": 1, "maximum": 6}, "position": {"type": "integer", "minimum": 500, "maximum": 2500}}, "required": ["servo_id", "position"]}}}}, "images": ["truncated"]}
2025-02-24 08:35:59.800 Chat response: {"model": "llama3.2-vision", "created_at": "2025-02-24T16:35:59.7807453Z", "message": {"role": "assistant", "content": "truncated"}, "done_reason": "stop", "done": true, "total_duration": 84673957800, "load_duration": 79065586400, "prompt_eval_count": 383, "prompt_eval_duration": 1521000000, "eval_count": 71, "eval_duration": 4069000000}       

2025-02-24 08:36:10.736 Chat request: {"model": "llama3.2-vision", "messages": [{"role": "user", "content": "truncated"}, {"role": "assistant", "content": "truncated"}, {"role": "user", "content": "truncated"}], "stream": false, "options": {"temperature": 1.0}, "keep_alive": 1200, "format": {"type": "object", "properties": {"message": {"type": "string"}, "tool_calls": {"type": "array", "items": {"type": "object", "properties": {"servo_id": {"type": "integer", "minimum": 1, "maximum": 6}, "position": {"type": "integer", "minimum": 500, "maximum": 2500}}, "required": ["servo_id", "position"]}}}}, "images": ["truncated"]}
2025-02-24 08:37:31.445 Chat response: {"model": "llama3.2-vision", "created_at": "2025-02-24T16:37:31.4638142Z", "message": {"role": "assistant", "content": "truncated"}, "done_reason": "stop", "done": true, "total_duration": 80591115900, "load_duration": 75593078600, "prompt_eval_count": 475, "prompt_eval_duration": 259000000, "eval_count": 67, "eval_duration": 3281000000}
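
Not part of the original comment: a quick check (values copied from the request/response logs above) comparing the wall-clock gap between the two requests against keep_alive, and the load_duration reported on the second, supposedly warm, request:

```python
from datetime import datetime

# Timestamps from the client-side log lines above.
first_response = datetime.fromisoformat("2025-02-24 08:35:59.800")
second_request = datetime.fromisoformat("2025-02-24 08:36:10.736")

gap_s = (second_request - first_response).total_seconds()
print(f"gap between requests: {gap_s:.1f}s (well under keep_alive of 1200s)")

# Second response metrics (nanoseconds): the model should still be loaded,
# yet load_duration accounts for nearly all of total_duration.
total_s = 80591115900 / 1e9
load_s = 75593078600 / 1e9
print(f"second request: total={total_s:.1f}s, load={load_s:.1f}s")
```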

Logs for both requests (after an Ollama restart):

2025/02/24 08:33:27 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\luser\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-24T08:33:27.983-08:00 level=INFO source=images.go:432 msg="total blobs: 19"
time=2025-02-24T08:33:27.990-08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-24T08:33:27.997-08:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11434 (version 0.5.11)"
time=2025-02-24T08:33:27.997-08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-24T08:33:27.997-08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2025-02-24T08:33:27.997-08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-02-24T08:33:27.998-08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=8 efficiency=0 threads=16
time=2025-02-24T08:33:28.370-08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-538d5233-15ee-c83a-d8e6-e7881e199a8b library=cuda compute=5.2 driver=12.7 name="Tesla M40 24GB" overhead="671.2 MiB"
time=2025-02-24T08:33:28.377-08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-538d5233-15ee-c83a-d8e6-e7881e199a8b library=cuda variant=v11 compute=5.2 driver=12.7 name="Tesla M40 24GB" total="22.5 GiB" available="21.6 GiB"
time=2025-02-24T08:35:41.082-08:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
time=2025-02-24T08:35:41.290-08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\luser\.ollama\models\blobs\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 gpu=GPU-538d5233-15ee-c83a-d8e6-e7881e199a8b parallel=1 available=23195418624 required="11.3 GiB"
time=2025-02-24T08:35:41.307-08:00 level=INFO source=server.go:100 msg="system memory" total="63.9 GiB" free="58.7 GiB" free_swap="67.9 GiB"
time=2025-02-24T08:35:41.322-08:00 level=INFO source=memory.go:356 msg="offload to cuda" projector.weights="1.8 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[21.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.3 GiB" memory.required.partial="11.3 GiB" memory.required.kv="656.2 MiB" memory.required.allocations="[11.3 GiB]" memory.weights.total="5.5 GiB" memory.weights.repeating="5.1 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="669.5 MiB"
time=2025-02-24T08:35:41.354-08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\\Users\\luser\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\luser\\.ollama\\models\\blobs\\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 --ctx-size 2048 --batch-size 512 --n-gpu-layers 41 --mmproj C:\\Users\\luser\\.ollama\\models\\blobs\\sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f --threads 16 --no-mmap --parallel 1 --port 59625"
time=2025-02-24T08:35:41.366-08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-24T08:35:41.366-08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-02-24T08:35:41.368-08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-02-24T08:35:41.453-08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-02-24T08:35:41.453-08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(clang)" threads=16
time=2025-02-24T08:35:41.464-08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:59625"
time=2025-02-24T08:35:41.621-08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Tesla M40 24GB, compute capability 5.2, VMM: yes
load_backend: loaded CUDA backend from C:\Users\luser\AppData\Local\Programs\Ollama\lib\ollama\cuda_v11\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\luser\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-sandybridge.dll
llama_load_model_from_file: using device CUDA0 (Tesla M40 24GB) - 22081 MiB free
llama_model_loader: loaded meta data with 27 key-value pairs and 396 tensors from C:\Users\luser\.ollama\models\blobs\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = mllama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Model
llama_model_loader: - kv   3:                         general.size_label str              = 10B
llama_model_loader: - kv   4:                         mllama.block_count u32              = 40
llama_model_loader: - kv   5:                      mllama.context_length u32              = 131072
llama_model_loader: - kv   6:                    mllama.embedding_length u32              = 4096
llama_model_loader: - kv   7:                 mllama.feed_forward_length u32              = 14336
llama_model_loader: - kv   8:                mllama.attention.head_count u32              = 32
llama_model_loader: - kv   9:             mllama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:                      mllama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  11:    mllama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                          general.file_type u32              = 15
llama_model_loader: - kv  13:                          mllama.vocab_size u32              = 128256
llama_model_loader: - kv  14:                mllama.rope.dimension_count u32              = 128
llama_model_loader: - kv  15:    mllama.attention.cross_attention_layers arr[i32,8]       = [3, 8, 13, 18, 23, 28, 33, 38]
llama_model_loader: - kv  16:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,128257]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,128257]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  23:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  24:            tokenizer.ggml.padding_token_id u32              = 128004
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  114 tensors
llama_model_loader: - type q4_K:  245 tensors
llama_model_loader: - type q6_K:   37 tensors
llm_load_vocab: special tokens cache size = 257
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = mllama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 40
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 11B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 9.78 B
llm_load_print_meta: model size       = 5.55 GiB (4.87 BPW) 
llm_load_print_meta: general.name     = Model
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token        = 128004 '<|finetune_right_pad_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab mismatch 128256 !- 128257 ...
llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 41/41 layers to GPU
llm_load_tensors:        CUDA0 model buffer size =  5397.50 MiB
llm_load_tensors:          CPU model buffer size =   281.83 MiB
llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 2048
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 512
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 500000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 40, can_shift = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   656.25 MiB
llama_new_context_with_model: KV self size  =  656.25 MiB, K (f16):  328.12 MiB, V (f16):  328.12 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.50 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   258.50 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    12.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
mllama_model_load: model name:   Llama-3.2-11B-Vision-Instruct
mllama_model_load: description:  vision encoder for Mllama
mllama_model_load: GGUF version: 3
mllama_model_load: alignment:    32
mllama_model_load: n_tensors:    512
mllama_model_load: n_kv:         17
mllama_model_load: ftype:        f16
mllama_model_load: 
mllama_model_load: mllama_model_load: using CUDA0 backend

mllama_model_load: compute allocated memory: 2853.34 MB
time=2025-02-24T08:35:54.171-08:00 level=INFO source=server.go:596 msg="llama runner started in 12.81 seconds"
[GIN] 2025/02/24 - 08:35:59 | 200 |         1m24s |      10.0.0.216 | POST     "/api/chat"
time=2025-02-24T08:37:26.463-08:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
llama_model_loader: loaded meta data with 27 key-value pairs and 396 tensors from C:\Users\luser\.ollama\models\blobs\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = mllama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Model
llama_model_loader: - kv   3:                         general.size_label str              = 10B
llama_model_loader: - kv   4:                         mllama.block_count u32              = 40
llama_model_loader: - kv   5:                      mllama.context_length u32              = 131072
llama_model_loader: - kv   6:                    mllama.embedding_length u32              = 4096
llama_model_loader: - kv   7:                 mllama.feed_forward_length u32              = 14336
llama_model_loader: - kv   8:                mllama.attention.head_count u32              = 32
llama_model_loader: - kv   9:             mllama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:                      mllama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  11:    mllama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                          general.file_type u32              = 15
llama_model_loader: - kv  13:                          mllama.vocab_size u32              = 128256
llama_model_loader: - kv  14:                mllama.rope.dimension_count u32              = 128
llama_model_loader: - kv  15:    mllama.attention.cross_attention_layers arr[i32,8]       = [3, 8, 13, 18, 23, 28, 33, 38]
llama_model_loader: - kv  16:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,128257]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,128257]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  23:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  24:            tokenizer.ggml.padding_token_id u32              = 128004
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  114 tensors
llama_model_loader: - type q4_K:  245 tensors
llama_model_loader: - type q6_K:   37 tensors
llm_load_vocab: special tokens cache size = 257
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = mllama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 1
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = all F32
llm_load_print_meta: model params     = 9.78 B
llm_load_print_meta: model size       = 5.55 GiB (4.87 BPW) 
llm_load_print_meta: general.name     = Model
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token        = 128004 '<|finetune_right_pad_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab mismatch 128256 !- 128257 ...
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/24 - 08:37:31 | 200 |         1m20s |      10.0.0.216 | POST     "/api/chat"
llm_load_tensors: offloading 40 repeating layers to GPU llm_load_tensors: offloading output layer to GPU llm_load_tensors: offloaded 41/41 layers to GPU llm_load_tensors: CUDA0 model buffer size = 5397.50 MiB llm_load_tensors: CPU model buffer size = 281.83 MiB llama_new_context_with_model: n_seq_max = 1 llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: n_ctx_per_seq = 2048 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 500000.0 llama_new_context_with_model: freq_scale = 1 llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 40, can_shift = 1 llama_kv_cache_init: CUDA0 KV buffer size = 656.25 MiB llama_new_context_with_model: KV self size = 656.25 MiB, K (f16): 328.12 MiB, V (f16): 328.12 MiB llama_new_context_with_model: CUDA_Host output buffer size = 0.50 MiB llama_new_context_with_model: CUDA0 compute buffer size = 258.50 MiB llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB llama_new_context_with_model: graph nodes = 1030 llama_new_context_with_model: graph splits = 2 mllama_model_load: model name: Llama-3.2-11B-Vision-Instruct mllama_model_load: description: vision encoder for Mllama mllama_model_load: GGUF version: 3 mllama_model_load: alignment: 32 mllama_model_load: n_tensors: 512 mllama_model_load: n_kv: 17 mllama_model_load: ftype: f16 mllama_model_load: mllama_model_load: mllama_model_load: using CUDA0 backend mllama_model_load: compute allocated memory: 2853.34 MB time=2025-02-24T08:35:54.171-08:00 level=INFO source=server.go:596 msg="llama runner started in 12.81 seconds" [GIN] 2025/02/24 - 08:35:59 | 200 | 1m24s | 10.0.0.216 | POST "/api/chat" time=2025-02-24T08:37:26.463-08:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet" llama_model_loader: loaded meta data with 27 key-value pairs and 396 tensors from C:\Users\luser\.ollama\models\blobs\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
llama_model_loader: - kv 0: general.architecture str = mllama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Model llama_model_loader: - kv 3: general.size_label str = 10B llama_model_loader: - kv 4: mllama.block_count u32 = 40 llama_model_loader: - kv 5: mllama.context_length u32 = 131072 llama_model_loader: - kv 6: mllama.embedding_length u32 = 4096 llama_model_loader: - kv 7: mllama.feed_forward_length u32 = 14336 llama_model_loader: - kv 8: mllama.attention.head_count u32 = 32 llama_model_loader: - kv 9: mllama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 10: mllama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 11: mllama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 12: general.file_type u32 = 15 llama_model_loader: - kv 13: mllama.vocab_size u32 = 128256 llama_model_loader: - kv 14: mllama.rope.dimension_count u32 = 128 llama_model_loader: - kv 15: mllama.attention.cross_attention_layers arr[i32,8] = [3, 8, 13, 18, 23, 28, 33, 38] llama_model_loader: - kv 16: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 17: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 18: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,128257] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,128257] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 128009 llama_model_loader: - kv 24: tokenizer.ggml.padding_token_id u32 = 128004 llama_model_loader: - kv 25: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ... llama_model_loader: - kv 26: general.quantization_version u32 = 2 llama_model_loader: - type f32: 114 tensors llama_model_loader: - type q4_K: 245 tensors llama_model_loader: - type q6_K: 37 tensors llm_load_vocab: special tokens cache size = 257 llm_load_vocab: token to piece cache size = 0.7999 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = mllama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 128256 llm_load_print_meta: n_merges = 280147 llm_load_print_meta: vocab_only = 1 llm_load_print_meta: model type = ?B llm_load_print_meta: model ftype = all F32 llm_load_print_meta: model params = 9.78 B llm_load_print_meta: model size = 5.55 GiB (4.87 BPW) llm_load_print_meta: general.name = Model llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' llm_load_print_meta: EOS token = 128009 '<|eot_id|>' llm_load_print_meta: EOT token = 128009 '<|eot_id|>' llm_load_print_meta: EOM token = 128008 '<|eom_id|>' llm_load_print_meta: PAD token = 128004 '<|finetune_right_pad_id|>' llm_load_print_meta: LF token = 128 'Ä' llm_load_print_meta: EOG token = 128008 '<|eom_id|>' llm_load_print_meta: EOG token = 128009 '<|eot_id|>' llm_load_print_meta: max token length = 256 llama_model_load: vocab mismatch 128256 !- 128257 ... llama_model_load: vocab only - skipping tensors [GIN] 2025/02/24 - 08:37:31 | 200 | 1m20s | 10.0.0.216 | POST "/api/chat" ```
@rick-github commented on GitHub (Feb 24, 2025):

I cannot replicate a long load_duration. What's the result of

```
curl http://localhost:11434/api/chat -d "{\"model\":\"llama3.2-vision\",\"messages\":[{\"role\":\"user\",\"content\":\"hello\"}],\"stream\":false}"
```
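A quick way to extend this check is to send the same request twice in a row and compare the timings each call reports, since the complaint is specifically about `load_duration` staying high after the model is resident. A minimal sketch, assuming the Python `requests` package and an Ollama server on the default port (nothing here is from the issue itself):

```python
# Sketch (not from the issue): send the same /api/chat request twice and
# compare the load_duration reported for each call. Durations are nanoseconds.
import requests

URL = "http://localhost:11434/api/chat"  # assumes the default Ollama port
payload = {
    "model": "llama3.2-vision",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": False,
}

for attempt in (1, 2):
    resp = requests.post(URL, json=payload, timeout=300).json()
    print(
        f"attempt {attempt}: "
        f"load={resp['load_duration'] / 1e9:.1f}s "
        f"total={resp['total_duration'] / 1e9:.1f}s"
    )
```

If the model is truly staying loaded, the second attempt would normally report a `load_duration` close to zero.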
@ribbles commented on GitHub (Feb 24, 2025):

```
{
    "model": "llama3.2-vision",
    "created_at": "2025-02-24T18:05:44.3856736Z",
    "message": {
        "role": "assistant",
        "content": "Hello! How are you today? Is there something I can help you with or would you like to chat?"
    },
    "done_reason": "stop",
    "done": true,
    "total_duration": 16876372500,
    "load_duration": 13873535400,
    "prompt_eval_count": 11,
    "prompt_eval_duration": 1807000000,
    "eval_count": 23,
    "eval_duration": 1180000000
}
```
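For readers scanning these numbers: the duration fields in the Ollama API are reported in nanoseconds, so even this text-only "hello" request attributes roughly 13.9 s to `load_duration` out of a 16.9 s total. A small sketch of the conversion, using the values copied from the response above:

```python
# Convert the nanosecond duration fields from the response above into seconds
# and a rough decode speed.
resp = {
    "total_duration": 16876372500,
    "load_duration": 13873535400,
    "eval_count": 23,
    "eval_duration": 1180000000,
}

total_s = resp["total_duration"] / 1e9   # ~16.9 s end to end
load_s = resp["load_duration"] / 1e9     # ~13.9 s attributed to "load"
decode_tps = resp["eval_count"] / (resp["eval_duration"] / 1e9)  # ~19.5 tokens/s

print(f"total={total_s:.1f}s load={load_s:.1f}s decode={decode_tps:.1f} tok/s")
```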
@ribbles commented on GitHub (Feb 25, 2025):

It's the dimensions of the image (3200x2400) that matter, even though the file was only 200KB. I reduced it to 1280x1024 (still 200KB) and responses now come back in under 6 seconds (see the resizing sketch below). It makes sense that the compressed file size doesn't matter, only the pixel count and color depth.

I'm still uncertain why it shows up as "load_duration" in the telemetry.
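A practical workaround following this observation is to downscale large images before sending them. A minimal sketch, assuming Pillow is installed; the 1280 px bound and the helper name `encode_downscaled` are illustrative choices, not anything from the issue:

```python
# Illustrative only: shrink an image so its longest side is <= 1280 px before
# base64-encoding it for the "images" field of the request.
import base64
import io

from PIL import Image

def encode_downscaled(path: str, max_side: int = 1280) -> str:
    img = Image.open(path).convert("RGB")  # drop alpha so JPEG save works
    img.thumbnail((max_side, max_side))    # preserves aspect ratio, only shrinks
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=90)
    return base64.b64encode(buf.getvalue()).decode("ascii")

# The resulting string can then be supplied in the request's "images" list,
# as in the request logs quoted earlier in this thread.
```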

Reference: github-starred/ollama#6076