[GH-ISSUE #10011] OLLAMA_NUM_PARALLEL not working #6563

Closed
opened 2026-04-12 18:11:28 -05:00 by GiteaMirror · 4 comments

Originally created by @forReason on GitHub (Mar 27, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10011

What is the issue?

`OLLAMA_NUM_PARALLEL` is not working. It just loads one instance.

![Image](https://github.com/user-attachments/assets/6dc26074-bda6-4853-ae66-1d7eaf61b079)

![Image](https://github.com/user-attachments/assets/1b11c5f8-4708-4e9e-911f-e2615a6ec7e9)
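
A minimal way to check for parallelism (a sketch, assuming the default port 11434 and the custom model name from the logs below): fire several /api/generate requests at once and time them. If `OLLAMA_NUM_PARALLEL` is taking effect, the latencies should overlap instead of stacking up.

```go
// Sketch (assumes the default port and the reporter's custom model):
// fire a few concurrent /api/generate requests and print per-request
// latency. With parallel=1 the requests serialize, so latencies grow
// roughly linearly; with working parallelism they should overlap.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"sync"
	"time"
)

func main() {
	body := []byte(`{"model":"llama_vision_fp6_custom","prompt":"Say hi","stream":false}`)
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			start := time.Now()
			resp, err := http.Post("http://localhost:11434/api/generate",
				"application/json", bytes.NewReader(body))
			if err != nil {
				fmt.Printf("request %d failed: %v\n", n, err)
				return
			}
			io.Copy(io.Discard, resp.Body) // drain so timing covers the full response
			resp.Body.Close()
			fmt.Printf("request %d done in %v\n", n, time.Since(start))
		}(i)
	}
	wg.Wait()
}
```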

Relevant log output

2025/03/27 08:03:56 routes.go:1205: INFO server config env="map[CUDA_VISIBLE_DEVICES:0,1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\devops01\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-03-27T08:03:56.510+01:00 level=INFO source=images.go:432 msg="total blobs: 12"
time=2025-03-27T08:03:56.511+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-27T08:03:56.512+01:00 level=INFO source=routes.go:1256 msg="Listening on [::]:11434 (version 0.5.12)"
time=2025-03-27T08:03:56.512+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-27T08:03:56.512+01:00 level=INFO source=gpu_windows.go:167 msg=packages count=4
time=2025-03-27T08:03:56.512+01:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=2 efficiency=0 threads=2
time=2025-03-27T08:03:56.512+01:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=2 efficiency=0 threads=2
time=2025-03-27T08:03:56.512+01:00 level=INFO source=gpu_windows.go:214 msg="" package=2 cores=2 efficiency=0 threads=2
time=2025-03-27T08:03:56.512+01:00 level=INFO source=gpu_windows.go:214 msg="" package=3 cores=2 efficiency=0 threads=2
time=2025-03-27T08:03:56.910+01:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-cae06f2c-27fb-11b2-9c39-98c46e11c22e library=cuda compute=8.6 driver=11.4 name="NVIDIA A40-48Q" overhead="907.2 MiB"
time=2025-03-27T08:03:57.222+01:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-cdb4fbea-27fb-11b2-847d-42035a3548d7 library=cuda compute=8.6 driver=11.4 name="NVIDIA A40-48Q" overhead="932.5 MiB"
time=2025-03-27T08:03:57.226+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-cae06f2c-27fb-11b2-9c39-98c46e11c22e library=cuda variant=v11 compute=8.6 driver=11.4 name="NVIDIA A40-48Q" total="48.0 GiB" available="42.9 GiB"
time=2025-03-27T08:03:57.226+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-cdb4fbea-27fb-11b2-847d-42035a3548d7 library=cuda variant=v11 compute=8.6 driver=11.4 name="NVIDIA A40-48Q" total="48.0 GiB" available="42.9 GiB"
[GIN] 2025/03/27 - 08:03:57 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-03-27T08:03:57.229+01:00 level=WARN source=routes.go:901 msg="bad manifest filepath" name=registry.ollama.ai/library/llama_vision_fp6_custom:latest error="open C:\\Users\\devops01\\.ollama\\models\\blobs\\sha256-069235addef1211ea9efd620985e88763b48096fffe53803d9e61a39af9fc866: The system cannot find the file specified."
[GIN] 2025/03/27 - 08:03:57 | 200 |      2.2242ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/03/27 - 08:04:01 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 08:05:22 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 08:05:22 | 200 |     33.1633ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/03/27 - 08:05:22 | 200 |      22.717ms |       127.0.0.1 | DELETE   "/api/delete"
[GIN] 2025/03/27 - 10:24:46 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 10:24:46 | 200 |     19.0364ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/03/27 - 10:24:57 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 10:24:57 | 404 |       2.786ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-27T10:25:09.966+01:00 level=INFO source=images.go:669 msg="request failed: Get \"https://registry.ollama.ai/v2/library/llama_vision_fp6_custom/manifests/latest\": dial tcp: lookup registry.ollama.ai: no such host"
[GIN] 2025/03/27 - 10:25:09 | 200 |   12.0138758s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/03/27 - 10:26:41 | 200 |        65.3µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 10:26:41 | 404 |      4.4727ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-27T10:26:53.952+01:00 level=INFO source=images.go:669 msg="request failed: Get \"https://registry.ollama.ai/v2/library/llama_vision_fp6_custom/manifests/latest\": dial tcp: lookup registry.ollama.ai: no such host"
[GIN] 2025/03/27 - 10:26:53 | 200 |   12.0130116s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/03/27 - 10:32:12 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 10:32:12 | 200 |     74.9073ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-27T10:32:12.953+01:00 level=WARN source=sched.go:138 msg="mllama doesn't support parallel requests yet"
time=2025-03-27T10:32:13.040+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.key_length default=128
time=2025-03-27T10:32:13.040+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.value_length default=128
time=2025-03-27T10:32:13.043+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.key_length default=128
time=2025-03-27T10:32:13.043+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.value_length default=128
time=2025-03-27T10:32:13.051+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.key_length default=128
time=2025-03-27T10:32:13.051+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.value_length default=128
time=2025-03-27T10:32:13.053+01:00 level=INFO source=sched.go:731 msg="new model will fit in available VRAM, loading" model=C:\Users\devops01\.ollama\models\blobs\sha256-808f35a8f5569dc354fd7531d50b09889ef84d11cc570e83c0467268f9faf135 library=cuda parallel=1 required="76.8 GiB"
time=2025-03-27T10:32:13.084+01:00 level=INFO source=server.go:97 msg="system memory" total="256.0 GiB" free="250.0 GiB" free_swap="284.7 GiB"
time=2025-03-27T10:32:13.087+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.key_length default=128
time=2025-03-27T10:32:13.087+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.value_length default=128
time=2025-03-27T10:32:13.087+01:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=101 layers.offload=101 layers.split=51,50 memory.available="[42.9 GiB 42.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="76.8 GiB" memory.required.partial="76.8 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[41.1 GiB 35.7 GiB]" memory.weights.total="67.0 GiB" memory.weights.repeating="66.2 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB" projector.weights="1.9 GiB" projector.graph="2.8 GiB"
time=2025-03-27T10:32:13.095+01:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\\Users\\devops01\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\devops01\\.ollama\\models\\blobs\\sha256-808f35a8f5569dc354fd7531d50b09889ef84d11cc570e83c0467268f9faf135 --ctx-size 2048 --batch-size 512 --n-gpu-layers 101 --mmproj C:\\Users\\devops01\\.ollama\\models\\blobs\\sha256-6b6c374d159e097509b33e9fda648c178c903959fc0c7dbfae487cc8d958093e --threads 8 --no-mmap --parallel 1 --tensor-split 51,50 --port 62648"
time=2025-03-27T10:32:13.102+01:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-27T10:32:13.102+01:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-03-27T10:32:13.103+01:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-27T10:32:13.139+01:00 level=INFO source=runner.go:932 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA A40-48Q, compute capability 8.6, VMM: no
  Device 1: NVIDIA A40-48Q, compute capability 8.6, VMM: no
load_backend: loaded CUDA backend from C:\Users\devops01\AppData\Local\Programs\Ollama\lib\ollama\cuda_v11\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\devops01\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-03-27T10:32:14.460+01:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(clang)" threads=8
time=2025-03-27T10:32:14.462+01:00 level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:62648"
time=2025-03-27T10:32:14.611+01:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA0 (NVIDIA A40-48Q) - 43830 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA A40-48Q) - 43830 MiB free
llama_model_loader: loaded meta data with 27 key-value pairs and 984 tensors from C:\Users\devops01\.ollama\models\blobs\sha256-808f35a8f5569dc354fd7531d50b09889ef84d11cc570e83c0467268f9faf135 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = mllama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Model
llama_model_loader: - kv   3:                         general.size_label str              = 88B
llama_model_loader: - kv   4:                         mllama.block_count u32              = 100
llama_model_loader: - kv   5:                      mllama.context_length u32              = 131072
llama_model_loader: - kv   6:                    mllama.embedding_length u32              = 8192
llama_model_loader: - kv   7:                 mllama.feed_forward_length u32              = 28672
llama_model_loader: - kv   8:                mllama.attention.head_count u32              = 64
llama_model_loader: - kv   9:             mllama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:                      mllama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  11:    mllama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                          general.file_type u32              = 18
llama_model_loader: - kv  13:                          mllama.vocab_size u32              = 128256
llama_model_loader: - kv  14:                mllama.rope.dimension_count u32              = 128
llama_model_loader: - kv  15:    mllama.attention.cross_attention_layers arr[i32,20]      = [3, 8, 13, 18, 23, 28, 33, 38, 43, 48...
llama_model_loader: - kv  16:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,128257]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,128257]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  23:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  24:            tokenizer.ggml.padding_token_id u32              = 128004
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  282 tensors
llama_model_loader: - type q6_K:  702 tensors
llm_load_vocab: special tokens cache size = 257
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = mllama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_layer          = 100
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 28672
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q6_K
llm_load_print_meta: model params     = 87.67 B
llm_load_print_meta: model size       = 66.98 GiB (6.56 BPW) 
llm_load_print_meta: general.name     = Model
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token        = 128004 '<|finetune_right_pad_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab mismatch 128256 !- 128257 ...
llm_load_tensors: offloading 100 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 101/101 layers to GPU
llm_load_tensors:          CPU model buffer size =   822.00 MiB
llm_load_tensors:        CUDA0 model buffer size = 34141.32 MiB
llm_load_tensors:        CUDA1 model buffer size = 33624.43 MiB
llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 2048
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 512
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 500000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 100, can_shift = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   828.31 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =   812.31 MiB
llama_new_context_with_model: KV self size  = 1640.62 MiB, K (f16):  820.31 MiB, V (f16):  820.31 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.52 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model:      CUDA0 compute buffer size =   400.01 MiB
llama_new_context_with_model:      CUDA1 compute buffer size =   400.02 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    32.02 MiB
llama_new_context_with_model: graph nodes  = 2566
llama_new_context_with_model: graph splits = 3
mllama_model_load: model name:   Llama-3.2-90B-Vision-Instruct
mllama_model_load: description:  vision encoder for Mllama
mllama_model_load: GGUF version: 3
mllama_model_load: alignment:    32
mllama_model_load: n_tensors:    512
mllama_model_load: n_kv:         17
mllama_model_load: ftype:        f16
mllama_model_load: 
mllama_model_load: mllama_model_load: using CUDA0 backend

mllama_model_load: compute allocated memory: 2853.34 MB
time=2025-03-27T10:32:49.460+01:00 level=INFO source=server.go:596 msg="llama runner started in 36.36 seconds"
[GIN] 2025/03/27 - 10:32:49 | 200 |   36.5347842s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/03/27 - 10:33:07 | 200 |        85.6µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 10:33:07 | 200 |       635.5µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/03/27 - 10:33:34 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 10:33:34 | 200 |      33.761ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-27T10:33:34.857+01:00 level=WARN source=sched.go:138 msg="mllama doesn't support parallel requests yet"
[GIN] 2025/03/27 - 10:33:34 | 200 |     37.4451ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/03/27 - 10:34:00 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 10:34:00 | 200 |     32.5477ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-27T10:34:00.626+01:00 level=WARN source=sched.go:138 msg="mllama doesn't support parallel requests yet"
[GIN] 2025/03/27 - 10:34:00 | 200 |     30.9911ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/03/27 - 10:34:16 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 10:34:16 | 200 |     33.7433ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-27T10:34:16.250+01:00 level=WARN source=sched.go:138 msg="mllama doesn't support parallel requests yet"
[GIN] 2025/03/27 - 10:34:16 | 200 |     33.6178ms |       127.0.0.1 | POST     "/api/generate"
time=2025-03-27T10:34:38.089+01:00 level=WARN source=sched.go:138 msg="mllama doesn't support parallel requests yet"
time=2025-03-27T10:34:39.562+01:00 level=WARN source=sched.go:138 msg="mllama doesn't support parallel requests yet"
time=2025-03-27T10:34:40.185+01:00 level=WARN source=sched.go:138 msg="mllama doesn't support parallel requests yet"
time=2025-03-27T10:34:40.590+01:00 level=WARN source=sched.go:138 msg="mllama doesn't support parallel requests yet"
[GIN] 2025/03/27 - 10:35:07 | 200 |    29.877435s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/03/27 - 10:35:48 | 200 |          1m9s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/03/27 - 10:38:28 | 200 |         3m47s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/03/27 - 10:39:05 | 200 |         4m25s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/03/27 - 10:39:18 | 200 |        49.6µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 10:39:18 | 200 |      4.2162ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/03/27 - 10:39:23 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 10:39:23 | 200 |     30.8407ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-27T10:39:23.738+01:00 level=WARN source=sched.go:138 msg="mllama doesn't support parallel requests yet"
time=2025-03-27T10:39:23.827+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.key_length default=128
time=2025-03-27T10:39:23.827+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.value_length default=128
time=2025-03-27T10:39:23.830+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.key_length default=128
time=2025-03-27T10:39:23.830+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.value_length default=128
time=2025-03-27T10:39:23.836+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.key_length default=128
time=2025-03-27T10:39:23.836+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.value_length default=128
time=2025-03-27T10:39:23.837+01:00 level=INFO source=sched.go:731 msg="new model will fit in available VRAM, loading" model=C:\Users\devops01\.ollama\models\blobs\sha256-808f35a8f5569dc354fd7531d50b09889ef84d11cc570e83c0467268f9faf135 library=cuda parallel=1 required="76.8 GiB"
time=2025-03-27T10:39:23.866+01:00 level=INFO source=server.go:97 msg="system memory" total="256.0 GiB" free="249.8 GiB" free_swap="284.6 GiB"
time=2025-03-27T10:39:23.869+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.key_length default=128
time=2025-03-27T10:39:23.869+01:00 level=WARN source=ggml.go:132 msg="key not found" key=mllama.attention.value_length default=128
time=2025-03-27T10:39:23.870+01:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=101 layers.offload=101 layers.split=51,50 memory.available="[42.9 GiB 42.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="76.8 GiB" memory.required.partial="76.8 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[41.1 GiB 35.7 GiB]" memory.weights.total="67.0 GiB" memory.weights.repeating="66.2 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB" projector.weights="1.9 GiB" projector.graph="2.8 GiB"
time=2025-03-27T10:39:23.876+01:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\\Users\\devops01\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\devops01\\.ollama\\models\\blobs\\sha256-808f35a8f5569dc354fd7531d50b09889ef84d11cc570e83c0467268f9faf135 --ctx-size 2048 --batch-size 512 --n-gpu-layers 101 --mmproj C:\\Users\\devops01\\.ollama\\models\\blobs\\sha256-6b6c374d159e097509b33e9fda648c178c903959fc0c7dbfae487cc8d958093e --threads 8 --no-mmap --parallel 1 --tensor-split 51,50 --port 62665"
time=2025-03-27T10:39:23.883+01:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-27T10:39:23.883+01:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-03-27T10:39:23.884+01:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-27T10:39:23.921+01:00 level=INFO source=runner.go:932 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA A40-48Q, compute capability 8.6, VMM: no
  Device 1: NVIDIA A40-48Q, compute capability 8.6, VMM: no
load_backend: loaded CUDA backend from C:\Users\devops01\AppData\Local\Programs\Ollama\lib\ollama\cuda_v11\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\devops01\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-03-27T10:39:24.073+01:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(clang)" threads=8
time=2025-03-27T10:39:24.073+01:00 level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:62665"
time=2025-03-27T10:39:24.136+01:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA0 (NVIDIA A40-48Q) - 43830 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA A40-48Q) - 43830 MiB free
llama_model_loader: loaded meta data with 27 key-value pairs and 984 tensors from C:\Users\devops01\.ollama\models\blobs\sha256-808f35a8f5569dc354fd7531d50b09889ef84d11cc570e83c0467268f9faf135 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = mllama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Model
llama_model_loader: - kv   3:                         general.size_label str              = 88B
llama_model_loader: - kv   4:                         mllama.block_count u32              = 100
llama_model_loader: - kv   5:                      mllama.context_length u32              = 131072
llama_model_loader: - kv   6:                    mllama.embedding_length u32              = 8192
llama_model_loader: - kv   7:                 mllama.feed_forward_length u32              = 28672
llama_model_loader: - kv   8:                mllama.attention.head_count u32              = 64
llama_model_loader: - kv   9:             mllama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:                      mllama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  11:    mllama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                          general.file_type u32              = 18
llama_model_loader: - kv  13:                          mllama.vocab_size u32              = 128256
llama_model_loader: - kv  14:                mllama.rope.dimension_count u32              = 128
llama_model_loader: - kv  15:    mllama.attention.cross_attention_layers arr[i32,20]      = [3, 8, 13, 18, 23, 28, 33, 38, 43, 48...
llama_model_loader: - kv  16:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,128257]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,128257]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  23:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  24:            tokenizer.ggml.padding_token_id u32              = 128004
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  282 tensors
llama_model_loader: - type q6_K:  702 tensors
llm_load_vocab: special tokens cache size = 257
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = mllama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_layer          = 100
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 28672
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q6_K
llm_load_print_meta: model params     = 87.67 B
llm_load_print_meta: model size       = 66.98 GiB (6.56 BPW) 
llm_load_print_meta: general.name     = Model
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token        = 128004 '<|finetune_right_pad_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab mismatch 128256 !- 128257 ...
llm_load_tensors: offloading 100 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 101/101 layers to GPU
llm_load_tensors:          CPU model buffer size =   822.00 MiB
llm_load_tensors:        CUDA0 model buffer size = 34141.32 MiB
llm_load_tensors:        CUDA1 model buffer size = 33624.43 MiB
[GIN] 2025/03/27 - 10:39:29 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/27 - 10:39:29 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.5.12

GiteaMirror added the bug label 2026-04-12 18:11:28 -05:00

@rick-github commented on GitHub (Mar 27, 2025):

`OLLAMA_NUM_PARALLEL` creates multiple instances of the KV cache (the working area for inference) but only one copy of the model weights.
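
As a rough worked example from the offload line in the log: the weights are ~67.0 GiB and are loaded once, while the KV cache is ~1.6 GiB at `--ctx-size 2048` with `parallel=1`. With `OLLAMA_NUM_PARALLEL=4` the KV cache would grow to roughly 4 × 1.6 ≈ 6.6 GiB, so the total climbs from ~76.8 GiB to somewhere around 82 GiB, not to 4 × 76.8 GiB.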

However, in your log:

`OLLAMA_NUM_PARALLEL:0`

it is unset. Have you restarted the server?


@forReason commented on GitHub (Mar 27, 2025):

Yes. But it seems parallel requests are not supported for vision models, is that correct?


@rick-github commented on GitHub (Mar 27, 2025):

https://github.com/ollama/ollama/issues/9564#issuecomment-2707382182


@sieveLau commented on GitHub (Mar 27, 2025):

System environment variables need a reboot to take effect, though I'm not sure whether logging out and back in is enough. If you set the variable for your user account instead, a new shell will pick it up.
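
A quick way to check what a freshly launched process actually inherits (a new process gets the environment of the shell that started it, so this sketch, run from a new terminal, shows whether the variable would reach an ollama server launched the same way):

```go
// Sketch: print whether OLLAMA_NUM_PARALLEL is visible to a process
// started from the current shell. set=false means a server launched
// from this shell would not see it either.
package main

import (
	"fmt"
	"os"
)

func main() {
	v, ok := os.LookupEnv("OLLAMA_NUM_PARALLEL")
	fmt.Printf("OLLAMA_NUM_PARALLEL set=%v value=%q\n", ok, v)
}
```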
