[GH-ISSUE #5756] Ollama seems to be limited by single CPU thread on multi GPU machine with parallel processing enable #65622

Closed
opened 2026-05-03 21:56:00 -05:00 by GiteaMirror · 3 comments

Originally created by @traindi on GitHub (Jul 17, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5756

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I have 2 GPUs and have set the OLLAMA_NUM_PARALLEL environment variable. When multiple requests come in, I can see the model being loaded into the memory of both GPUs, but GPU utilization hovers around 40% on each. Looking at CPU usage, only one thread is being used, and it hits 100%.

I suspect inference is being limited by a single CPU thread. How can the 2 concurrent requests be served by 2 separate threads? I am running Ollama in Docker (if that matters).
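
For reference, a minimal sketch of passing the variable through to the container at startup, assuming the official ollama/ollama image and the NVIDIA container toolkit (the values here are illustrative, not taken from my setup):

# Illustrative only: expose the API port, persist models, and set the parallelism.
docker run -d --gpus=all \
  -e OLLAMA_NUM_PARALLEL=2 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama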

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.2.5

GiteaMirror added the question label 2026-05-03 21:56:00 -05:00

@slyt commented on GitHub (Jul 19, 2024):

Using Ollama v0.2.7 on an Nvidia MIG 20GB GPU, I've noticed that manually setting OLLAMA_NUM_PARALLEL to high values causes layers to be offloaded to the CPU instead of the GPU. Leaving OLLAMA_NUM_PARALLEL unset (or setting it to 0) actually allows concurrent calls to be made.

OLLAMA_NUM_PARALLEL=64 (unexpected behavior)

Setting OLLAMA_NUM_PARALLEL=64 (I have 64 CPU cores accessible) causes the model not to load fully onto the GPU:

OLLAMA_NUM_PARALLEL=64 ollama serve

Then, after running ollama run llama3, the output of ollama ps is:

ollama ps
NAME            ID              SIZE    PROCESSOR       UNTIL   
llama3:latest   365c0bd3c000    32 GB   36%/64% CPU/GPU Forever

The logs show that not all layers are being loaded to the GPU, even though there should be sufficient room: llm_load_tensors: offloaded 15/33 layers to GPU
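
One quick way to spot this condition is to search the server output for that summary line (assuming the log has been captured to a file; serve_output.log here is a hypothetical name):

# Prints the offload summary; fewer offloaded than total layers means CPU spill.
grep -E 'offloaded [0-9]+/[0-9]+ layers to GPU' serve_output.log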

Full logs:

OLLAMA_NUM_PARALLEL=64 ollama serve
2024/07/19 18:21:38 routes.go:1096: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/home/jovyan/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:64 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-19T18:21:38.990Z level=INFO source=images.go:778 msg="total blobs: 19"
time=2024-07-19T18:21:38.996Z level=INFO source=images.go:785 msg="total unused blobs removed: 0"
time=2024-07-19T18:21:38.998Z level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.2.7)"
time=2024-07-19T18:21:38.999Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama856793591/runners
time=2024-07-19T18:21:42.069Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
time=2024-07-19T18:21:42.069Z level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-19T18:21:42.385Z level=INFO source=types.go:105 msg="inference compute" id=GPU-e97eebac-1c40-8e02-9f2e-83b4b7117af9 library=cuda compute=8.0 driver=12.2 name="NVIDIA A100-SXM4-80GB MIG 2g.20gb" total="19.5 GiB" available="19.4 GiB"
[GIN] 2024/07/19 - 18:21:48 | 200 |      96.058µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/19 - 18:21:49 | 200 |  390.318361ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-19T18:21:49.433Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=15 layers.split="" memory.available="[19.4 GiB]" memory.required.full="29.9 GiB" memory.required.partial="19.0 GiB" memory.required.kv="16.0 GiB" memory.required.allocations="[19.0 GiB]" memory.weights.total="19.7 GiB" memory.weights.repeating="19.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="8.3 GiB" memory.graph.partial="8.8 GiB"
time=2024-07-19T18:21:49.434Z level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama856793591/runners/cuda_v11/ollama_llama_server --model /home/jovyan/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 131072 --batch-size 512 --embedding --log-disable --n-gpu-layers 15 --parallel 64 --port 37799"
time=2024-07-19T18:21:49.435Z level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-19T18:21:49.435Z level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-19T18:21:49.435Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="139664873738240" timestamp=1721413309
INFO [main] system info | n_threads=128 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="139664873738240" timestamp=1721413309 total_threads=128
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="127" port="37799" tid="139664873738240" timestamp=1721413309
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /home/jovyan/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-07-19T18:21:49.687Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW) 
llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA A100-SXM4-80GB MIG 2g.20gb, compute capability 8.0, VMM: yes
llm_load_tensors: ggml ctx size =    0.27 MiB
llm_load_tensors: offloading 15 repeating layers to GPU
llm_load_tensors: offloaded 15/33 layers to GPU
llm_load_tensors:        CPU buffer size =  4437.80 MiB
llm_load_tensors:      CUDA0 buffer size =  1755.47 MiB
llama_new_context_with_model: n_ctx      = 131072
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
time=2024-07-19T18:21:56.162Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server not responding"
time=2024-07-19T18:21:57.768Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
llama_kv_cache_init:  CUDA_Host KV buffer size =  8704.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =  7680.00 MiB
llama_new_context_with_model: KV self size  = 16384.00 MiB, K (f16): 8192.00 MiB, V (f16): 8192.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =    32.31 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  8985.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =   264.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 191
time=2024-07-19T18:21:59.332Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server not responding"
time=2024-07-19T18:21:59.633Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
INFO [main] model loaded | tid="139664873738240" timestamp=1721413328
time=2024-07-19T18:22:08.795Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server not responding"
time=2024-07-19T18:22:09.502Z level=INFO source=server.go:617 msg="llama runner started in 20.07 seconds"
[GIN] 2024/07/19 - 18:22:09 | 200 | 20.407256829s |       127.0.0.1 | POST     "/api/chat"

OLLAMA_NUM_PARALLEL=0 (working)

However, if I just leave the environment variable unset (or set it explicitly to the default, OLLAMA_NUM_PARALLEL=0), things work fine; all model layers load to the GPU. I can even call the same model (or multiple models) concurrently.

OLLAMA_NUM_PARALLEL=0 ollama serve
ollama run llama3

ollama ps
NAME            ID              SIZE    PROCESSOR       UNTIL   
llama3:latest   365c0bd3c000    6.7 GB  100% GPU        Forever

Log of interest: llm_load_tensors: offloaded 33/33 layers to GPU

Full logs:

OLLAMA_NUM_PARALLEL=0 ollama serve
2024/07/19 18:24:40 routes.go:1096: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/home/jovyan/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-19T18:24:40.448Z level=INFO source=images.go:778 msg="total blobs: 19"
time=2024-07-19T18:24:40.454Z level=INFO source=images.go:785 msg="total unused blobs removed: 0"
time=2024-07-19T18:24:40.456Z level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.2.7)"
time=2024-07-19T18:24:40.457Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama384950711/runners
time=2024-07-19T18:24:43.521Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
time=2024-07-19T18:24:43.521Z level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-19T18:24:43.848Z level=INFO source=types.go:105 msg="inference compute" id=GPU-e97eebac-1c40-8e02-9f2e-83b4b7117af9 library=cuda compute=8.0 driver=12.2 name="NVIDIA A100-SXM4-80GB MIG 2g.20gb" total="19.5 GiB" available="19.4 GiB"
[GIN] 2024/07/19 - 18:28:50 | 200 |     195.543µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/19 - 18:28:50 | 200 |     177.047µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2024/07/19 - 18:28:57 | 200 |       45.46µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/19 - 18:28:57 | 200 |   37.127058ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-19T18:28:57.599Z level=INFO source=sched.go:701 msg="new model will fit in available VRAM in single GPU, loading" model=/home/jovyan/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa gpu=GPU-e97eebac-1c40-8e02-9f2e-83b4b7117af9 parallel=4 available=20787953664 required="6.2 GiB"
time=2024-07-19T18:28:57.603Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[19.4 GiB]" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-07-19T18:28:57.604Z level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama384950711/runners/cuda_v11/ollama_llama_server --model /home/jovyan/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 43208"
time=2024-07-19T18:28:57.605Z level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-19T18:28:57.605Z level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-19T18:28:57.606Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="140059309072384" timestamp=1721413737
INFO [main] system info | n_threads=128 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="140059309072384" timestamp=1721413737 total_threads=128
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="127" port="43208" tid="140059309072384" timestamp=1721413737
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /home/jovyan/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-07-19T18:28:57.857Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW) 
llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA A100-SXM4-80GB MIG 2g.20gb, compute capability 8.0, VMM: yes
llm_load_tensors: ggml ctx size =    0.27 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =   281.81 MiB
llm_load_tensors:      CUDA0 buffer size =  4155.99 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     2.02 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   560.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    24.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="140059309072384" timestamp=1721413744
time=2024-07-19T18:29:04.137Z level=INFO source=server.go:617 msg="llama runner started in 6.53 seconds"
[GIN] 2024/07/19 - 18:29:04 | 200 |  6.873128858s |       127.0.0.1 | POST     "/api/chat"

Detecting when layers will not be loaded to the GPU

I created a small bash script to find the smallest value of OLLAMA_NUM_PARALLEL at which layers start to be offloaded to the CPU (i.e. not all layers load to the GPU). The threshold is model dependent, and likely also depends on GPU memory size.

#!/bin/bash

# Starting value of OLLAMA_NUM_PARALLEL
NUM_PARALLEL=1

# Maximum value for the OLLAMA_NUM_PARALLEL
MAX_PARALLEL=100 # You can set a reasonable upper limit here to avoid infinite loops

# Log file to store the results
LOG_FILE="ollama_parallel_test_results.log"

# Function to check the log line
check_log() {
  while read -r line; do
    if [[ $line =~ llm_load_tensors:\ offloaded\ ([0-9]+)/([0-9]+)\ layers\ to\ GPU ]]; then
      LAYERS_OFFLOADED="${BASH_REMATCH[1]}"
      TOTAL_LAYERS="${BASH_REMATCH[2]}"
      echo "OLLAMA_NUM_PARALLEL=$NUM_PARALLEL: $LAYERS_OFFLOADED/$TOTAL_LAYERS layers offloaded to GPU" >> "$LOG_FILE"
      if [[ "$LAYERS_OFFLOADED" -eq "$TOTAL_LAYERS" ]]; then
        return 0
      else
        return 1
      fi
    fi
  done
  # No matching "offloaded ... layers" line was found; treat as failure so the loop stops.
  return 1
}

# Initialize log file
echo "OLLAMA_NUM_PARALLEL Test Results" > "$LOG_FILE"
echo "=================================" >> "$LOG_FILE"

# Loop to increment OLLAMA_NUM_PARALLEL and check logs
while [[ $NUM_PARALLEL -le $MAX_PARALLEL ]]; do
  export OLLAMA_NUM_PARALLEL=$NUM_PARALLEL

  # Run ollama serve in the background, logging to a file, and capture its PID.
  # (Backgrounding "ollama serve | tee ... &" would make $! the PID of tee, so the
  # server itself would never be killed and the port would stay bound.)
  ollama serve > serve_output.log 2>&1 &
  SERVE_PID=$!

  # Give ollama serve some time to start
  sleep 5

  # Start an interactive "ollama run phi3" session and send "/bye" to it after a short delay
  (sleep 5; echo "/bye") | ollama run phi3

  # Wait for ollama serve to process the request
  sleep 5

  # Check the logs
  check_log < serve_output.log

  if [[ $? -ne 0 ]]; then
    # Kill the ollama serve process
    kill $SERVE_PID
    break
  fi

  # Kill the ollama serve process
  kill $SERVE_PID

  # Give some time for the port to be freed
  sleep 5

  # Increment NUM_PARALLEL for next iteration
  NUM_PARALLEL=$((NUM_PARALLEL + 1))
done

echo "Max OLLAMA_NUM_PARALLEL with all layers offloaded to GPU: $((NUM_PARALLEL - 1))" >> "$LOG_FILE"

@traindi commented on GitHub (Jul 20, 2024):

Concurrent calling works with multiple models: when different models are loaded and queried simultaneously, they generate responses in parallel and both GPUs are maxed out. The problem is that calling the same model concurrently doesn't seem to fully leverage multiple GPUs; it appears to be CPU-bound (only one CPU core reaching 100%) with the GPUs hovering around 35%.
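
One way to check whether generation really is pinned to a single thread is to watch per-thread CPU usage of the runner process while the concurrent requests are in flight; a rough sketch using standard procps tools (the process name is taken from the server logs above):

# -H shows individual threads; pgrep -f matches the runner binary spawned by ollama serve.
top -H -p "$(pgrep -f ollama_llama_server | head -n 1)"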


@dhiltgen commented on GitHub (Jul 23, 2024):

@traindi based on your description, it sounds like the system is working as intended. Let me explain:

There are two levels of concurrency: the number of models loaded at once, and the number of parallel requests each individual model handles. When we load models, we strive to fit them on a single GPU if we can, as that typically gives the best performance. If a model has to be split over multiple GPUs, a lot of communication takes place across the PCI bus between the GPUs, which becomes a bottleneck and slows things down.

The OLLAMA_NUM_PARALLEL setting controls how many requests a single model will handle concurrently; the larger it is set, the more memory is needed for the context.
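
The logs posted above illustrate how quickly that memory grows. Assuming the default of 2048 context tokens per parallel slot:

parallel=64:  64 x 2048 = 131072 total context tokens                        (matches --ctx-size 131072)
              ~0.125 MiB of f16 KV cache per token x 131072 = 16384 MiB = 16 GiB  (matches memory.required.kv="16.0 GiB")
parallel=4:    4 x 2048 =   8192 total context tokens                        (matches --ctx-size 8192)
              ~0.125 MiB x 8192 = 1024 MiB of KV cache                       (matches the working run)

With roughly 16 GiB needed for the KV cache alone, only 15 of 33 layers fit in the 19.5 GiB MIG slice, which is why the parallel=64 run spills onto the CPU.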

You can use nvidia-smi while loading the models one at a time to watch the VRAM usage change on your GPUs.
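
A small sketch of doing that with standard nvidia-smi query flags, polling every couple of seconds:

# Poll per-GPU memory usage every 2 seconds while models are loaded and queried.
nvidia-smi --query-gpu=index,name,memory.used,memory.total --format=csv -l 2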

More information can be found here - https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests
