[GH-ISSUE #10214] Concurrency Does Not Scale When Increasing GPUs from 2x to 4x RTX 4090 serving qwq model #6701

Closed
opened 2026-04-12 18:26:10 -05:00 by GiteaMirror · 4 comments

Originally created by @jaybom on GitHub (Apr 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10214

What is the issue?

Upgrading from 2x RTX 4090 GPUs to 4x RTX 4090 GPUs did not increase the maximum number of concurrent requests Ollama could handle when serving the qwq model (32B parameters, Q4_K_M GGUF). Maximum concurrency remained capped at 6 requests, the same as with 2 GPUs, even after adjusting the relevant environment variables.

export OLLAMA_CONTEXT_LENGTH=4096
export OLLAMA_NUM_PARALLEL=12
export OLLAMA_KEEP_ALIVE=-1
export VLLM_USE_MODELSCOPE=True
export CUDA_VISIBLE_DEVICES=0,1,2,3
export GPU_DEVICE_ORDINAL=0,1,2,3
export OLLAMA_KEEP_ALIVE=-1
export OLLAMA_NUM_PARALLEL=14
export OLLAMA_MAX_LOADED_MODELS=4
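
Note that OLLAMA_KEEP_ALIVE and OLLAMA_NUM_PARALLEL are each exported twice above, so the later values (-1 and 14) are the ones the server sees, and VLLM_USE_MODELSCOPE is a vLLM setting that Ollama does not read. As a minimal sketch (assuming the default endpoint on localhost:11434), the effective settings can be double-checked against the running server rather than the shell:

# The "server config env=..." line printed at startup lists the values Ollama
# actually uses (OLLAMA_NUM_PARALLEL:14 in the log below).
curl -s http://localhost:11434/api/ps    # loaded models, their sizes and expiry
ps aux | grep 'ollama runner'            # runner flags, e.g. --parallel 14 --ctx-size 57344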

root@nb-ehrwatbpi8e8-0:~# ollama serve
2025/04/10 06:47:03 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES:0,1,2,3 GPU_DEVICE_ORDINAL:0,1,2,3 HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/epfs/model OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:14 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-04-10T06:47:03.750Z level=INFO source=images.go:432 msg="total blobs: 9"
time=2025-04-10T06:47:03.752Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-04-10T06:47:03.755Z level=INFO source=routes.go:1297 msg="Listening on [::]:11434 (version 0.6.2)"
time=2025-04-10T06:47:03.755Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-10T06:47:05.355Z level=INFO source=types.go:130 msg="inference compute" id=GPU-2f5c9445-c7c7-2306-4879-a91bb88ab1aa library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090 D" total="23.4 GiB" available="23.1 GiB"
time=2025-04-10T06:47:05.355Z level=INFO source=types.go:130 msg="inference compute" id=GPU-ca8ec7cc-1467-a7da-4d02-d53f12ce0563 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090 D" total="23.4 GiB" available="23.1 GiB"
time=2025-04-10T06:47:05.355Z level=INFO source=types.go:130 msg="inference compute" id=GPU-505aa4cd-1b6c-668c-11fc-64756a586015 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090 D" total="23.4 GiB" available="23.1 GiB"
time=2025-04-10T06:47:05.355Z level=INFO source=types.go:130 msg="inference compute" id=GPU-0bf696a9-71a4-227d-fd27-c28d8fbb77f5 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090 D" total="23.4 GiB" available="23.1 GiB"
time=2025-04-10T06:48:05.852Z level=WARN source=ggml.go:149 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-04-10T06:48:05.853Z level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-04-10T06:48:05.853Z level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-04-10T06:48:05.853Z level=INFO source=sched.go:731 msg="new model will fit in available VRAM, loading" model=/root/epfs/model/blobs/sha256-7ccc6415b2c7cb61ff8e01fec069d6f2fd6e213c509824d642c8a15c3d002e73 library=cuda parallel=14 required="58.3 GiB"
time=2025-04-10T06:48:06.879Z level=INFO source=server.go:105 msg="system memory" total="503.7 GiB" free="447.5 GiB" free_swap="0 B"
time=2025-04-10T06:48:06.880Z level=WARN source=ggml.go:149 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-04-10T06:48:06.880Z level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-04-10T06:48:06.880Z level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-04-10T06:48:06.881Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=65 layers.split=17,16,16,16 memory.available="[23.1 GiB 23.1 GiB 23.1 GiB 23.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="58.3 GiB" memory.required.partial="58.3 GiB" memory.required.kv="14.0 GiB" memory.required.allocations="[15.0 GiB 14.4 GiB 14.4 GiB 14.4 GiB]" memory.weights.total="17.5 GiB" memory.weights.repeating="17.5 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="5.6 GiB" memory.graph.partial="5.6 GiB"
llama_model_loader: loaded meta data with 33 key-value pairs and 771 tensors from /root/epfs/model/blobs/sha256-7ccc6415b2c7cb61ff8e01fec069d6f2fd6e213c509824d642c8a15c3d002e73 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = QwQ 32B
llama_model_loader: - kv 3: general.basename str = QwQ
llama_model_loader: - kv 4: general.size_label str = 32B
llama_model_loader: - kv 5: general.license str = apache-2.0
llama_model_loader: - kv 6: general.license.link str = https://huggingface.co/Qwen/QWQ-32B/b...
llama_model_loader: - kv 7: general.base_model.count u32 = 1
llama_model_loader: - kv 8: general.base_model.0.name str = Qwen2.5 32B
llama_model_loader: - kv 9: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 10: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-32B
llama_model_loader: - kv 11: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 12: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 13: qwen2.block_count u32 = 64
llama_model_loader: - kv 14: qwen2.context_length u32 = 40960
llama_model_loader: - kv 15: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 16: qwen2.feed_forward_length u32 = 27648
llama_model_loader: - kv 17: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 18: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 19: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 20: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 22: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 28: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 29: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 30: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 31: general.quantization_version u32 = 2
llama_model_loader: - kv 32: general.file_type u32 = 15
llama_model_loader: - type f32: 321 tensors
llama_model_loader: - type q4_K: 385 tensors
llama_model_loader: - type q6_K: 65 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 18.48 GiB (4.85 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen2
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 32.76 B
print_info: general.name = QwQ 32B
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-04-10T06:48:07.177Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/epfs/model/blobs/sha256-7ccc6415b2c7cb61ff8e01fec069d6f2fd6e213c509824d642c8a15c3d002e73 --ctx-size 57344 --batch-size 512 --n-gpu-layers 65 --threads 64 --parallel 14 --tensor-split 17,16,16,16 --port 38149"
time=2025-04-10T06:48:07.178Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-04-10T06:48:07.178Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-10T06:48:07.178Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-10T06:48:07.197Z level=INFO source=runner.go:846 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090 D, compute capability 8.9, VMM: yes
Device 1: NVIDIA GeForce RTX 4090 D, compute capability 8.9, VMM: yes
Device 2: NVIDIA GeForce RTX 4090 D, compute capability 8.9, VMM: yes
Device 3: NVIDIA GeForce RTX 4090 D, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
time=2025-04-10T06:48:08.138Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 CUDA.3.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.3.USE_GRAPHS=1 CUDA.3.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-04-10T06:48:08.139Z level=INFO source=runner.go:906 msg="Server listening on 127.0.0.1:38149"
time=2025-04-10T06:48:08.182Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090 D) - 23640 MiB free
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 4090 D) - 23640 MiB free
llama_model_load_from_file_impl: using device CUDA2 (NVIDIA GeForce RTX 4090 D) - 23640 MiB free
llama_model_load_from_file_impl: using device CUDA3 (NVIDIA GeForce RTX 4090 D) - 23640 MiB free
llama_model_loader: loaded meta data with 33 key-value pairs and 771 tensors from /root/epfs/model/blobs/sha256-7ccc6415b2c7cb61ff8e01fec069d6f2fd6e213c509824d642c8a15c3d002e73 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = QwQ 32B
llama_model_loader: - kv 3: general.basename str = QwQ
llama_model_loader: - kv 4: general.size_label str = 32B
llama_model_loader: - kv 5: general.license str = apache-2.0
llama_model_loader: - kv 6: general.license.link str = https://huggingface.co/Qwen/QWQ-32B/b...
llama_model_loader: - kv 7: general.base_model.count u32 = 1
llama_model_loader: - kv 8: general.base_model.0.name str = Qwen2.5 32B
llama_model_loader: - kv 9: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 10: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-32B
llama_model_loader: - kv 11: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 12: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 13: qwen2.block_count u32 = 64
llama_model_loader: - kv 14: qwen2.context_length u32 = 40960
llama_model_loader: - kv 15: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 16: qwen2.feed_forward_length u32 = 27648
llama_model_loader: - kv 17: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 18: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 19: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 20: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 22: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 28: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 29: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 30: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 31: general.quantization_version u32 = 2
llama_model_loader: - kv 32: general.file_type u32 = 15
llama_model_loader: - type f32: 321 tensors
llama_model_loader: - type q4_K: 385 tensors
llama_model_loader: - type q6_K: 65 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 18.48 GiB (4.85 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen2
print_info: vocab_only = 0
print_info: n_ctx_train = 40960
print_info: n_embd = 5120
print_info: n_layer = 64
print_info: n_head = 40
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 5
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: n_ff = 27648
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 40960
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 32B
print_info: model params = 32.76 B
print_info: general.name = QwQ 32B
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 64 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 65/65 layers to GPU
load_tensors: CUDA0 model buffer size = 4844.72 MiB
load_tensors: CUDA1 model buffer size = 4366.53 MiB
load_tensors: CUDA2 model buffer size = 4366.53 MiB
load_tensors: CUDA3 model buffer size = 4930.57 MiB
load_tensors: CPU_Mapped model buffer size = 417.66 MiB
llama_init_from_model: n_seq_max = 14
llama_init_from_model: n_ctx = 57344
llama_init_from_model: n_ctx_per_seq = 4096
llama_init_from_model: n_batch = 7168
llama_init_from_model: n_ubatch = 512
llama_init_from_model: flash_attn = 0
llama_init_from_model: freq_base = 1000000.0
llama_init_from_model: freq_scale = 1
llama_init_from_model: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 57344, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 3808.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 3584.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 3584.00 MiB
llama_kv_cache_init: CUDA3 KV buffer size = 3360.00 MiB
llama_init_from_model: KV self size = 14336.00 MiB, K (f16): 7168.00 MiB, V (f16): 7168.00 MiB
llama_init_from_model: CUDA_Host output buffer size = 8.39 MiB
llama_init_from_model: pipeline parallelism enabled (n_copies=4)
llama_init_from_model: CUDA0 compute buffer size = 5008.01 MiB
llama_init_from_model: CUDA1 compute buffer size = 5008.01 MiB
llama_init_from_model: CUDA2 compute buffer size = 5008.01 MiB
llama_init_from_model: CUDA3 compute buffer size = 5008.02 MiB
llama_init_from_model: CUDA_Host compute buffer size = 458.02 MiB
llama_init_from_model: graph nodes = 2246
llama_init_from_model: graph splits = 5
time=2025-04-10T06:48:14.456Z level=INFO source=server.go:619 msg="llama runner started in 7.28 seconds"

Relevant log output


OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.6.2

GiteaMirror added the bug label 2026-04-12 18:26:10 -05:00

@rick-github commented on GitHub (Apr 10, 2025):

> The maximum concurrency remained capped at 6 requests,

How are you measuring this?


@jaybom commented on GitHub (Apr 11, 2025):

@rick-github Thanks for asking. I measured the concurrency limit using Dify: I opened multiple chat windows/tabs in Dify, each connected to the Ollama endpoint (qwq model), and submitted prompts from all of them simultaneously.

I gradually increased the number of windows sending concurrent requests and watched the server's responsiveness and success rate. Performance degraded sharply and consistently (very long response times, timeouts, or errors) once more than 6 requests were in flight.

This ~6-request ceiling appeared on the initial 2x RTX 4090 setup and persisted after upgrading to 4x RTX 4090 with OLLAMA_NUM_PARALLEL set to 14.
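
For a client-independent check, a rough load-test sketch (assuming the server is reachable at localhost:11434 and the model tag is qwq; N and the prompt are placeholders to adjust) is to fire N simultaneous non-streaming requests straight at the API and time each one:

# Hypothetical load test that bypasses Dify: N concurrent /api/generate calls,
# each reporting its own wall-clock time.
N=12
for i in $(seq 1 $N); do
  ( t0=$(date +%s.%N)
    curl -s http://localhost:11434/api/generate \
      -d '{"model":"qwq","prompt":"Write a haiku about GPUs.","stream":false}' > /dev/null
    t1=$(date +%s.%N)
    echo "request $i: $(echo "$t1 - $t0" | bc) s" ) &
done
wait

If latency also climbs sharply past 6 requests with this direct test, the ceiling is on the serving side rather than in Dify.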


@rick-github commented on GitHub (Apr 11, 2025):

This might be a limitation of Dify; in my experience, Ollama scales past 6 concurrent requests. However, that test was done with a single 4070, so I'll redo it with multiple cards and see whether I can duplicate your findings.

![Image](https://github.com/user-attachments/assets/1c6c3b30-93ea-46c7-95fc-681d83776267)


@rick-github commented on GitHub (Apr 14, 2025):

2xA100 40G, OLLAMA_SCHED_SPREAD=1, OLLAMA_NUM_PARALLEL=14, OLLAMA_CONTEXT_LENGTH=4096, ollama:0.6.5, qwq:32b-q4_K_M

![Image](https://github.com/user-attachments/assets/5b47db9f-9e26-4a77-8e5d-168f1616afe6)

The dip at 8 happened in both runs I did, so that bears further investigation. Other than that, tps scales as parallelism increases.
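
This kind of sweep can be approximated with a few lines of shell (a sketch, not the exact script behind the plot above; assumes jq is installed and the model tag is qwq:32b-q4_K_M):

# At each parallelism level, send that many requests at once and sum the
# per-request throughput derived from the eval_count and eval_duration fields
# returned by /api/generate (eval_duration is reported in nanoseconds).
for n in 1 2 4 6 8 10 12 14; do
  rm -f /tmp/sweep_"$n".*.json
  for i in $(seq 1 "$n"); do
    curl -s http://localhost:11434/api/generate \
      -d '{"model":"qwq:32b-q4_K_M","prompt":"Count to fifty.","stream":false}' \
      > /tmp/sweep_"$n"."$i".json &
  done
  wait
  cat /tmp/sweep_"$n".*.json | jq -s -r --arg n "$n" \
    'map(.eval_count / (.eval_duration / 1e9)) | add | "parallel=\($n) aggregate tok/s \(.)"'
done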

Reference: github-starred/ollama#6701