[GH-ISSUE #14510] ollama 0.17.4+: qwen3.5 35b/27b only allow 1 active request. Please add parallel request support for the qwen35/qwen35moe architectures #55927

Closed
opened 2026-04-29 09:57:43 -05:00 by GiteaMirror · 4 comments

Originally created by @tonyltl on GitHub (Feb 28, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14510

What is the issue?

Environment

  • Ollama: 0.17.4
  • GPU: NVIDIA RTX PRO 4000 Blackwell 24GB
  • Model: qwen3.5:27b / qwen3.5:35b (Q4_K_M)

Issue

When loading qwen3.5 models, scheduler logs:
"model architecture does not currently support parallel requests" architecture=qwen35moe

Even with OLLAMA_NUM_PARALLEL=8, actual Parallel=1, causing severe throughput degradation.
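
For reference, here is roughly how the degradation can be observed: fire several concurrent requests at the local API and time them. This is only a sketch — the model name, prompt, and host are placeholders, and the timings will vary with hardware — but with Parallel=1 the requests complete strictly one after another instead of overlapping.

package main

// Rough reproduction sketch: send n concurrent /api/generate requests to a
// local ollama instance and report when each finishes. With Parallel=1 the
// completion times are roughly n times the single-request latency.
import (
    "bytes"
    "fmt"
    "net/http"
    "sync"
    "time"
)

func main() {
    body := []byte(`{"model":"qwen3.5:35b","prompt":"Say hi","stream":false}`)
    const n = 8
    var wg sync.WaitGroup
    start := time.Now()
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            resp, err := http.Post("http://localhost:11434/api/generate",
                "application/json", bytes.NewReader(body))
            if err != nil {
                fmt.Println("request", i, "failed:", err)
                return
            }
            resp.Body.Close()
            fmt.Printf("request %d done after %s\n", i, time.Since(start))
        }(i)
    }
    wg.Wait()
    fmt.Println("all", n, "requests finished in", time.Since(start))
}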

Comparison

  • qwen3:30b (architecture=qwen3moe) → Parallel=8 works
  • qwen3.5:35b (architecture=qwen35moe) → Parallel=1 forced

Request

Please add qwen35* architectures to the parallel-request compatible list,
or provide a config override to force enable parallel for testing.
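
For illustration, a minimal sketch of the kind of change being requested, mirroring the sched.go exclusion list quoted in a comment below with "qwen35" and "qwen35moe" dropped. This is not a verified patch; whether the qwen3.5 cache layout is actually safe for batched requests would still need to be confirmed upstream.

package main

import (
    "fmt"
    "slices"
)

// effectiveParallel mirrors the scheduler gate quoted in a comment below:
// model families on the exclusion list are forced to a single slot. Here
// "qwen35" and "qwen35moe" have been removed from the list to illustrate
// the requested change; treat it as a sketch, not a drop-in fix.
func effectiveParallel(modelFamily string, requested int) int {
    excluded := []string{
        "mllama", "qwen3vl", "qwen3vlmoe",
        "qwen3next", "lfm2", "lfm2moe",
        "nemotron_h", "nemotron_h_moe",
    }
    if slices.Contains(excluded, modelFamily) && requested != 1 {
        return 1
    }
    return requested
}

func main() {
    fmt.Println(effectiveParallel("qwen35moe", 8)) // 8 with the change; 1 on 0.17.4
    fmt.Println(effectiveParallel("mllama", 8))    // still forced to 1
}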

Relevant log output

(base) ubuntu@ubuntu-desktop:~/ollama$ docker logs ollama | grep "concurrent"
time=2026-02-28T08:33:56.729Z level=INFO source=routes.go:1663 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:2 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:8 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-02-28T08:33:56.729Z level=INFO source=routes.go:1665 msg="Ollama cloud disabled: false"
time=2026-02-28T08:33:56.734Z level=INFO source=images.go:473 msg="total blobs: 50"
time=2026-02-28T08:33:56.735Z level=INFO source=images.go:480 msg="total unused blobs removed: 0"
time=2026-02-28T08:33:56.736Z level=INFO source=routes.go:1718 msg="Listening on [::]:11434 (version 0.17.4)"
time=2026-02-28T08:33:56.737Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-02-28T08:33:56.738Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39501"
time=2026-02-28T08:33:56.951Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42957"
time=2026-02-28T08:33:57.151Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-02-28T08:33:57.152Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44783"
time=2026-02-28T08:33:57.152Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 41311"
time=2026-02-28T08:33:57.345Z level=INFO source=types.go:42 msg="inference compute" id=GPU-55d681de-539a-0846-58f0-f0dc55bdffaf filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA RTX PRO 4000 Blackwell" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:43:00.0 type=discrete total="23.9 GiB" available="23.4 GiB"
time=2026-02-28T08:33:57.345Z level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="23.9 GiB" default_num_ctx=32768
time=2026-02-28T08:34:26.605Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 33051"
time=2026-02-28T08:34:26.876Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-02-28T08:34:26.991Z level=WARN source=sched.go:452 msg="model architecture does not currently support parallel requests" architecture=qwen35moe
time=2026-02-28T08:34:27.054Z level=INFO source=server.go:247 msg="enabling flash attention"
time=2026-02-28T08:34:27.054Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a --port 40137"
time=2026-02-28T08:34:27.055Z level=INFO source=sched.go:491 msg="system memory" total="62.5 GiB" free="62.3 GiB" free_swap="16.0 GiB"
time=2026-02-28T08:34:27.055Z level=INFO source=sched.go:498 msg="gpu memory" id=GPU-55d681de-539a-0846-58f0-f0dc55bdffaf library=CUDA available="23.0 GiB" free="23.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-28T08:34:27.055Z level=INFO source=server.go:757 msg="loading model" "model layers"=41 requested=-1
time=2026-02-28T08:34:27.075Z level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-02-28T08:34:27.075Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:40137"
time=2026-02-28T08:34:27.078Z level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType:q8_0 NumThreads:12 GPULayers:41[ID:GPU-55d681de-539a-0846-58f0-f0dc55bdffaf Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T08:34:27.148Z level=INFO source=ggml.go:136 msg="" architecture=qwen35moe file_type=Q4_K_M name="" description="" num_tensors=1959 num_key_values=56
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA RTX PRO 4000 Blackwell, compute capability 12.0, VMM: yes, ID: GPU-55d681de-539a-0846-58f0-f0dc55bdffaf
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-02-28T08:34:27.346Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-02-28T08:35:18.984Z level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType:q8_0 NumThreads:12 GPULayers:39[ID:GPU-55d681de-539a-0846-58f0-f0dc55bdffaf Layers:39(1..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T08:35:19.916Z level=INFO source=runner.go:1284 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType:q8_0 NumThreads:12 GPULayers:39[ID:GPU-55d681de-539a-0846-58f0-f0dc55bdffaf Layers:39(1..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T08:35:21.014Z level=INFO source=runner.go:1284 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType:q8_0 NumThreads:12 GPULayers:39[ID:GPU-55d681de-539a-0846-58f0-f0dc55bdffaf Layers:39(1..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T08:35:21.015Z level=INFO source=ggml.go:482 msg="offloading 39 repeating layers to GPU"
time=2026-02-28T08:35:21.015Z level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-02-28T08:35:21.015Z level=INFO source=ggml.go:494 msg="offloaded 39/41 layers to GPU"
time=2026-02-28T08:35:21.015Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="19.8 GiB"
time=2026-02-28T08:35:21.015Z level=INFO source=device.go:245 msg="model weights" device=CPU size="2.4 GiB"
time=2026-02-28T08:35:21.015Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.8 GiB"
time=2026-02-28T08:35:21.015Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="52.3 MiB"
time=2026-02-28T08:35:21.015Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="819.1 MiB"
time=2026-02-28T08:35:21.015Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="630.8 MiB"
time=2026-02-28T08:35:21.015Z level=INFO source=device.go:272 msg="total memory" size="25.5 GiB"
time=2026-02-28T08:35:21.015Z level=INFO source=sched.go:566 msg="loaded runners" count=1
time=2026-02-28T08:35:21.015Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-02-28T08:35:21.016Z level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
time=2026-02-28T08:35:25.531Z level=INFO source=server.go:1388 msg="llama runner started in 58.48 seconds"
ggml_backend_cuda_device_get_memory device GPU-55d681de-539a-0846-58f0-f0dc55bdffaf utilizing NVML memory reporting free: 728367104 total: 25655508992
time=2026-02-28T09:07:59.592Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-02-28T09:07:59.628Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-55d681de-539a-0846-58f0-f0dc55bdffaf library=CUDA total="23.9 GiB" available="694.6 MiB"
time=2026-02-28T09:07:59.701Z level=WARN source=sched.go:452 msg="model architecture does not currently support parallel requests" architecture=qwen35
time=2026-02-28T09:07:59.764Z level=INFO source=server.go:247 msg="enabling flash attention"
time=2026-02-28T09:07:59.764Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-7935de6e08f9444536d0edcacf19d2166b34bef8ddb4ac7ce9263ff5cad0693b --port 36681"
time=2026-02-28T09:07:59.765Z level=INFO source=sched.go:491 msg="system memory" total="62.5 GiB" free="47.5 GiB" free_swap="16.0 GiB"
time=2026-02-28T09:07:59.765Z level=INFO source=sched.go:498 msg="gpu memory" id=GPU-55d681de-539a-0846-58f0-f0dc55bdffaf library=CUDA available="237.6 MiB" free="694.6 MiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-28T09:07:59.765Z level=INFO source=server.go:757 msg="loading model" "model layers"=65 requested=-1
time=2026-02-28T09:07:59.786Z level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-02-28T09:07:59.786Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:36681"
time=2026-02-28T09:07:59.789Z level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType:q8_0 NumThreads:12 GPULayers:65[ID:GPU-55d681de-539a-0846-58f0-f0dc55bdffaf Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T09:07:59.857Z level=INFO source=ggml.go:136 msg="" architecture=qwen35 file_type=Q4_K_M name="" description="" num_tensors=1307 num_key_values=53
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA RTX PRO 4000 Blackwell, compute capability 12.0, VMM: yes, ID: GPU-55d681de-539a-0846-58f0-f0dc55bdffaf
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-02-28T09:07:59.970Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-02-28T09:08:06.488Z level=INFO source=server.go:1029 msg="model requires more gpu memory than is currently available, evicting a model to make space" "loaded layers"=0
time=2026-02-28T09:08:06.489Z level=INFO source=runner.go:1284 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:Disabled KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T09:08:06.489Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="15.5 GiB"
time=2026-02-28T09:08:06.489Z level=INFO source=device.go:245 msg="model weights" device=CPU size="710.2 MiB"
time=2026-02-28T09:08:06.489Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="4.7 GiB"
time=2026-02-28T09:08:06.489Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="1.1 GiB"
time=2026-02-28T09:08:06.489Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="216.0 MiB"
time=2026-02-28T09:08:06.489Z level=INFO source=device.go:272 msg="total memory" size="22.2 GiB"
ggml_backend_cuda_device_get_memory device GPU-55d681de-539a-0846-58f0-f0dc55bdffaf utilizing NVML memory reporting free: 450166784 total: 25655508992
time=2026-02-28T09:08:09.773Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 41193"
time=2026-02-28T09:08:09.903Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35435"
time=2026-02-28T09:08:10.029Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-02-28T09:08:10.129Z level=WARN source=sched.go:452 msg="model architecture does not currently support parallel requests" architecture=qwen35
time=2026-02-28T09:08:10.129Z level=INFO source=sched.go:491 msg="system memory" total="62.5 GiB" free="50.4 GiB" free_swap="16.0 GiB"
time=2026-02-28T09:08:10.129Z level=INFO source=sched.go:498 msg="gpu memory" id=GPU-55d681de-539a-0846-58f0-f0dc55bdffaf library=CUDA available="22.7 GiB" free="23.1 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-28T09:08:10.129Z level=INFO source=server.go:757 msg="loading model" "model layers"=65 requested=-1
time=2026-02-28T09:08:10.130Z level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType:q8_0 NumThreads:12 GPULayers:65[ID:GPU-55d681de-539a-0846-58f0-f0dc55bdffaf Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T09:08:11.020Z level=INFO source=runner.go:1284 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType:q8_0 NumThreads:12 GPULayers:65[ID:GPU-55d681de-539a-0846-58f0-f0dc55bdffaf Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T09:08:12.270Z level=INFO source=runner.go:1284 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType:q8_0 NumThreads:12 GPULayers:65[ID:GPU-55d681de-539a-0846-58f0-f0dc55bdffaf Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T09:08:12.270Z level=INFO source=ggml.go:482 msg="offloading 64 repeating layers to GPU"
time=2026-02-28T09:08:12.270Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2026-02-28T09:08:12.270Z level=INFO source=ggml.go:494 msg="offloaded 65/65 layers to GPU"
time=2026-02-28T09:08:12.270Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="15.5 GiB"
time=2026-02-28T09:08:12.271Z level=INFO source=device.go:245 msg="model weights" device=CPU size="710.2 MiB"
time=2026-02-28T09:08:12.271Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="4.7 GiB"
time=2026-02-28T09:08:12.271Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="1.1 GiB"
time=2026-02-28T09:08:12.271Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="216.0 MiB"
time=2026-02-28T09:08:12.271Z level=INFO source=device.go:272 msg="total memory" size="22.2 GiB"
time=2026-02-28T09:08:12.271Z level=INFO source=sched.go:566 msg="loaded runners" count=1
time=2026-02-28T09:08:12.271Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-02-28T09:08:12.271Z level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
time=2026-02-28T09:08:17.290Z level=INFO source=server.go:1388 msg="llama runner started in 17.52 seconds"
time=2026-02-28T09:10:29.363Z level=INFO source=server.go:1568 msg="aborting completion request due to client closing the connection"
ggml_backend_cuda_device_get_memory device GPU-55d681de-539a-0846-58f0-f0dc55bdffaf utilizing NVML memory reporting free: 610926592 total: 25655508992
time=2026-02-28T09:13:39.206Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-02-28T09:13:39.222Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-55d681de-539a-0846-58f0-f0dc55bdffaf library=CUDA total="23.9 GiB" available="582.6 MiB"
time=2026-02-28T09:13:39.290Z level=INFO source=server.go:247 msg="enabling flash attention"
time=2026-02-28T09:13:39.290Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-58574f2e94b99fb9e4391408b57e5aeaaaec10f6384e9a699fc2cb43a5c8eabf --port 36467"
time=2026-02-28T09:13:39.290Z level=INFO source=sched.go:491 msg="system memory" total="62.5 GiB" free="33.6 GiB" free_swap="16.0 GiB"
time=2026-02-28T09:13:39.290Z level=INFO source=sched.go:498 msg="gpu memory" id=GPU-55d681de-539a-0846-58f0-f0dc55bdffaf library=CUDA available="125.6 MiB" free="582.6 MiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-28T09:13:39.290Z level=INFO source=server.go:757 msg="loading model" "model layers"=49 requested=-1
time=2026-02-28T09:13:39.311Z level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-02-28T09:13:39.311Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:36467"
time=2026-02-28T09:13:39.314Z level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:8 BatchSize:512 FlashAttention:Enabled KvSize:262144 KvCacheType:q8_0 NumThreads:12 GPULayers:49[ID:GPU-55d681de-539a-0846-58f0-f0dc55bdffaf Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T09:13:39.350Z level=INFO source=ggml.go:136 msg="" architecture=qwen3moe file_type=Q4_K_M name="Qwen3 30B A3B Thinking 2507" description="" num_tensors=579 num_key_values=33
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA RTX PRO 4000 Blackwell, compute capability 12.0, VMM: yes, ID: GPU-55d681de-539a-0846-58f0-f0dc55bdffaf
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-02-28T09:13:39.465Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-02-28T09:13:51.835Z level=INFO source=server.go:1029 msg="model requires more gpu memory than is currently available, evicting a model to make space" "loaded layers"=0
time=2026-02-28T09:13:51.835Z level=INFO source=runner.go:1284 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:Disabled KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T09:13:51.835Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="17.1 GiB"
time=2026-02-28T09:13:51.835Z level=INFO source=device.go:245 msg="model weights" device=CPU size="166.9 MiB"
time=2026-02-28T09:13:51.835Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="12.8 GiB"
time=2026-02-28T09:13:51.835Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="1.3 GiB"
time=2026-02-28T09:13:51.835Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="4.0 MiB"
time=2026-02-28T09:13:51.835Z level=INFO source=device.go:272 msg="total memory" size="31.3 GiB"
ggml_backend_cuda_device_get_memory device GPU-55d681de-539a-0846-58f0-f0dc55bdffaf utilizing NVML memory reporting free: 332726272 total: 25655508992
time=2026-02-28T09:13:52.118Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36551"
time=2026-02-28T09:13:52.240Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45653"
time=2026-02-28T09:13:52.366Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-02-28T09:13:52.416Z level=INFO source=sched.go:491 msg="system memory" total="62.5 GiB" free="34.4 GiB" free_swap="16.0 GiB"
time=2026-02-28T09:13:52.416Z level=INFO source=sched.go:498 msg="gpu memory" id=GPU-55d681de-539a-0846-58f0-f0dc55bdffaf library=CUDA available="22.7 GiB" free="23.1 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-28T09:13:52.416Z level=INFO source=server.go:757 msg="loading model" "model layers"=49 requested=-1
time=2026-02-28T09:13:52.417Z level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:8 BatchSize:512 FlashAttention:Enabled KvSize:262144 KvCacheType:q8_0 NumThreads:12 GPULayers:34[ID:GPU-55d681de-539a-0846-58f0-f0dc55bdffaf Layers:34(14..47)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T09:13:53.080Z level=INFO source=runner.go:1284 msg=load request="{Operation:alloc LoraPath:[] Parallel:8 BatchSize:512 FlashAttention:Enabled KvSize:262144 KvCacheType:q8_0 NumThreads:12 GPULayers:34[ID:GPU-55d681de-539a-0846-58f0-f0dc55bdffaf Layers:34(14..47)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T09:13:55.352Z level=INFO source=runner.go:1284 msg=load request="{Operation:commit LoraPath:[] Parallel:8 BatchSize:512 FlashAttention:Enabled KvSize:262144 KvCacheType:q8_0 NumThreads:12 GPULayers:34[ID:GPU-55d681de-539a-0846-58f0-f0dc55bdffaf Layers:34(14..47)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T09:13:55.352Z level=INFO source=ggml.go:482 msg="offloading 34 repeating layers to GPU"
time=2026-02-28T09:13:55.352Z level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-02-28T09:13:55.352Z level=INFO source=ggml.go:494 msg="offloaded 34/49 layers to GPU"
time=2026-02-28T09:13:55.352Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="11.9 GiB"
time=2026-02-28T09:13:55.352Z level=INFO source=device.go:245 msg="model weights" device=CPU size="5.4 GiB"
time=2026-02-28T09:13:55.352Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="9.0 GiB"
time=2026-02-28T09:13:55.352Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="3.7 GiB"
time=2026-02-28T09:13:55.352Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="1.6 GiB"
time=2026-02-28T09:13:55.352Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="4.7 MiB"
time=2026-02-28T09:13:55.352Z level=INFO source=device.go:272 msg="total memory" size="31.6 GiB"
time=2026-02-28T09:13:55.352Z level=INFO source=sched.go:566 msg="loaded runners" count=1
time=2026-02-28T09:13:55.352Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-02-28T09:13:55.353Z level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
time=2026-02-28T09:14:01.623Z level=INFO source=server.go:1388 msg="llama runner started in 22.33 seconds"

OS

No response

GPU

Nvidia

CPU

Intel

Ollama version

0.17.4

GiteaMirror added the bug label 2026-04-29 09:57:43 -05:00

@ipa2800 commented on GitHub (Mar 4, 2026):

Seeing the same problem with Qwen 3.5. The relevant check in the scheduler:

// Some architectures are not safe with num_parallel > 1.
// ref: https://github.com/ollama/ollama/issues/4165
if slices.Contains([]string{"mllama", "qwen3vl", "qwen3vlmoe", "qwen35", "qwen35moe", "qwen3next", "lfm2", "lfm2moe", "nemotron_h", "nemotron_h_moe"}, req.model.Config.ModelFamily) && numParallel != 1 {
    numParallel = 1
    slog.Warn("model architecture does not currently support parallel requests", "architecture", req.model.Config.ModelFamily)
}
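
For the "config override" route requested in the issue, a purely hypothetical sketch of how this gate could be made overridable for testing. The OLLAMA_FORCE_PARALLEL variable below does not exist in any ollama release; it is invented here only to illustrate the idea.

package main

import (
    "fmt"
    "os"
    "slices"
)

// numParallelFor layers a hypothetical opt-out on top of the exclusion list
// quoted above. OLLAMA_FORCE_PARALLEL is not a real ollama setting as of
// 0.17.x; it is shown only to sketch a "force enable for testing" override.
func numParallelFor(modelFamily string, requested int) int {
    excluded := []string{
        "mllama", "qwen3vl", "qwen3vlmoe", "qwen35", "qwen35moe",
        "qwen3next", "lfm2", "lfm2moe", "nemotron_h", "nemotron_h_moe",
    }
    if slices.Contains(excluded, modelFamily) && requested != 1 {
        if os.Getenv("OLLAMA_FORCE_PARALLEL") == "1" {
            return requested // caller explicitly accepts the risk
        }
        return 1
    }
    return requested
}

func main() {
    os.Setenv("OLLAMA_FORCE_PARALLEL", "1")
    fmt.Println(numParallelFor("qwen35moe", 8)) // 8 when the override is set
}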


@tonyltl commented on GitHub (Mar 6, 2026):

ollama 0.17.5, same issue.


@scmarvin commented on GitHub (Mar 6, 2026):

Please note that this issue was erroneously closed; it is unrelated to ticket 4165, which was created in May 2024, well before Qwen v3.5 existed. My testing indicates that this is an ongoing problem with the current version of Qwen (v3.5) on the current version of Ollama (v0.17.7), and that Qwen's Ollama parallelism integration is fully functional with the previous version of Qwen (v3). Further research indicates that the model itself does support parallelism; it is Ollama that is degrading it. Kindly reopen and address this issue accordingly.


@KevinTurnbull commented on GitHub (Mar 8, 2026):

Confirmed -- also a problem with 0.17.7 on my end.

Notably, I'm using an older Nvidia Quadro RTX 5000 (circa 2018).

That said, this could definitely be related to #4165, since Qwen3 was not multimodal and Qwen3.5 now has image understanding.

Reference: github-starred/ollama#55927