[GH-ISSUE #13580] High GPU-util imbalance with 4× Tesla T4 when serving qwen3:30b (Ollama 0.13.1) — single GPU hits 100% during long-context inference #8941

Open
opened 2026-04-12 21:45:50 -05:00 by GiteaMirror · 1 comment

Originally created by @arTG0D on GitHub (Dec 29, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13580

When serving qwen3:30b with Ollama 0.13.1 inside Docker on a machine with four NVIDIA Tesla T4 GPUs (16 GB each), the model occupies ~10 GB on each GPU, but during a single long-context inference one GPU's GPU-UTIL spikes to 100% while the other three stay almost idle. During the decode/output phase utilization evens out (≈30% on each card). I would expect the long-context (prefill) work to be spread more evenly across the GPUs, or at least an explanation of why the scheduler concentrates computation on one card.

```shell
docker run -d \
  --name ollama \
  --gpus '"device=0,1,2,3"' \
  -p 11434:11434 \
  -v /extra/ollama/models:/root/.ollama/models \
  -e OLLAMA_NEW_ENGINE=1 \
  -e OLLAMA_FLASH_ATTENTION=1 \
  -e OLLAMA_KV_CACHE_TYPE=q8_0 \
  -e OLLAMA_CONTEXT_LENGTH=262144 \
  -e OLLAMA_KEEP_ALIVE=3600 \
  ollama/ollama:0.13.1

docker exec ollama ollama run qwen3:30b
```
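To see the imbalance directly, per-GPU utilization and memory can be sampled from the host while a long prompt is being processed. This snippet is illustrative and assumes `nvidia-smi` is available on the host; it is not part of the original report:

```shell
# Sample per-GPU utilization and memory once per second while a
# long-context request is in flight.
nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv -l 1

# Alternatively, device-monitor mode prints per-GPU SM/memory
# utilization as a compact table.
nvidia-smi dmon -s u -i 0,1,2,3
```

During prefill one GPU reports ~100% GPU-UTIL while the others sit near idle; during decode all four hover around 30%, matching the behavior described above.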


@arTG0D commented on GitHub (Dec 29, 2025):

```
time=2025-12-29T09:34:40.101Z level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:262144 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:true OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-12-29T09:34:40.106Z level=INFO source=images.go:522 msg="total blobs: 23"
time=2025-12-29T09:34:40.107Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-12-29T09:34:40.107Z level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.1)"
time=2025-12-29T09:34:40.108Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-12-29T09:34:40.108Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34120"
time=2025-12-29T09:34:41.589Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43062"
time=2025-12-29T09:34:41.713Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1"
time=2025-12-29T09:34:41.714Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36989"
time=2025-12-29T09:34:41.714Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38066"
time=2025-12-29T09:34:41.714Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 41908"
time=2025-12-29T09:34:41.714Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38741"
time=2025-12-29T09:34:45.240Z level=INFO source=types.go:42 msg="inference compute" id=GPU-6f7dc368-04a9-96de-3a39-b009aaf6caa7 filter_id="" library=CUDA compute=7.5 name=CUDA0 description="Tesla T4" libdirs=ollama,cuda_v12 driver=12.4 pci_id=0000:3d:00.0 type=discrete total="15.0 GiB" available="14.6 GiB"
time=2025-12-29T09:34:45.240Z level=INFO source=types.go:42 msg="inference compute" id=GPU-f60e98d0-4d14-3c26-c9ef-6cc93673df93 filter_id="" library=CUDA compute=7.5 name=CUDA1 description="Tesla T4" libdirs=ollama,cuda_v12 driver=12.4 pci_id=0000:3e:00.0 type=discrete total="15.0 GiB" available="14.5 GiB"
time=2025-12-29T09:34:45.240Z level=INFO source=types.go:42 msg="inference compute" id=GPU-b6340a31-e6f7-be77-25c2-46797bf761a1 filter_id="" library=CUDA compute=7.5 name=CUDA2 description="Tesla T4" libdirs=ollama,cuda_v12 driver=12.4 pci_id=0000:40:00.0 type=discrete total="15.0 GiB" available="14.5 GiB"
time=2025-12-29T09:34:45.240Z level=INFO source=types.go:42 msg="inference compute" id=GPU-1ba14d5b-3498-1c33-56d0-0ab87d894872 filter_id="" library=CUDA compute=7.5 name=CUDA3 description="Tesla T4" libdirs=ollama,cuda_v12 driver=12.4 pci_id=0000:41:00.0 type=discrete total="15.0 GiB" available="14.5 GiB"
[GIN] 2025/12/29 - 09:35:02 | 200 | 267.262µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/12/29 - 09:35:02 | 200 | 328.628µs | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/12/29 - 09:35:04 | 200 | 56.611µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/12/29 - 09:35:04 | 200 | 2.031663ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/12/29 - 09:35:17 | 200 | 53.009µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/12/29 - 09:35:17 | 200 | 97.684426ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/12/29 - 09:35:17 | 200 | 93.007787ms | 127.0.0.1 | POST "/api/show"
time=2025-12-29T09:35:18.173Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44042"
time=2025-12-29T09:35:19.745Z level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-12-29T09:35:19.745Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-58574f2e94b99fb9e4391408b57e5aeaaaec10f6384e9a699fc2cb43a5c8eabf --port 43368"
time=2025-12-29T09:35:19.746Z level=INFO source=sched.go:443 msg="system memory" total="1133.1 GiB" free="1117.5 GiB" free_swap="10.0 GiB"
time=2025-12-29T09:35:19.746Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-6f7dc368-04a9-96de-3a39-b009aaf6caa7 library=CUDA available="14.1 GiB" free="14.6 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-29T09:35:19.746Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-f60e98d0-4d14-3c26-c9ef-6cc93673df93 library=CUDA available="14.0 GiB" free="14.5 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-29T09:35:19.746Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-b6340a31-e6f7-be77-25c2-46797bf761a1 library=CUDA available="14.0 GiB" free="14.5 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-29T09:35:19.746Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-1ba14d5b-3498-1c33-56d0-0ab87d894872 library=CUDA available="14.0 GiB" free="14.5 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-29T09:35:19.746Z level=INFO source=server.go:702 msg="loading model" "model layers"=49 requested=-1
time=2025-12-29T09:35:19.777Z level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-29T09:35:19.778Z level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:43368"
time=2025-12-29T09:35:19.779Z level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:48 GPULayers:49[ID:GPU-1ba14d5b-3498-1c33-56d0-0ab87d894872 Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-29T09:35:19.833Z level=INFO source=ggml.go:136 msg="" architecture=qwen3moe file_type=Q4_K_M name="Qwen3 30B A3B Thinking 2507" description="" num_tensors=579 num_key_values=33
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-skylakex.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
Device 0: Tesla T4, compute capability 7.5, VMM: yes, ID: GPU-6f7dc368-04a9-96de-3a39-b009aaf6caa7
Device 1: Tesla T4, compute capability 7.5, VMM: yes, ID: GPU-f60e98d0-4d14-3c26-c9ef-6cc93673df93
Device 2: Tesla T4, compute capability 7.5, VMM: yes, ID: GPU-b6340a31-e6f7-be77-25c2-46797bf761a1
Device 3: Tesla T4, compute capability 7.5, VMM: yes, ID: GPU-1ba14d5b-3498-1c33-56d0-0ab87d894872
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-12-29T09:35:20.405Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 CUDA.3.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.3.USE_GRAPHS=1 CUDA.3.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-12-29T09:35:21.512Z level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:48 GPULayers:49[ID:GPU-6f7dc368-04a9-96de-3a39-b009aaf6caa7 Layers:12(0..11) ID:GPU-f60e98d0-4d14-3c26-c9ef-6cc93673df93 Layers:12(12..23) ID:GPU-b6340a31-e6f7-be77-25c2-46797bf761a1 Layers:13(24..36) ID:GPU-1ba14d5b-3498-1c33-56d0-0ab87d894872 Layers:12(37..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-29T09:35:23.151Z level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:48 GPULayers:49[ID:GPU-6f7dc368-04a9-96de-3a39-b009aaf6caa7 Layers:12(0..11) ID:GPU-f60e98d0-4d14-3c26-c9ef-6cc93673df93 Layers:12(12..23) ID:GPU-b6340a31-e6f7-be77-25c2-46797bf761a1 Layers:12(24..35) ID:GPU-1ba14d5b-3498-1c33-56d0-0ab87d894872 Layers:13(36..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-29T09:35:24.007Z level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:48 GPULayers:49[ID:GPU-6f7dc368-04a9-96de-3a39-b009aaf6caa7 Layers:12(0..11) ID:GPU-f60e98d0-4d14-3c26-c9ef-6cc93673df93 Layers:12(12..23) ID:GPU-b6340a31-e6f7-be77-25c2-46797bf761a1 Layers:12(24..35) ID:GPU-1ba14d5b-3498-1c33-56d0-0ab87d894872 Layers:13(36..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-29T09:35:25.349Z level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:48 GPULayers:49[ID:GPU-6f7dc368-04a9-96de-3a39-b009aaf6caa7 Layers:12(0..11) ID:GPU-f60e98d0-4d14-3c26-c9ef-6cc93673df93 Layers:12(12..23) ID:GPU-b6340a31-e6f7-be77-25c2-46797bf761a1 Layers:12(24..35) ID:GPU-1ba14d5b-3498-1c33-56d0-0ab87d894872 Layers:13(36..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-29T09:35:25.350Z level=INFO source=ggml.go:482 msg="offloading 48 repeating layers to GPU"
time=2025-12-29T09:35:25.350Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-29T09:35:25.350Z level=INFO source=ggml.go:494 msg="offloaded 49/49 layers to GPU"
time=2025-12-29T09:35:25.350Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="4.3 GiB"
time=2025-12-29T09:35:25.350Z level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="4.1 GiB"
time=2025-12-29T09:35:25.350Z level=INFO source=device.go:240 msg="model weights" device=CUDA2 size="4.1 GiB"
time=2025-12-29T09:35:25.350Z level=INFO source=device.go:240 msg="model weights" device=CUDA3 size="4.6 GiB"
time=2025-12-29T09:35:25.350Z level=INFO source=device.go:245 msg="model weights" device=CPU size="166.9 MiB"
time=2025-12-29T09:35:25.350Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="6.0 GiB"
time=2025-12-29T09:35:25.350Z level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="6.0 GiB"
time=2025-12-29T09:35:25.350Z level=INFO source=device.go:251 msg="kv cache" device=CUDA2 size="6.0 GiB"
time=2025-12-29T09:35:25.350Z level=INFO source=device.go:251 msg="kv cache" device=CUDA3 size="6.0 GiB"
time=2025-12-29T09:35:25.350Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="858.0 MiB"
time=2025-12-29T09:35:25.350Z level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="398.5 MiB"
time=2025-12-29T09:35:25.351Z level=INFO source=device.go:262 msg="compute graph" device=CUDA2 size="398.5 MiB"
time=2025-12-29T09:35:25.351Z level=INFO source=device.go:262 msg="compute graph" device=CUDA3 size="398.5 MiB"
time=2025-12-29T09:35:25.351Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="4.0 MiB"
time=2025-12-29T09:35:25.351Z level=INFO source=device.go:272 msg="total memory" size="43.3 GiB"
time=2025-12-29T09:35:25.351Z level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-12-29T09:35:25.351Z level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-29T09:35:25.351Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-29T09:35:28.364Z level=INFO source=server.go:1332 msg="llama runner started in 8.62 seconds"
[GIN] 2025/12/29 - 09:35:28 | 200 | 10.378049155s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/12/29 - 09:35:47 | 200 | 125.996027ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/12/29 - 09:36:35 | 200 | 35.201562412s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/12/29 - 09:37:55 | 200 | 23.338275606s | xxxxxxx | POST "/v1/chat/completions"
[GIN] 2025/12/29 - 09:40:14 | 200 | 48.711189337s | xxxxxxx | POST "/v1/chat/completions"
[GIN] 2025/12/29 - 09:40:56 | 200 | 29.532278582s | xxxxxxx | POST "/v1/chat/completions"
[GIN] 2025/12/29 - 09:44:46 | 200 | 3m46s | xxxxxxx | POST "/v1/chat/completions"
```
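The `fit`/`commit` load requests above show the 49 layers split layer-wise across the four cards (12/12/12/13), and the per-device breakdown accounts for the ~10 GB observed on each GPU, e.g. for CUDA0: 4.3 GiB weights + 6.0 GiB KV cache + 858 MiB compute graph ≈ 11.1 GiB (total 43.3 GiB, matching the log). With a layer-wise split, a single request's activations pass through the GPUs one after another rather than in parallel, which would be consistent with (though not proof of) one card saturating during a long prefill. The relevant lines can be pulled back out of the container log like this (illustrative grep against the log format quoted above, not from the original report):

```shell
# Extract the layer assignment and per-device memory breakdown
# from the running container's log.
docker logs ollama 2>&1 | grep -E 'GPULayers|model weights|kv cache|compute graph'
```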

![GPU utilization screenshot](https://github.com/user-attachments/assets/cef73ed9-c120-4714-8f5c-79622f9c5026)
Reference: github-starred/ollama#8941