[GH-ISSUE #12084] Ollama on docker with multi-gpu load not balanced #8028

Closed
opened 2026-04-12 20:15:52 -05:00 by GiteaMirror · 6 comments

Originally created by @srshkmr on GitHub (Aug 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12084

What is the issue?

When using the Ollama Docker image on my multi-GPU setup, the load only lands on one GPU.

docker run -d --name ollama --gpus '"device=0,1,2,3"' -e NVIDIA_VISIBLE_DEVICES=0,1,2,3 -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e OLLAMA_HOST=0.0.0.0:11434 -e OLLAMA_KEEP_ALIVE=24h -e OLLAMA_NUM_PARALLEL=1 -e LLAMA_CUDA_SPLIT=0.25,0.25,0.25,0.25 -e OLLAMA_NEW_ESTIMATES=1 --restart unless-stopped -v ollama:/root/.ollama -p 11434:11434 ollama/ollama:latest

Docker shows all 4 GPUs available to the Ollama container.

When running a prompt, only one GPU is used (at about 72% utilization) while the rest sit idle. I am running the gpt-oss:20b model.

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.163.01             Driver Version: 550.163.01     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla V100-SXM2-32GB           On  |   00000000:06:00.0 Off |                    0 |
| N/A   48C    P0            200W /  300W |   13868MiB /  32768MiB |     72%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  Tesla V100-SXM2-32GB           On  |   00000000:07:00.0 Off |                    0 |
| N/A   43C    P0             61W /  300W |     310MiB /  32768MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  Tesla V100-SXM2-32GB           On  |   00000000:0A:00.0 Off |                    0 |
| N/A   43C    P0             59W /  300W |     310MiB /  32768MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   3  Tesla V100-SXM2-32GB           On  |   00000000:0B:00.0 Off |                    0 |
| N/A   40C    P0             60W /  300W |     310MiB /  32768MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

Relevant log output

time=2025-08-25T10:18:11.685Z level=INFO source=routes.go:1318 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NEW_ESTIMATES:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-08-25T10:18:11.686Z level=INFO source=images.go:477 msg="total blobs: 5"
time=2025-08-25T10:18:11.686Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-25T10:18:11.686Z level=INFO source=routes.go:1371 msg="Listening on [::]:11434 (version 0.11.6)"
time=2025-08-25T10:18:11.687Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-25T10:18:12.950Z level=INFO source=types.go:130 msg="inference compute" id=GPU-23e1acaf-bad8-ccca-b9f3-bcee5fc188a6 library=cuda variant=v12 compute=7.0 driver=12.4 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-08-25T10:18:12.950Z level=INFO source=types.go:130 msg="inference compute" id=GPU-b68f1f98-65b7-5328-f3ee-63fa5448f0b0 library=cuda variant=v12 compute=7.0 driver=12.4 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-08-25T10:18:12.950Z level=INFO source=types.go:130 msg="inference compute" id=GPU-41e7ab21-5bc3-b454-300f-440c1ba8c243 library=cuda variant=v12 compute=7.0 driver=12.4 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-08-25T10:18:12.950Z level=INFO source=types.go:130 msg="inference compute" id=GPU-8794b9db-bb90-3c4c-b58c-a570796dc18f library=cuda variant=v12 compute=7.0 driver=12.4 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
[GIN] 2025/08/25 - 10:18:26 | 200 |     133.816µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/08/25 - 10:18:32 | 200 |    1.943991ms |    59.98.50.179 | GET      "/api/tags"
time=2025-08-25T10:18:40.047Z level=INFO source=server.go:166 msg="enabling new memory estimates"
time=2025-08-25T10:18:41.250Z level=INFO source=server.go:383 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 --port 45567"
time=2025-08-25T10:18:41.251Z level=INFO source=server.go:659 msg="loading model" "model layers"=25 requested=-1
time=2025-08-25T10:18:41.271Z level=INFO source=runner.go:1006 msg="starting ollama engine"
time=2025-08-25T10:18:41.272Z level=INFO source=runner.go:1043 msg="Server listening on 127.0.0.1:45567"
time=2025-08-25T10:18:42.283Z level=INFO source=server.go:665 msg="system memory" total="503.8 GiB" free="493.5 GiB" free_swap="46.1 GiB"
time=2025-08-25T10:18:42.283Z level=INFO source=server.go:669 msg="gpu memory" id=GPU-23e1acaf-bad8-ccca-b9f3-bcee5fc188a6 available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-08-25T10:18:42.283Z level=INFO source=server.go:669 msg="gpu memory" id=GPU-b68f1f98-65b7-5328-f3ee-63fa5448f0b0 available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-08-25T10:18:42.283Z level=INFO source=server.go:669 msg="gpu memory" id=GPU-41e7ab21-5bc3-b454-300f-440c1ba8c243 available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-08-25T10:18:42.283Z level=INFO source=server.go:669 msg="gpu memory" id=GPU-8794b9db-bb90-3c4c-b58c-a570796dc18f available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-08-25T10:18:42.285Z level=INFO source=runner.go:925 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:40 GPULayers:25[ID:GPU-23e1acaf-bad8-ccca-b9f3-bcee5fc188a6 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-08-25T10:18:42.383Z level=INFO source=ggml.go:130 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
  Device 0: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes, ID: GPU-23e1acaf-bad8-ccca-b9f3-bcee5fc188a6
  Device 1: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes, ID: GPU-b68f1f98-65b7-5328-f3ee-63fa5448f0b0
  Device 2: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes, ID: GPU-41e7ab21-5bc3-b454-300f-440c1ba8c243
  Device 3: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes, ID: GPU-8794b9db-bb90-3c4c-b58c-a570796dc18f
load_backend: loaded CUDA backend from /usr/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
time=2025-08-25T10:18:43.338Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 CUDA.3.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.3.USE_GRAPHS=1 CUDA.3.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-08-25T10:18:43.573Z level=INFO source=runner.go:925 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:40 GPULayers:25[ID:GPU-23e1acaf-bad8-ccca-b9f3-bcee5fc188a6 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-08-25T10:18:43.763Z level=INFO source=runner.go:925 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:40 GPULayers:25[ID:GPU-23e1acaf-bad8-ccca-b9f3-bcee5fc188a6 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-08-25T10:18:43.763Z level=INFO source=ggml.go:486 msg="offloading 24 repeating layers to GPU"
time=2025-08-25T10:18:43.763Z level=INFO source=ggml.go:492 msg="offloading output layer to GPU"
time=2025-08-25T10:18:43.763Z level=INFO source=ggml.go:497 msg="offloaded 25/25 layers to GPU"
time=2025-08-25T10:18:43.764Z level=INFO source=backend.go:310 msg="model weights" device=CUDA0 size="11.8 GiB"
time=2025-08-25T10:18:43.764Z level=INFO source=backend.go:315 msg="model weights" device=CPU size="1.1 GiB"
time=2025-08-25T10:18:43.764Z level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="300.0 MiB"
time=2025-08-25T10:18:43.764Z level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="1.1 GiB"
time=2025-08-25T10:18:43.764Z level=INFO source=backend.go:337 msg="compute graph" device=CPU size="5.6 MiB"
time=2025-08-25T10:18:43.764Z level=INFO source=backend.go:342 msg="total memory" size="14.2 GiB"
time=2025-08-25T10:18:43.764Z level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-08-25T10:18:43.764Z level=INFO source=server.go:1234 msg="waiting for llama runner to start responding"
time=2025-08-25T10:18:43.765Z level=INFO source=server.go:1268 msg="waiting for server to become available" status="llm server loading model"
time=2025-08-25T10:18:48.789Z level=INFO source=server.go:1272 msg="llama runner started in 7.54 seconds"
[GIN] 2025/08/25 - 10:18:56 | 200 | 18.798246318s |    59.98.50.179 | POST     "/api/generate"
[GIN] 2025/08/25 - 10:19:33 | 200 |  4.343257122s |    59.98.50.179 | POST     "/api/generate"
[GIN] 2025/08/25 - 10:19:43 | 200 |    1.144925ms |   77.111.246.23 | GET      "/api/tags"
[GIN] 2025/08/25 - 10:19:56 | 200 |  7.979087286s |    59.98.50.179 | POST     "/api/generate"
[GIN] 2025/08/25 - 10:20:06 | 200 |  4.031849345s |    59.98.50.179 | POST     "/api/generate"
[GIN] 2025/08/25 - 10:22:45 | 200 |     807.875µs |   77.111.247.55 | GET      "/api/tags"
[GIN] 2025/08/25 - 10:24:42 | 200 |     830.746µs |   77.111.245.13 | GET      "/api/tags"
[GIN] 2025/08/25 - 10:36:34 | 200 |     778.678µs |   77.111.245.17 | GET      "/api/tags"
[GIN] 2025/08/25 - 10:37:17 | 200 |  4.984607357s |    59.98.50.179 | POST     "/api/generate"
[GIN] 2025/08/25 - 10:37:25 | 200 |  3.279109682s |    59.98.50.179 | POST     "/api/generate"
[GIN] 2025/08/25 - 10:40:23 | 200 |     475.465µs |   77.111.245.17 | GET      "/api/tags"
[GIN] 2025/08/26 - 04:57:42 | 200 |     912.831µs |   77.111.245.13 | GET      "/api/tags"
[GIN] 2025/08/26 - 05:57:19 | 200 |     805.769µs |   77.111.246.59 | GET      "/api/tags"
[GIN] 2025/08/26 - 06:16:49 | 200 |     785.412µs |   202.112.47.54 | GET      "/api/tags"
[GIN] 2025/08/26 - 06:27:40 | 200 |  4.663704431s |  223.237.177.24 | POST     "/api/generate"
[GIN] 2025/08/26 - 06:29:05 | 200 |     669.716µs |   77.111.247.72 | GET      "/api/tags"
[GIN] 2025/08/26 - 06:33:57 | 200 |     699.228µs |   77.111.246.24 | GET      "/api/tags"
[GIN] 2025/08/26 - 06:36:52 | 200 |     840.446µs |   77.111.245.15 | GET      "/api/tags"
[GIN] 2025/08/26 - 07:09:52 | 200 |     819.513µs |   77.111.245.14 | GET      "/api/tags"
[GIN] 2025/08/26 - 07:34:59 | 200 |     831.908µs |   77.111.245.15 | GET      "/api/tags"
[GIN] 2025/08/26 - 07:42:28 | 200 |     740.636µs |   77.111.246.27 | GET      "/api/tags"
[GIN] 2025/08/26 - 07:50:31 | 404 |     612.578µs |     87.120.93.7 | POST     "/api/generate"
[GIN] 2025/08/26 - 07:54:16 | 200 |       88.94µs |   202.112.47.54 | GET      "/api/version"
[GIN] 2025/08/26 - 08:18:04 | 200 |     798.843µs |   77.111.245.12 | GET      "/api/tags"
[GIN] 2025/08/26 - 09:08:06 | 200 |     697.414µs |   77.111.245.17 | GET      "/api/tags"
[GIN] 2025/08/26 - 09:12:00 | 200 |     770.227µs |   77.111.245.14 | GET      "/api/tags"
[GIN] 2025/08/26 - 09:16:42 | 200 |     841.982µs |   77.111.246.55 | GET      "/api/tags"
[GIN] 2025/08/26 - 09:38:04 | 200 |     944.688µs |   77.111.246.55 | GET      "/api/tags"
[GIN] 2025/08/26 - 09:40:04 | 200 |     814.502µs |    77.111.247.6 | GET      "/api/tags"
[GIN] 2025/08/26 - 09:59:21 | 200 |  5.823950007s |  223.237.177.24 | POST     "/api/generate"
[GIN] 2025/08/26 - 10:00:07 | 200 | 28.804853454s |  223.237.177.24 | POST     "/api/generate"
[GIN] 2025/08/26 - 10:02:31 | 200 |         1m57s |  223.237.177.24 | POST     "/api/generate"
[GIN] 2025/08/26 - 10:05:06 | 200 |      86.394µs |       127.0.0.1 | GET      "/api/version"


docker exec -it ollama nvidia-smi
Tue Aug 26 10:06:14 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.163.01             Driver Version: 550.163.01     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla V100-SXM2-32GB           On  |   00000000:06:00.0 Off |                    0 |
| N/A   41C    P0             57W /  300W |   13868MiB /  32768MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  Tesla V100-SXM2-32GB           On  |   00000000:07:00.0 Off |                    0 |
| N/A   43C    P0             61W /  300W |     310MiB /  32768MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  Tesla V100-SXM2-32GB           On  |   00000000:0A:00.0 Off |                    0 |
| N/A   43C    P0             59W /  300W |     310MiB /  32768MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   3  Tesla V100-SXM2-32GB           On  |   00000000:0B:00.0 Off |                    0 |
| N/A   40C    P0             60W /  300W |     310MiB /  32768MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+

OS

Linux

GPU

Nvidia

CPU

No response

Ollama version

0.11.6

GiteaMirror added the bug label 2026-04-12 20:15:52 -05:00

@rick-github commented on GitHub (Aug 26, 2025):

There is [no performance increase](https://github.com/ollama/ollama/issues/7648#issuecomment-2473561990) in ollama by spreading a model over multiple GPUs. However, you can force this by setting [`OLLAMA_SCHED_SPREAD`](https://github.com/ollama/ollama/blob/30fb7e19f806c1b0f0fce19d088dbb126b36acaa/envconfig/config.go#L271C4-L271C23) to 1.

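As a minimal sketch, the same `docker run` invocation from the report with the spread scheduler enabled. The only change is adding `OLLAMA_SCHED_SPREAD=1`; `LLAMA_CUDA_SPLIT` is dropped here since it does not appear in the server-config log above, which suggests ollama does not read it.

```shell
# Same container as in the report, with OLLAMA_SCHED_SPREAD=1 added to spread
# the model's layers across all visible GPUs instead of packing onto one.
docker run -d --name ollama \
  --gpus '"device=0,1,2,3"' \
  -e NVIDIA_VISIBLE_DEVICES=0,1,2,3 \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
  -e OLLAMA_HOST=0.0.0.0:11434 \
  -e OLLAMA_KEEP_ALIVE=24h \
  -e OLLAMA_NUM_PARALLEL=1 \
  -e OLLAMA_NEW_ESTIMATES=1 \
  -e OLLAMA_SCHED_SPREAD=1 \
  --restart unless-stopped \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama:latest
```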

@srshkmr commented on GitHub (Aug 26, 2025):

What happens during load? Will ollama use the other GPUs as well?


@rick-github commented on GitHub (Aug 26, 2025):

If a model doesn't fit on one GPU, ollama will use the other GPUs.

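A quick way to confirm how the layers were placed, using only commands already shown in this thread (a sketch; model name as in the report): if the split happened, per-GPU memory should no longer be ~13.8 GiB on GPU 0 with ~310 MiB everywhere else.

```shell
# Trigger a load, then inspect per-GPU memory and utilization from inside the container.
docker exec -it ollama ollama run gpt-oss:20b "hello"
docker exec -it ollama nvidia-smi --query-gpu=index,memory.used,utilization.gpu --format=csv
```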

@srshkmr commented on GitHub (Aug 26, 2025):

If the model fits, and assuming I receive 100 requests per second, will ollama scale out to use the other GPUs?


@rick-github commented on GitHub (Aug 26, 2025):

[Concurrent processing](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).

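For the 100-requests-per-second scenario, the relevant knobs from that FAQ are already visible in the server-config log above: `OLLAMA_NUM_PARALLEL` (how many requests one loaded model serves concurrently) and `OLLAMA_MAX_LOADED_MODELS` (how many models may stay resident at once). A sketch with values chosen only for illustration, not as recommendations:

```shell
# Illustrative values: batch up to 4 requests per loaded runner and allow
# up to 2 models to remain resident in GPU memory at the same time.
docker run -d --name ollama \
  --gpus '"device=0,1,2,3"' \
  -e OLLAMA_HOST=0.0.0.0:11434 \
  -e OLLAMA_NUM_PARALLEL=4 \
  -e OLLAMA_MAX_LOADED_MODELS=2 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama:latest
```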

@pdevine commented on GitHub (Aug 26, 2025):

I'm going to close this as answered. @srshkmr I can reopen if you feel otherwise.

Reference: github-starred/ollama#8028