[GH-ISSUE #7648] Performance Impact of Scaling a 70B Model Across Multiple A100 GPUs and Further Speed Optimization #30639

Closed
opened 2026-04-22 10:29:16 -05:00 by GiteaMirror · 6 comments

Originally created by @gslin1224 on GitHub (Nov 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7648

Hi guys,
I have a question regarding the performance impact and potential optimizations for distributing a large model across multiple GPUs. Specifically:

- When running a 70B parameter model, how does the speed compare when distributed across two A100 GPUs versus four A100 GPUs?
- In general, does adding more GPUs consistently result in faster performance for such a large model, or are there diminishing returns due to factors like communication overhead?
- Are there additional techniques or configurations within the Ollama framework (or recommended practices) that can further increase speed when using multiple GPUs?
Thank you for your guidance and any insights you can provide to help enhance model performance!

GiteaMirror added the question label 2026-04-22 10:29:16 -05:00

@rick-github commented on GitHub (Nov 13, 2024):

Inference with LLMs is, by the nature of LLMs, a serialized operation. They're composed of a series of layers, and computation in one layer must complete before its output can be fed into the next layer. This means that for an individual completion, multiple GPUs don't confer any increase in token generation. In fact, the token generation rate decreases, because the CPU/PCIe interface becomes a bottleneck as intermediate results are moved from one GPU to the next. For example, llama3.1:70b on 4x A100s:

| #GPUs | Tokens/s |
|---|---|
| 1 | 24.21 ± 0.018 |
| 2 | 22.66 ± 0.012 |
| 3 | 20.21 ± 0.012 |
| 4 | 20.14 ± 0.010 |

If you are doing multiple concurrent queries (`OLLAMA_NUM_PARALLEL`), then multiple GPUs can help, as GPU0 can process query 1 with the first half of the model's layers while GPU1 processes query 2 with the second half. The problem is that you can't control how queries are scheduled onto the GPUs or how quickly each GPU finishes its segment of the model, so sometimes both queries end up queued on the same GPU.
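As a minimal sketch of that setup, assuming a local server on the default port and a model that's already pulled (the model name, prompt, and sleep are illustrative):

```bash
# Allow two in-flight completions, then issue two requests concurrently
# so each GPU can be working on a different query at the same time.
OLLAMA_NUM_PARALLEL=2 ollama serve &
sleep 5   # crude wait for the server to come up, for illustration only

for i in 1 2; do
  curl -s http://localhost:11434/api/generate \
    -d '{"model": "llama3.1:70b", "prompt": "Why is the sky blue?", "stream": false}' &
done
wait
```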

One approach to maximizing performance is to run one copy of a model per GPU, by using multiple ollama servers, binding each to a specific GPU with `CUDA_VISIBLE_DEVICES`, and then running a load-balancing proxy in front to present a unified interface (eg [litellm](https://github.com/BerriAI/litellm), [ollama_proxy](https://github.com/ParisNeo/ollama_proxy_server), [nginx](https://github.com/ollama/ollama/issues/7570#issuecomment-2464469733)). Unfortunately, a 70b model might be too large for this, eg llama3.1:70b needs about 42 GB, which doesn't quite squeeze into a 40 GB A100. The bits that don't fit on the GPU then use the CPU for inference, which is slower. There is a hack, though, if the system supports Nvidia's fallback memory. I believe this is enabled by default on Windows for supported GPUs; Linux users need to set `GGML_CUDA_ENABLE_UNIFIED_MEMORY=1` in the server environment. You can overallocate layers onto a GPU (https://github.com/ollama/ollama/issues/7629#issuecomment-2470280098) and the GPU will use system RAM for the bits that don't fit. Be aware that using this approach for too much of the model (ie a large model or a large context) can have a large performance impact (https://github.com/ollama/ollama/issues/7584#issuecomment-2466715900).
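For concreteness, a sketch of the one-server-per-GPU layout, assuming two GPUs and two free ports (the ports and proxy choice are illustrative):

```bash
# Pin one ollama server to each GPU, each listening on its own port.
CUDA_VISIBLE_DEVICES=0 OLLAMA_HOST=127.0.0.1:11434 ollama serve &
CUDA_VISIBLE_DEVICES=1 OLLAMA_HOST=127.0.0.1:11435 ollama serve &

# A proxy (litellm, nginx, ...) would then round-robin client requests
# across :11434 and :11435, each serving its own full copy of the model.
```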


@gslin1224 commented on GitHub (Nov 13, 2024):

Hi @rick-github,

Thank you again for your detailed response and helpful suggestions. Does this mean there are no other effective ways to significantly enhance inference speed? Also, is Ollama inherently slower in inference performance compared to vLLM or TensorRT-LLM?

Thanks again for your guidance!


@rick-github commented on GitHub (Nov 13, 2024):

To my knowledge, there's no way to significantly enhance inference speed for a single completion in ollama.

There are cases where multiple GPUs can be used to do parallel matrix ops in a single layer, which would increase inference speed. The [PR](https://github.com/ggerganov/llama.cpp/pull/1703) that implemented multi-GPU support in ollama claims to do this, but I've never seen it make a difference in practice. This could come down to sub-optimal configuration, as there are some options (eg [`[u]batch-size`](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md#batch-size), [`split-mode`](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#:~:text=%2Dsm%2C%20%2D%2D-,split%2Dmode,-%7Bnone%2Clayer%2Crow)) which allow some tuning. I experimented briefly some time ago but didn't see any improvement at the time.
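Those knobs live at the llama.cpp level rather than in ollama, so trying them means running llama.cpp's own server directly; a sketch, with illustrative model file and values:

```bash
# -sm row : split individual tensors across GPUs instead of whole layers
# -b / -ub: logical / physical batch sizes
llama-server -m llama3.1-70b.gguf -sm row -b 2048 -ub 512
```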

When I'm concerned about performance, I usually focus on model selection, managing context size, and various options like `OLLAMA_NUM_PARALLEL` and `OLLAMA_FLASH_ATTENTION`. There's not much in the way of knobs to be twiddled in ollama.
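As an illustration of those levers (the model, prompt, and values are placeholders, not recommendations):

```bash
# Server side: enable flash attention and allow 4 concurrent requests.
OLLAMA_FLASH_ATTENTION=1 OLLAMA_NUM_PARALLEL=4 ollama serve &

# Client side: cap the context window for this request rather than
# relying on the model's default.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.1:70b",
  "prompt": "Summarize ...",
  "stream": false,
  "options": { "num_ctx": 4096 }
}'
```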

Other inference implementations may perform differently; I haven't benchmarked other servers, so I can't speak to that. It's on my TODO list though.


@gslin1224 commented on GitHub (Nov 13, 2024):

@rick-github ok, thanks a lot for your useful response!


@Readon commented on GitHub (Nov 20, 2024):

> To my knowledge, there's no way to significantly enhance inference speed for a single completion in ollama.

I have tested vLLM with AWQ quantization, which can drive the GPU to 100% utilization, and single-sequence generation is almost 70% faster.
However, it does not support fast model switching.
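For reference, a minimal sketch of that kind of launch (the model id is illustrative and flags may differ across vLLM versions):

```bash
# Serve an AWQ-quantized model via vLLM's OpenAI-compatible server,
# with tensor parallelism across 4 GPUs.
python -m vllm.entrypoints.openai.api_server \
  --model TheBloke/Llama-2-70B-AWQ \
  --quantization awq \
  --tensor-parallel-size 4
```

Tensor parallelism splits each layer's matrices across GPUs, which is why a single sequence can get faster there, unlike the layer-wise split described earlier in this thread.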


@PrimosK commented on GitHub (Feb 27, 2025):

Hi guys,

I did some tests recently with:

- Model used: `mistral-small:24b-instruct-2501-q4_K_M` (32k context)
- Various numbers of RTX 4090s
- Making 100 requests (4 in parallel / so 4 threads)

The key point is that the model fits comfortably within the RTX 4090's VRAM.

**1. TEST (1x RTX 4090)**

  1. Other settings: (defaults)
  2. Token eval rate: 55.6 tokens/s
  3. VRAM utilization: 22 GB / 24 GB
  4. GPU processor utilization (during inference): **~90%**
  5. Time to complete 100 requests: **212 seconds**

**2. TEST (4x RTX 4090)**

  1. Other settings: `OLLAMA_NUM_PARALLEL=4`
  2. Token eval rate: 46.09 tokens/s
  3. VRAM utilization: 19 GB / 24 GB, 18 GB / 24 GB, 18 GB / 24 GB, 18 GB / 24 GB
  4. GPU processor utilization (during inference): **~25%**
  5. Time to complete 100 requests: **214 seconds**

Surprisingly, there is virtually no improvement for the end user in the second test. Throughput is essentially the same in both cases. The most noticeable difference is that in the second test the GPUs are only ~25% utilized (roughly 100% divided by the number of GPUs).
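For reproducibility, a sketch of the shape of this benchmark, 100 requests with at most 4 in flight (the endpoint and model are as described above; the prompt is illustrative):

```bash
# Fire 100 generate requests, at most 4 running concurrently.
seq 100 | xargs -P4 -I{} curl -s -o /dev/null \
  -d '{"model": "mistral-small:24b-instruct-2501-q4_K_M", "prompt": "Request {}", "stream": false}' \
  http://localhost:11434/api/generate
```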

WRT:

> One approach to maximizing performance is to run one copy of a model per GPU, by using multiple ollama servers and binding them to a specific GPU using `CUDA_VISIBLE_DEVICES`, and then run a load balancing proxy in front to present a unified interface (eg [litellm](https://github.com/BerriAI/litellm), [ollama_proxy](https://github.com/ParisNeo/ollama_proxy_server), https://github.com/ollama/ollama/issues/7570#issuecomment-2464469733).

@rick-github: Is this approach necessary even if the model fits nicely into a single GPU? With 4x GPUs and `OLLAMA_NUM_PARALLEL=4`, I would have expected Ollama to load the entire model onto each GPU and distribute the workload accordingly. Is my understanding incorrect?

Reference: github-starred/ollama#30639