[GH-ISSUE #6160] Ollama ps says 22 GB, but nvidia-smi says 16GB with flash attention enabled #65885

Closed
opened 2026-05-03 23:03:45 -05:00 by GiteaMirror · 32 comments

Originally created by @chigkim on GitHub (Aug 4, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6160

What is the issue?

Ollama indicates the model is utilizing 22GB, but nvidia-smi says it's utilizing 16GB.
The model was fully loaded and generating responses when I ran nvidia-smi.

Here's the log:

time=2024-08-04T12:51:05.930Z level=INFO source=server.go:384 msg="starting llama server" cmd="/tmp/ollama1680167563/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-d36aafdc1d822f932f3fd3ddc18296628764c5e43f153e9c02b29f5c4525cf2a --ctx-size 65536 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --flash-attn --parallel 32 --port 34551"

ollama ps                                                                                                                                                                                                               
 NAME                           ID           SIZE  PROCESSOR UNTIL                                                                                                                                                                            
 llama3.1:8b-instruct-q8_0      9b90f0f552e7 22 GB 100% GPU  27 seconds from now                                                                                                                                                              
 root@272875ddc015:~# nvidia-smi                                                                                                                                                                                                              
 Sun Aug  4 12:57:31 2024                                                                                                                                                                                                                     
 +-----------------------------------------------------------------------------------------+                                                                                                                                                  
 | NVIDIA-SMI 555.42.02              Driver Version: 555.42.02      CUDA Version: 12.5     |                                                                                                                                                  
 |-----------------------------------------+------------------------+----------------------+                                                                                                                                                  
 | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |                                                                                                                                                  
 | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |                                                                                                                                                  
 |                                         |                        |               MIG M. |                                                                                                                                                  
 |=========================================+========================+======================|                                                                                                                                                  
 |   0  NVIDIA GeForce RTX 4090        On  |   00000000:41:00.0 Off |                  Off |                                                                                                                                                  
 | 74%   68C    P0            353W /  450W |   16560MiB /  24564MiB |    100%      Default |                                                                                                                                                  
 |                                         |                        |                  N/A |                                                                                                                                                  
 +-----------------------------------------+------------------------+----------------------+                                                                                                                                                  
                                                                                                                                                                                                                                              
 +-----------------------------------------------------------------------------------------+                                                                                                                                                  
 | Processes:                                                                              |                                                                                                                                                  
 |  GPU   GI   CI        PID   Type   Process name                              GPU Memory |                                                                                                                                                  
 |        ID   ID                                                               Usage      |                                                                                                                                                  
 |=========================================================================================|                                                                                                                                                  
 +-----------------------------------------------------------------------------------------+                                                                                                                                                  

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.3

GiteaMirror added the memory, bug labels 2026-05-03 23:03:45 -05:00

@rick-github commented on GitHub (Aug 4, 2024):

You have flash attention enabled. ollama computes memory requirements but it's llama.cpp that actually does the memory allocations. Flash attention is a more efficient use of VRAM, so llama.cpp doesn't allocate as much memory as ollama thought it needed.
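As a rough illustration of what the estimator is approximating, here is a simplified KV-cache size formula in Go. This is only a sketch with assumed llama3.1-8B-style parameters (32 layers, 8 KV heads, head dim 128), not ollama's actual estimation code; note that flash attention does not shrink the KV cache itself, and its savings come mostly from the attention compute buffers, which this sketch ignores.

package main

import "fmt"

// Simplified KV-cache estimate: a K and a V buffer per layer, each
// n_ctx * n_kv_heads * head_dim elements wide.
func kvCacheBytes(nLayers, nCtx, nKVHeads, headDim, bytesPerElem uint64) uint64 {
  return 2 * nLayers * nCtx * nKVHeads * headDim * bytesPerElem
}

func main() {
  // Assumed llama3.1-8B-style numbers with an f16 cache (2 bytes/element)
  // and the 65536 context from the log above.
  kv := kvCacheBytes(32, 65536, 8, 128, 2)
  fmt.Printf("KV cache: %.1f GiB\n", float64(kv)/(1<<30))
}

With those assumed numbers the KV cache alone comes to 8 GiB at 64k context; adding roughly 8.5 GB of q8_0 weights plus compute buffers lands near the 22 GB estimate, while the flash-attention build actually allocates less.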


@chigkim commented on GitHub (Aug 4, 2024):

Thanks for pointing it out.
Just out of curiosity, why can't Ollama correctly calculate the memory size with flash attention?
Say an entire model and context can fit into VRAM with flash attention but not without it. When loading with flash attention, wouldn't Ollama offload layers to the CPU even though everything could have fit into VRAM?


@rick-github commented on GitHub (Aug 4, 2024):

Yes, ollama will spill when it doesn't need to. Flash attention is a relatively recent addition to ollama and it doesn't work for some architectures (deepseek2), so it's not in widespread use. There has been a spate of recent tickets regarding memory calculations in ollama, so I expect this part of the code will receive some scrutiny soon, and along with that I think the impact of flash attention will be taken into account.


@sammcj commented on GitHub (Aug 9, 2024):

In a PR I've got up for review (https://github.com/ollama/ollama/pull/6279) I added to the estimations Ollama's scheduler performs on the K/V cache; I suspect it might resolve or at least improve this.


@theasp commented on GitHub (Sep 17, 2024):

I guess this is the same problem as in #5022.


@theasp commented on GitHub (Oct 2, 2024):

@sammcj, I don't think your patch fixes this, but now I have a larger context anyway. This PR seems to indicate there is some double counting going on in the same section of code: https://github.com/ollama/ollama/pull/6218

This is q6_K_L, with q8_0 for k/v:

NAME                                     ID              SIZE     PROCESSOR    UNTIL
DEFAULT/mistral-small-2409-22b:latest    d9db479f49e8    24 GB    100% GPU     Forever
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.06              Driver Version: 555.42.06      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090        Off |   00000000:01:00.0 Off |                  N/A |
|  0%   38C    P8             36W /  420W |   20618MiB /  24576MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A    341264      C   ...unners/cuda_v12/ollama_llama_server      20608MiB |
+-----------------------------------------------------------------------------------------+
ollama-1  | time=2024-10-02T02:45:29.850Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-abc5ca770099d3e1aaa7cd3043bf4a32d4c4bd927a28659ebb7434a7cd8d2479 gpu=GPU-07966a3b-efbf-f1c2-9933-f711da9a959d parallel=1 available=24784863232 required="22.7 GiB"
ollama-1  | time=2024-10-02T02:45:29.850Z level=INFO source=server.go:103 msg="system memory" total="31.2 GiB" free="11.9 GiB" free_swap="0 B"
ollama-1  | time=2024-10-02T02:45:29.851Z level=INFO source=memory.go:334 msg="offload to cuda" layers.requested=-1 layers.model=57 layers.offload=57 layers.split="" memory.available="[23.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="22.7 GiB" memory.required.partial="22.7 GiB" memory.required.kv="2.6 GiB" memory.required.allocations="[22.7 GiB]" memory.weights.total="19.3 GiB" memory.weights.repeating="19.1 GiB" memory.weights.nonrepeating="204.0 MiB" memory.graph.full="2.3 GiB" memory.graph.partial="2.4 GiB"
ollama-1  | time=2024-10-02T02:45:29.851Z level=INFO source=server.go:296 msg="Enabling flash attention"
ollama-1  | time=2024-10-02T02:45:29.852Z level=INFO source=server.go:462 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-abc5ca770099d3e1aaa7cd3043bf4a32d4c4bd927a28659ebb7434a7cd8d2479 --ctx-size 24576 --batch-size 512 --embedding --log-disable --n-gpu-layers 57 --flash-attn --cache-type-k q8_0 --cache-type-v q8_0 --no-mmap --parallel 1 --port 45327"
[...]
ollama-1  | llama_new_context_with_model: n_ctx      = 24576
ollama-1  | llama_new_context_with_model: n_batch    = 512
ollama-1  | llama_new_context_with_model: n_ubatch   = 512
ollama-1  | llama_new_context_with_model: flash_attn = 1
ollama-1  | llama_new_context_with_model: freq_base  = 1000000.0
ollama-1  | llama_new_context_with_model: freq_scale = 1
ollama-1  | llama_kv_cache_init:      CUDA0 KV buffer size =  2856.00 MiB
ollama-1  | llama_new_context_with_model: KV self size  = 2856.00 MiB, K (q8_0): 1428.00 MiB, V (q8_0): 1428.00 MiB
ollama-1  | llama_new_context_with_model:  CUDA_Host  output buffer size =     0.15 MiB
ollama-1  | llama_new_context_with_model:      CUDA0 compute buffer size =   148.00 MiB
ollama-1  | llama_new_context_with_model:  CUDA_Host compute buffer size =    60.01 MiB
ollama-1  | llama_new_context_with_model: graph nodes  = 1575
ollama-1  | llama_new_context_with_model: graph splits = 2

@rick-github commented on GitHub (Oct 2, 2024):

Note that the calculations that ollama does are only used to determine how many layers it asks llama.cpp to load into VRAM. You can override that with num_gpu. So if you increase num_ctx to the point where ollama decides that it needs to spill, just set num_gpu to 57 and let llama.cpp decide how it's going to allocate memory. That way you should be able to get llama.cpp to use all VRAM. If you get really close to 100% usage there's the possibility that llama.cpp will try to malloc some memory and fail, you can mitigate that by setting GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 in the server environment. This will cause any allocations that don't fit due to limited VRAM to spill into system RAM, while still allowing the GPU to access it. It will be slower than GPU->VRAM or CPU->RAM, so not recommended for large allocations (eg, a very large model that would normally be split across GPU/CPU), but it adds a safety valve for smaller allocations.
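For completeness, num_gpu can also be set per request through the options field of the generate API rather than in a Modelfile. Below is a minimal Go sketch using only the standard library; the endpoint and option names are ollama's documented API, while the model name and values just mirror the ones discussed in this thread.

package main

import (
  "bytes"
  "encoding/json"
  "fmt"
  "io"
  "net/http"
)

func main() {
  // Ask for all 57 layers on the GPU, overriding ollama's own offload
  // estimate for this request, and set the context size explicitly.
  body, _ := json.Marshal(map[string]any{
    "model":  "mistral-small-2409-22b",
    "prompt": "Hello",
    "stream": false,
    "options": map[string]any{
      "num_gpu": 57,
      "num_ctx": 24576,
    },
  })
  resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
  if err != nil {
    panic(err)
  }
  defer resp.Body.Close()
  out, _ := io.ReadAll(resp.Body)
  fmt.Println(string(out))
}

Note that GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 belongs in the server's environment, not in the request.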


@theasp commented on GitHub (Oct 2, 2024):

@rick-github Good point about num_gpu (note that ollama ps will still say it's splitting even when it isn't). However, I think the estimate is also used when deciding to unload models. I'd like ollama to be able to load an additional embedding or reranking model without having to unload the current model.

This is the same model with:

PARAMETER num_gpu 57
PARAMETER num_ctx 49152
NAME                                     ID              SIZE     PROCESSOR          UNTIL
DEFAULT/mistral-small-2409-22b:latest    11e5ec01702c    29 GB    16%/84% CPU/GPU    Forever
|    0   N/A  N/A    780640      C   ...unners/cuda_v12/ollama_llama_server      23508MiB |
ollama-1  | llm_load_tensors:  CUDA_Host buffer size =   204.00 MiB
ollama-1  | llm_load_tensors:      CUDA0 buffer size = 17295.40 MiB
ollama-1  | llama_kv_cache_init:      CUDA0 KV buffer size =  5712.00 MiB
ollama-1  | llama_new_context_with_model:  CUDA_Host  output buffer size =     0.15 MiB
ollama-1  | llama_new_context_with_model:      CUDA0 compute buffer size =   192.00 MiB
ollama-1  | llama_new_context_with_model:  CUDA_Host compute buffer size =   108.01 MiB

Adding up those buffer sizes gives 23511.56 MiB, which is pretty close to what nvidia-smi says. This is probably material for another issue, but what if the memory requirements were updated after the model is loaded by parsing the llama.cpp output? (A rough sketch of that idea follows at the end of this comment.)

This is the estimate for the above:

ollama-1  | time=2024-10-02T14:10:15.883Z level=INFO source=memory.go:334 msg="offload to cuda" layers.requested=57 layers.model=57 layers.offload=45 layers.split="" memory.available="[23.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="27.8 GiB" memory.required.partial="23.3 GiB" memory.required.kv="5.2 GiB" memory.required.allocations="[23.3 GiB]" memory.weights.total="21.9 GiB" memory.weights.repeating="21.7 GiB" memory.weights.nonrepeating="204.0 MiB" memory.graph.full="4.6 GiB" memory.graph.partial="4.8 GiB"

Note: this is still with the q8 KV cache, with the patch from the other PR.

Assuming I am reading this correctly it looks like the KV cache size is being underestimated here, and the model size is being overestimated:

  • Weights: Estimate=21.9 GiB (22425.6 MiB), llama.cpp=17499.40 MiB
  • KV cache: Estimate=5.2 GiB (5324.8 MiB), llama.cpp=5712.00 MiB
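
A minimal Go sketch of that log-parsing idea, summing the llama.cpp allocation lines by backend (the regex and grouping are mine and assume only the log format shown in this thread):

package main

import (
  "bufio"
  "fmt"
  "os"
  "regexp"
  "strconv"
)

// Matches allocation lines such as:
//   llm_load_tensors:       CUDA0 buffer size    = 17295.40 MiB
//   llama_kv_cache_init:    CUDA0 KV buffer size =  5712.00 MiB
// "KV self size" lines are deliberately not matched, to avoid double counting.
var bufRe = regexp.MustCompile(`(\S+)\s+(?:KV |output |compute )?buffer size\s*=\s*([0-9.]+) MiB`)

func main() {
  totals := map[string]float64{} // backend (CUDA0, CUDA_Host, ...) -> MiB
  sc := bufio.NewScanner(os.Stdin)
  for sc.Scan() {
    if m := bufRe.FindStringSubmatch(sc.Text()); m != nil {
      mib, _ := strconv.ParseFloat(m[2], 64)
      totals[m[1]] += mib
    }
  }
  for backend, mib := range totals {
    fmt.Printf("%-10s %9.2f MiB\n", backend, mib)
  }
}

Fed the buffer lines above, it should report roughly CUDA0 = 23199.40 MiB and CUDA_Host = 312.16 MiB, which together give the 23511.56 MiB total.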

@rick-github commented on GitHub (Oct 2, 2024):

I think the only way to achieve multiple models with this overcommit method would be to run two servers, one which loads the embedding model and the other loading the main model. One of the servers would need to be configured with a different port and the client would be configured appropriately. If a single API endpoint is required, a litellm proxy in front can distribute queries to the appropriate ollama server, although the client is then constrained to an OpenAI style API.


@chigkim commented on GitHub (Nov 25, 2024):

Any update on a fix for this yet?
Calculating memory usage correctly would let users load models with longer context lengths!
Thanks!


@emzaedu commented on GitHub (Dec 6, 2024):

Another example (for CPU usage only) with KV q4_0

num_ctx 65536
qwen2.5-coder:7b-instruct-q4_K_M 2b0496514337 39 GB 100% CPU 4 minutes from now

num_ctx 131072
qwen2.5-coder:7b-instruct-q4_K_M 2b0496514337 34 GB 100% CPU 4 minutes from now


@sammcj commented on GitHub (Dec 6, 2024):

I can replicate this too; the reported memory usage is way off with or without Flash Attention or quantized K/V.

It looks like enabling Flash Attention saves more memory and just exacerbates whatever the underlying issue is: Ollama does not seem to take FA into account when performing the calculation (though the new quantized K/V is taken into account when used):

FA=0, F16 K/V

  • Ollama: 22GB
  • Actual: 20.13GB

[screenshot]

FA=1, F16 K/V

  • Ollama: 22GB
  • Actual: 13.2GB

[screenshot]

FA=1, Q8_0 K/V

  • Ollama: 18GB
  • Actual: 9.98GB

[screenshot]

I can confirm however that the underlying llama.cpp is showing the correct memory usage in all three examples above.

e.g.

llm_load_tensors:        CPU buffer size =   426.36 MiB
llm_load_tensors:      CUDA0 buffer size =  5532.43 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =  3808.00 MiB
llama_new_context_with_model: KV self size  = 3808.00 MiB, K (q8_0): 1904.00 MiB, V (q8_0): 1904.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     2.38 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   412.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =   263.01 MiB

Where 5532 + 3808 + 263 = 9603 MiB (plus overheads), which is close enough to 9.98.

It's also wrong on Metal: [screenshot]

@rick-github commented on GitHub (Dec 6, 2024):

Note that ollama (via ollama ps) is reporting GB (10^9) and nvtop is reporting GiB (1024^3). So in the no FA case, ollama is fairly close (20.13GiB = 20.13 * 1024^3 = 21,614,422,917B = 21.6GB ~ 22GB). FA is where ollama diverges since it doesn't account for this when it's doing its memory estimation.
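
To spell out the conversion, a trivial Go sketch with the numbers from this comment:

package main

import "fmt"

const (
  GB  = 1e9     // gigabyte, base 10 (what ollama ps reports)
  GiB = 1 << 30 // gibibyte, base 2 (what nvtop and nvidia-smi's MiB are based on)
)

func main() {
  reported := 20.13 * GiB // nvtop's figure in the FA=0 case, in bytes
  fmt.Printf("20.13 GiB = %.0f bytes = %.1f GB\n", reported, reported/GB)
  // Prints: 20.13 GiB = 21614422917 bytes = 21.6 GB -- close to ollama's "22 GB"
}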


@sammcj commented on GitHub (Dec 6, 2024):

Good catch Rick.

It would be trivial for me to submit a PR that adjusts the memory estimates when FA is enabled.

I'm just trying to find out whether there is a simple calculation for its reduction.

For the model in my comment above, simply applying *0.6 to the calculation would bring it a lot closer - but is this consistent across all models and hardware? I suspect the savings are more variable and dependent on a number of factors, but I'm not sure.

e.g. I'm playing with something like this (https://github.com/ollama/ollama/compare/main...sammcj:ollama:fix/memory_estimates?expand=1):

// Adjust memory calculations for Flash Attention
if fa {
  const faCorrectionFactor = 0.05
  graphPartialOffload = uint64(float64(graphPartialOffload) * faCorrectionFactor)
  graphFullOffload = uint64(float64(graphFullOffload) * faCorrectionFactor)
  layerSize = uint64(float64(layerSize) * faCorrectionFactor)
}

@sammcj commented on GitHub (Dec 6, 2024):

Found the magic number, multiplying the graph and layer sizes by 0.05 results in close to correct memory estimations:

[screenshots]

Changes: https://github.com/ollama/ollama/compare/main...sammcj:ollama:fix/memory_estimates?expand=1


@emzaedu commented on GitHub (Dec 9, 2024):

I am concerned that this size discrepancy may cause Ollama to misallocate resources, offloading layers to the CPU unnecessarily, while the model and cache could fully fit into the GPU memory.


@rick-github commented on GitHub (Dec 9, 2024):

Yes, ollama will spill when it doesn't need to. Note that the calculations that ollama does are only used to determine how many layers it asks llama.cpp to load into VRAM. You can override that with num_gpu.


@emzaedu commented on GitHub (Dec 10, 2024):

The num_gpu parameter really helped. I managed to run the Qwen 32B model (q3_k_m) with a 92k context, and it fit entirely into 24GB of memory, achieving a speed of 45 tokens per second.


@theasp commented on GitHub (Dec 10, 2024):

Are we sure the ollama ps output is in base 10?

From my earlier comment:

NAME                                     ID              SIZE     PROCESSOR    UNTIL
DEFAULT/mistral-small-2409-22b:latest    d9db479f49e8    24 GB    100% GPU     Forever

I have 24 GiB of RAM, which would be 25.7 GB in base 10, but this shows 100% GPU used with 24 GB. I should have somewhere between 1-7% left depending on how 24 GB got rounded.


@rick-github commented on GitHub (Dec 10, 2024):

"100%" means the model resides fully in VRAM, not that the VRAM is fully used. nvidia-smi will show how much memory the model is using in MiB.


@theasp commented on GitHub (Dec 10, 2024):

Sorry, yeah that's painfully obvious now that I'm re-reading it later. You are correct.


@theasp commented on GitHub (Dec 11, 2024):

FYI, I made a PR to add ollama ps --base2: https://github.com/ollama/ollama/pull/8034

industrial:~/projects/ollama-src$ ./ollama ps --base2
NAME                                     ID              SIZE        PROCESSOR         UNTIL
DEFAULT/mistral-small-2409-22b:latest    671ad04c21ce    24.4 GiB    7%/93% CPU/GPU    Forever

industrial:~/projects/ollama-src$ nvidia-smi | grep ollama
|    0   N/A  N/A   1605279      C   ...unners/cuda_v12/ollama_llama_server      21600MiB |

@chigkim commented on GitHub (Dec 26, 2024):

Running Ollama with OLLAMA_NUM_PARALLEL=16 and OLLAMA_FLASH_ATTENTION=1 seems to exaggerate the discrepancy even more!

| 90% 46C P2 223W / 450W | 14808MiB / 24564MiB | 50%
llama3.1:8b-instruct-q6_K c0b9b9594806 20 GB 100% GPU 4 minutes from now

@theasp, PR #8034 doesn't fix how Ollama overestimates memory usage and offloads an incorrect number of layers, right?
Running ollama ps --base2 just changes the display units, showing numbers closer to actual memory usage?


@theasp commented on GitHub (Dec 26, 2024):

@theasp, PR #8034 doesn't fix how Ollama overestimates memory usage and offloads an incorrect number of layers, right?
Running ollama ps --base2 just changes the display units, showing numbers closer to actual memory usage?

@chigkim Correct, it does not affect the estimate, only the units of the memory usage estimate that is displayed. Gibibytes (base 2, 1 KiB is 1024 bytes) instead of gigabytes (base 10, 1 KB is 1000 bytes).


@rick-github commented on GitHub (Dec 27, 2024):

The size of the KV allocation is proportional to the number of sequences that the model is asked to process, so the discrepancy will grow more or less linearly with the value of OLLAMA_NUM_PARALLEL when FA is used.
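
Concretely, the total context handed to llama.cpp is the per-sequence context multiplied by the number of parallel sequences, so a back-of-the-envelope sketch of the scaling (reusing the assumed llama3.1-8B-style parameters from earlier in the thread) looks like this:

package main

import "fmt"

// KV cache grows with total context = per-sequence context * parallel
// sequences, so any FA-related overestimate grows with OLLAMA_NUM_PARALLEL.
func kvMiB(perSeqCtx, parallel, nLayers, nKVHeads, headDim, bytesPerElem int) float64 {
  elems := 2 * nLayers * perSeqCtx * parallel * nKVHeads * headDim
  return float64(elems*bytesPerElem) / (1024 * 1024)
}

func main() {
  // Assumed numbers: 32 layers, 8 KV heads, head dim 128, f16 cache.
  for _, p := range []int{1, 4, 16} {
    fmt.Printf("parallel=%2d -> KV ~ %6.0f MiB\n", p, kvMiB(2048, p, 32, 8, 128, 2))
  }
}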


@maxi1134 commented on GitHub (Mar 30, 2025):

This is still an issue on 0.6.3 with Gemma 3 27B

[screenshot]

[screenshot]


@ivanwong1989 commented on GitHub (Apr 4, 2025):

I am also noticing this. ollama ps says 7 GB, but Task Manager shows it only using 6 GB. If I edit num_gpu to offload more layers, it successfully uses more VRAM and processes faster.


@maxi1134 commented on GitHub (Apr 7, 2025):

[screenshot]

The problem is there for me even without flash attention, actually.


@c0008 commented on GitHub (Apr 22, 2025):

The overestimation must come from a flawed memory calculation for the KV cache: the more context you use, the further off the numbers become. With a small model and a long context, as in the previous comment, the bug is most obvious.


@chigkim commented on GitHub (May 7, 2025):

export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_NUM_PARALLEL=1
export OLLAMA_CONTEXT_LENGTH=13000

ollama serve
ollama ps
NAME ID SIZE PROCESSOR UNTIL
qwen3:32b-q8_0 56a39c0a7ff6 48 GB 100% GPU 4 minutes from now

nvidia-smi|grep -e "MiB"
| 30% 64C P2 252W / 298W | 18779MiB / 24576MiB | 46% Default |
| 30% 58C P2 250W / 298W | 18589MiB / 24576MiB | 45% Default |

18779 + 18589 = 37368 MiB
37368/1024 = 36.49 GiB


@jessegross commented on GitHub (Jun 19, 2025):

There is an early preview of Ollama's new memory management with the goal of comprehensively fixing these issues. It is still in development; however, if you want to compile from source and try it out, you can find it here: https://github.com/ollama/ollama/pull/11090

Please leave any feedback on that PR.


@jessegross commented on GitHub (Sep 24, 2025):

I'm going to go ahead and close this now that the new memory management logic is on by default. If you continue to see problems, please file a new issue.

Reference: github-starred/ollama#65885