[GH-ISSUE #7883] Lower System RAM but High VRAM doesn't seem to correctly check available space. #30803

Closed
opened 2026-04-22 10:44:06 -05:00 by GiteaMirror · 15 comments

Originally created by @ramblingcoder on GitHub (Nov 29, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7883

What is the issue?

I have a server with 32GB of RAM, a swap file that can go up to 150GB, and 128GB of VRAM (8x 4060 Ti).

I tried to load the model into VRAM but ran into behavior that I am not sure is intended.

When I have the swap file set to 32GB, I have 64GB of total system memory available (32GB RAM + 32GB swap).

I receive the following error when attempting to load a 123B model:

msg="model request too large for system" requested="75.2 GiB" available=66355601408 total="31.2 GiB" free="29.8 GiB" swap="32.0 GiB"

(The available figure of 66355601408 bytes is about 61.8 GiB, i.e. free + swap: 29.8 GiB + 32.0 GiB.)

However, when I change the swap file to 150GB, I have 182GB of total system memory. I can then load the model and context entirely into VRAM, yet according to my metric exporter, I never dipped into swap.

This makes me think there is unintended behavior when a system's RAM is smaller than its available VRAM.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.4.6

GiteaMirror added the memory, bug labels 2026-04-22 10:44:06 -05:00

@rick-github commented on GitHub (Nov 29, 2024):

Full ollama system logs from both scenarios (32G swap vs 150G swap) will aid in debugging. What is your metric exporter and how does it report swap?

@ramblingcoder commented on GitHub (Nov 30, 2024):

Not a problem.
32gb.txt: https://github.com/user-attachments/files/17963589/32gb.txt
150gb.txt: https://github.com/user-attachments/files/17963590/150gb.txt

For the metric exporter, I'm using:
exporter.txt: https://github.com/user-attachments/files/17963594/exporter.txt

It is being presented in Grafana with the values from node_memory_SwapTotal_bytes and node_memory_SwapFree_bytes

@ramblingcoder commented on GitHub (Nov 30, 2024):

The model I was loading was Mistral Large 2411 gguf from https://huggingface.co/bartowski/Mistral-Large-Instruct-2411-GGUF

Modelfile:
FROM /custom/Mistral-Large-Instruct-2411-Q4_K_M/Mistral-Large-Instruct-2411-Q4_K_M.gguf
PARAMETER num_gpu 9999

Without PARAMETER num_gpu 9999, it was attempting to load the model into system RAM instead of VRAM. This may be intended behavior, but I was assuming it would prioritize VRAM first.

150gb_noparameter.txt

The 32gb and 150gb tests from the previous comment were done with the parameter set.

@rick-github commented on GitHub (Nov 30, 2024):

--tensor-split 1,1,1,1,1,1,1,1

Is your client setting num_gpu:8? Based on the tensor split, that's what it looks like. The model is 75G and has 88 layers, so ollama should be able to load about 18 layers into a GPU, not 1. So your effective GPU VRAM is about 7G, which is why ollama can't load it on a machine with 30G free RAM and 32G swap.

@ramblingcoder commented on GitHub (Nov 30, 2024):

The client I'm using doesn't appear to be setting the num_gpu parameter itself, or at least I see no setting for it.

Forgot to include the docker compose file I'm using, as that may be what's setting the tensor-split.
docker-compose.txt: https://github.com/user-attachments/files/17964391/docker-compose.txt

I'm not sure I follow the explanation. Do you mind reexplaining it? Even though each GPU has 16GB of VRAM, only 7GB of VRAM is being used?

@rick-github commented on GitHub (Nov 30, 2024):

A quick check would be to unload the model and then reload using the ollama cli:

docker compose exec -it ollama ollama stop model-name
docker compose exec -it ollama ollama run model-name hello
docker compose logs ollama | grep tensor-split

Each GPU has 16G of VRAM, but for some reason ollama thinks it can only fit one layer per GPU. The model is 75G and has 88 layers, so each layer (on average, size varies) is about 850M. With ollama loading only one layer per GPU, the effective VRAM across all GPUs is 8 * .85 or about 7G. The total resources that ollama thinks it has is 7G + 30G + 32G or 69G, which is too small to load the 75G model.

I had a look at the model on HF and it comes in two pieces; how did you combine them?

@rick-github commented on GitHub (Nov 30, 2024):

I experimented a bit and was unable to find a combination of parameters that would load a model with --n-gpu-layers 9999 and at the same time --tensor-split 1,1,1,1,1,1,1,1.

ollama  | time=2024-11-30T00:09:25.017Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-cf33602c9be98b81ec33a08172b96148078ad9d3114f5eb2abefe72793324665 --ctx-size 65536 --batch-size 512 --n-gpu-layers 9999 --verbose --threads 6 --flash-attn --no-mmap --parallel 1 --tensor-split 1,1,1,1,1,1,1,1 --port 46293"

I noticed OLLAMA_KV_CACHE_TYPE=q8_0 in the docker compose file, which is not a standard ollama configuration option. The logs seem to indicate that you are running standard 0.4.6; is that correct?

@ramblingcoder commented on GitHub (Nov 30, 2024):

Answering slightly out of order:

Merging
The split GGUF was combined using llama-gguf-split with the merge operation, the exact command being:

./llama-gguf-split --merge ../../../mistral-large/Mistral-Large-Instruct-2411-Q4_K_M-00001-of-00002.gguf ../../../mistral-large/output.gguf

The output.gguf was renamed to Mistral-Large-Instruct-2411-Q4_K_M.gguf before being used in Modelfile creation.

KV Cache Quant
It is the stock ollama. I've just been watching PR #6279 for KV cache quantization and didn't want to go searching for the env variable name again once it was merged and pushed.

Docker Exec Results
Found another bit of information that appears to affect the behavior. With swap set to 32GB it did load the model into VRAM, but I noticed the context size it was using was 2048. I adjusted the Modelfile to include "PARAMETER num_ctx 65536", since setting the context size to 65536 was something the client had been doing.

Before the two runs I did a compose down and compose up -d to reset it. When I run the model with 32GB swap I get

server@scale3:~/ollama$ docker compose exec -it ollama ollama run bartowski-Mistral-Large-Instruct-2411-Q4_K_M:latest hello
Error: model requires more system memory (85.3 GiB) than is available (61.9 GiB)
server@scale3:~/ollama$ docker compose logs ollama | grep tensor-split
server@scale3:~/ollama$ 

When my swap was set back to 150GB, I got the following:

server@scale3:~/ollama$ docker compose exec -it ollama ollama run bartowski-Mistral-Large-Instruct-2411-Q4_K_M:latest hello
 ive been playing with the idea of having multiple cameras following one player, and then switching from one camera to another when i hit a button.

i have 2 cameras, each set up as spectator cams in the world settings, and made invisible via the visibility option in the details panel. im not sure if this is what is causing my issue but here it goes: i use an event tick that gets local player controller to check location of pawn, which then gets 
location of both cameras (cameras are on the players back) and calculates distance between each cam and the player.

this should return a float value which can be compared in order to determine which camera is further away from the player so that i can set that as my view target using blueprint. what happens instead is that when i move forward, the values start increasing at different rates until one of them reaches the 
max float value and crashes UE4

i know it has something to do with how i am calculating distance because ive tried several methods such as get camera location (which does not work) or even creating a vector between pawn and camera and then getting magnitude, which produces an identical result. im pretty stumped on this and dont 
understand why the values would increase at different rates and what could be causing the crash?

this is my setup:

server@scale3:~/ollama$ docker compose logs ollama | grep tensor-split
ollama  | time=2024-11-30T15:31:32.095Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-d476f0802d0110317e9471c865ae443cf210dd4d5ea4a3b3292944ab4d71918e --ctx-size 2048 --batch-size 512 --n-gpu-layers 9999 --verbose --threads 6 --flash-attn --no-mmap --parallel 1 --tensor-split 12,11,11,11,11,11,11,11 --port 33399"

This is the new Modelfile to recreate the behavior:

FROM /custom/Mistral-Large-Instruct-2411-Q4_K_M/Mistral-Large-Instruct-2411-Q4_K_M.gguf
PARAMETER num_gpu 9999
PARAMETER num_ctx 65536

@rick-github commented on GitHub (Nov 30, 2024):

--n-gpu-layers 9999 --tensor-split 12,11,11,11,11,11,11,11 

So the manual loading of the model did the right thing with splitting the tensors in the 150G swap case. If that was also true for the 32G case, the issue is that the model is in fact too large to host on a system with only 32G swap and 30G RAM. The maths works out like this: to host a model, you need to load the model weights, the model graph, the context buffer, and some ancillary data structures. The unfortunate fact is that you need a copy of the model graph for each GPU, so the sum is (model_weights + num_gpu * model_graph + context) or (85 + 8 * 12.5 + 22) or a bit more than 207G. Subtract the 128G VRAM and you need more than 79G (85.3 as reported by the error message) system RAM+swap which exceeds the available resources.

If the metric exporter is reporting no swap used, then it would seem that the metric is inaccurate. What do these commands show when the model is loaded:

docker compose exec -it ollama ollama ps
free -h

@rick-github commented on GitHub (Nov 30, 2024):

The logs from the manual load experiments would also be helpful.

@ramblingcoder commented on GitHub (Nov 30, 2024):

32gb_manual.txt: https://github.com/user-attachments/files/17966327/32gb_manual.txt
150gb_manual.txt: https://github.com/user-attachments/files/17966328/150gb_manual.txt

I've attached the 32gb and 150gb manual tests.

@rick-github commented on GitHub (Nov 30, 2024):

The cause for this is flash attention; I got distracted by the weird tensor splits (which I still can't explain). There are two agents involved in loading the model: ollama reads the GGUF file and, using the parameters supplied for the model, computes the number of layers it thinks can be loaded onto the GPUs. It then starts a llama.cpp runner, whose job is to actually allocate the memory. Flash attention uses VRAM much more efficiently, so llama.cpp can fit the entire model in VRAM rather than spilling to RAM/swap as the memory calculations would imply. Ollama has historically been bad at computing memory requirements when flash attention is involved; there's a ticket open for it (https://github.com/ollama/ollama/issues/6160) but not a lot of progress so far.

Sadly the result is that ollama will continue to think the model is not loadable in your small-swap case. If you bumped the swap to about 64G, the model should load (30G free RAM + 64G swap ≈ 94G, comfortably above the 85.3 GiB estimate).

@rick-github commented on GitHub (Nov 30, 2024):

Had a read through the PR attached to the ticket and it looks like it will address some of the FA issues, and better yet, it sounds like it's getting closer to being merged, so it might resolve this issue sooner rather than later.

@ramblingcoder commented on GitHub (Nov 30, 2024):

Awesome, thank you for the patience and for troubleshooting this with me. I'm completely fine with the workaround of using the oversized swap, since the storage isn't being used for anything else and it isn't actually causing unnecessary wear and tear on the drive.

Should the ticket be closed, since there is another issue tracking this?

@rick-github commented on GitHub (Dec 1, 2024):

closing as dupe #6160

Reference: github-starred/ollama#30803