[GH-ISSUE #9555] High context uses system RAM #31992

Closed
opened 2026-04-22 12:51:18 -05:00 by GiteaMirror · 13 comments
Owner

Originally created by @frenzybiscuit on GitHub (Mar 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9555

What is the issue?

When using a high context (e.g. 70k), the model loads directly into system RAM.

For example, with Qwen 2.5 14B 1M @ 70k context, the model goes to system RAM.

With Qwen 2.5 14B 1M @ 60k context, VRAM is used. Total VRAM usage is 9GB out of 24GB for each GPU (2x3090).

I can also confirm 70B models load correctly and use around 22GB VRAM per card, if that's relevant at all.

Layers (num_gpu) are set to 256.
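
For reference, num_ctx (context length) and num_gpu (layer count) are per-request options in the Ollama API. A minimal sketch of the kind of request being described, from a PowerShell session, with an assumed model tag and illustrative values:

```
# Sketch only: the model tag is a placeholder; num_ctx/num_gpu are standard Ollama request options.
$body = @{
    model   = "qwen2.5-14b-1m"                     # placeholder tag
    prompt  = "hello"
    stream  = $false
    options = @{ num_ctx = 71680; num_gpu = 256 }  # ~70k context, 256 layers requested
} | ConvertTo-Json -Depth 3
Invoke-RestMethod -Uri "http://localhost:11434/api/generate" -Method Post -Body $body -ContentType "application/json"
```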

OS: Windows 11 Pro
Ollama version: 0.5.12
CUDA Toolkit 12.8 is installed from NVIDIA, with driver version 571.96.
7950X with 128GB RAM
1x iGPU (from the 7950X)
2x3090

Environment variables:

OLLAMA_FLASH_ATTENTION = 1
OLLAMA_HOST = ip address here
OLLAMA_KV_CACHE_TYPE = q8_0
OLLAMA_MODELS = hard drive to models
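
A minimal sketch of one way these might be set when launching the server manually from PowerShell (values are placeholders; if Ollama runs via the tray app, they would instead be set as user environment variables before restarting it):

```
# Placeholder values; adjust the host address and models path to your setup.
$env:OLLAMA_FLASH_ATTENTION = "1"
$env:OLLAMA_HOST            = "0.0.0.0"            # the report uses a LAN IP address
$env:OLLAMA_KV_CACHE_TYPE   = "q8_0"
$env:OLLAMA_MODELS          = "D:\ollama\models"   # placeholder path to the models drive
ollama serve
```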

Relevant log output


OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.5.12

GiteaMirror added the needs more info, bug labels 2026-04-22 12:51:19 -05:00
Author
Owner

@rick-github commented on GitHub (Mar 6, 2025):

Server log.

Author
Owner

@frenzybiscuit commented on GitHub (Mar 6, 2025):

> Server log.

70k context: https://gist.github.com/frenzybiscuit/3288e4abbfef646bbb4c782710d8bd03
60k context: https://gist.github.com/frenzybiscuit/42906bbd9a2ee0222a679e28165f6b9e

Author
Owner

@frenzybiscuit commented on GitHub (Mar 6, 2025):

I'm also using open-webui, if that matters, but ollama is installed on bare hardware (no docker) on a separate server.

Author
Owner

@rick-github commented on GitHub (Mar 6, 2025):

70k

time=2025-03-06T10:32:06.045-08:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=256 layers.model=49 layers.offload=0 layers.split="" memory.available="[22.8 GiB 22.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="17.1 GiB" memory.required.partial="0 B" memory.required.kv="6.4 GiB" memory.required.allocations="[0 B 0 B]" memory.weights.total="16.5 GiB" memory.weights.repeating="15.9 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="23.3 GiB" memory.graph.partial="23.3 GiB"

60k

time=2025-03-06T10:33:55.607-08:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=256 layers.model=49 layers.offload=12 layers.split=6,6 memory.available="[22.8 GiB 22.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="57.7 GiB" memory.required.partial="45.4 GiB" memory.required.kv="5.5 GiB" memory.required.allocations="[22.7 GiB 22.7 GiB]" memory.weights.total="15.6 GiB" memory.weights.repeating="15.0 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="20.0 GiB" memory.graph.partial="20.0 GiB"

The amount of memory required for the supporting data structures (KV, graph) leaves no room for model weights, so the whole thing is loaded into system RAM.
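
A rough reading of the 70k log line above (my interpretation of the values, not an exact reproduction of ollama's estimator): the compute graph alone is larger than the free memory reported for a single GPU, so no layers are offloaded at all.

```
# Values from the 70k offload log line; free memory is 22.8 GiB per 3090.
#   memory.graph.full/partial : 23.3 GiB   # already exceeds one GPU's 22.8 GiB
#   memory.required.kv        :  6.4 GiB
#   memory.weights.total      : 16.5 GiB
# With the graph not fitting on either GPU, layers.offload=0 and everything goes to system RAM.
```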

Author
Owner

@frenzybiscuit commented on GitHub (Mar 6, 2025):

2x3090 can't load 70k q8 context on a 14B Q6 model?

With 60k context the GPUs each report 9GB VRAM usage. Shouldn't 10k more context only increase the VRAM usage slightly?

(screenshot: https://github.com/user-attachments/assets/d92f673a-d734-4260-9a0c-0451563cd02d)

Author
Owner

@frenzybiscuit commented on GitHub (Mar 6, 2025):

I just tested a 70B model (fallen) and it can run with 40k context. It can probably do more, but I didn't test.

Why would qwen 14B be stuck @ 60k context in this situation then?

Author
Owner

@frenzybiscuit commented on GitHub (Mar 6, 2025):

Works on koboldcpp with 110k context. Could probably go higher, but the GUI for kobold maxes out there.

Author
Owner

@rick-github commented on GitHub (Mar 6, 2025):

Overriding num_batch increases the graph size. The sum of the memory required for the supporting data structures results in ollama deciding that no layers will fit on a GPU. Since you've overridden num_gpu it should still try to load layers into the GPU, but for reasons that aren't clear, the runner doesn't start a GPU backend. Set OLLAMA_DEBUG=1 in the server environment and try the 70k load again, then post the log.
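
A quick sketch of one way to do that when the server is started manually from PowerShell (if it runs via the tray app, the variable would instead be set in the user environment and the app restarted):

```
# Enable verbose logging for the server process, then reproduce the 70k load.
$env:OLLAMA_DEBUG = "1"
ollama serve
```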

Author
Owner

@frenzybiscuit commented on GitHub (Mar 7, 2025):

I've been testing against the q4_0 GGUF of wayfarer-large in ollama and an exl2 4.0bpw of wayfarer-large with tabbyapi. Tabbyapi cannot use CPU/system RAM; it must load entirely onto GPU.

CUDA sysmem fallback is also disabled in the nvidia control panel, so it can't cheat.

With a q8_0 context @ 32k, this is what tabbyapi shows. Everything is entirely on the GPU(s), no system RAM or CPU used.

Fri Mar  7 09:41:05 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 571.96                 Driver Version: 571.96         CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090      WDDM  |   00000000:01:00.0 Off |                  N/A |
|  0%   47C    P8             26W /  275W |   22590MiB /  24576MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce RTX 3090      WDDM  |   00000000:03:00.0 Off |                  N/A |
|  0%   43C    P8             23W /  275W |   19816MiB /  24576MiB |      1%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

Ollama offloads and uses most GPU resources as well, but it's putting a lot in system RAM.

However, when I run ollama ps, it reports a size of 82GB!

ollama ps
NAME                          ID              SIZE     PROCESSOR          UNTIL
Fallen-Llama-3.3-R1:latest    0519610569f7    82 GB    41%/59% CPU/GPU    3 minutes from now

So yes, I can run with debugging and get you those logs, but it seems to me like ollama isn't utilizing GPU resources correctly in its calculations.

I will get you those debug logs now.

Author
Owner

@frenzybiscuit commented on GitHub (Mar 7, 2025):

It didn't load onto the GPUs at all in debug mode with 70k context.

Here you go.

https://gist.github.com/frenzybiscuit/7945a6b993e008a09e30941eb1ddc12a

Author
Owner

@frenzybiscuit commented on GitHub (Mar 7, 2025):

Also, I can confirm the version of wayfarer-large on ollama is q4_0 and not a larger model.

Author
Owner

@frenzybiscuit commented on GitHub (Mar 7, 2025):

I tested with tabbyapi, and the 8.0bpw exl2 Qwen 2.5 14B 1M model can fit 256k context into VRAM with Q8_0 context quantization, with about 1.5GB VRAM to spare.

So I'm not sure why Ollama can only fit 60k.

Author
Owner

@rick-github commented on GitHub (Mar 9, 2025):

I think what's happening is you are running up against an optimization in ollama. It makes several attempts at fitting the model into the GPUs but ultimately decides it can't fit, so it drops the path to the GPU backends from the list of backends it passes to the runner. The runner, even though it's been told to load up to 256 layers into the GPU, can't find the GPU backend so just goes with CPU. You should be able to work around this by pre-seeding the list of backends with the path to the GPU backend. Add C:\Users\tyler\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12 to the PATH environment variable in the server environment.
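
A sketch of that workaround for a manually launched server (the directory is the one named above; assuming the install location matches and the change is made in the environment the server starts from):

```
# Prepend the CUDA backend directory to PATH for this session, then restart the server.
$env:PATH = "C:\Users\tyler\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12;" + $env:PATH
ollama serve
```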


Reference: github-starred/ollama#31992