[GH-ISSUE #12043] EXTREME RAM AND DISK USAGE WITH MULTIPLE MODELS #7997

Closed
opened 2026-04-12 20:11:56 -05:00 by GiteaMirror · 6 comments

Originally created by @skibiditolet873 on GitHub (Aug 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12043

What is the issue?

In Python, streaming mode, running purely on CPU with ~16 GB RAM. The first model is a fine-tuned Qwen3-4B-Instruct; the second is Qwen3:4b (with thinking). I stream chat completions with a 2k-token context for each, one after the other, with keep_alive=0. RAM and disk usage spike to 99% after the second model's request. This should not happen; my CPU is quite capable and can run a single model easily.
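A minimal sketch of the request pattern described above, using the `ollama` Python client; the model names and prompt are placeholders, not the actual fine-tune used here:

```python
# Minimal sketch of the reported request pattern (model names and prompt are
# placeholders). Each request streams on CPU with a 2k context and asks the
# server to unload the model immediately afterwards via keep_alive=0.
import ollama

MODELS = ["my-qwen3-4b-finetune", "qwen3:4b"]  # hypothetical local model names
PROMPT = "..."                                 # ~2k tokens of input in the report

for model in MODELS:
    stream = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        stream=True,
        keep_alive=0,                # unload as soon as the request completes
        options={"num_ctx": 2048},   # 2k context window
    )
    for chunk in stream:
        print(chunk["message"]["content"], end="", flush=True)
    print()
```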

Relevant log output

[WinError 1450] Insufficient system resources exist to complete the requested service

OS

Windows

GPU

Radeon 780M Graphics (512 MB VRAM, so I don't use it)

CPU

AMD Ryzen 7 PRO 8700GE

Ollama version

0.11.4

GiteaMirror added the question label 2026-04-12 20:11:56 -05:00

@skibiditolet873 commented on GitHub (Aug 23, 2025):

[server.log](https://github.com/user-attachments/files/21946709/server.log)

@rick-github commented on GitHub (Aug 23, 2025):

```
models\\blobs\\sha256-30464205ffdd5f0440e7a4b38ac0cc4bfb2575eefd5c917cf9a035db25c6ce39 --ctx-size 16384
models\\blobs\\sha256-6a77366395772462c84f0c4d226ac404674327cbe78c01e4391cc7e0c698851e --ctx-size 32768
models\\blobs\\sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 --ctx-size 2048
models\\blobs\\sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f --ctx-size 16384
```

Only nomic-embed-text:v1.5 is being used with a context of 2048.

Memory requirements get as high as 8.5 GiB:

```
memory.required.full="1.8 GiB"
memory.required.full="266.9 MiB"
memory.required.full="2.7 GiB"
memory.required.full="297.4 MiB"
memory.required.full="2.9 GiB"
memory.required.full="4.6 GiB"
memory.required.full="6.2 GiB"
memory.required.full="8.5 GiB"
```

With `keep_alive:0`, the models are reloaded on every request, which causes high disk usage.
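To verify which models are actually resident between requests, the server's `/api/ps` endpoint can be polled; a minimal sketch, assuming the default server address `http://localhost:11434`:

```python
# Minimal sketch: list the models currently loaded by the Ollama server
# (assumes the default address http://localhost:11434).
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    running = json.load(resp).get("models", [])

if not running:
    print("no models loaded")
for m in running:
    # size_vram is 0 for CPU-only loads; expires_at reflects keep_alive
    print(m["name"], m["size"], m.get("size_vram", 0), m.get("expires_at"))
```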

@skibiditolet873 commented on GitHub (Aug 23, 2025):

I am using `keep_alive=0` so that the first model is unloaded before the second. That still does not explain why RAM jumps to 99% and the second request never finishes processing before crashing. None of those memory requirements exceeds 15 GB. If they are all being loaded at the same time, that is unintended behavior.


@rick-github commented on GitHub (Aug 23, 2025):

The only errors seem to be client disconnects:

```
time=2025-08-17T15:51:35.886-07:00 level=WARN source=server.go:605 msg="client connection closed before server finished loading, aborting load"
```

Increasing logging verbosity with `OLLAMA_DEBUG=1` may show more details.

Why set `keep_alive:0`? If the next request is for a non-resident model and there's no room, ollama will evict the current model to make room.

> None of those memory requirements exceeds 15 GB

```
time=2025-08-17T16:49:18.107-07:00 level=INFO source=server.go:135 msg="system memory" total="15.1 GiB" free="3.4 GiB" free_swap="8.1 GiB"
time=2025-08-17T16:49:18.110-07:00 level=INFO source=server.go:175 msg=offload library=cpu layers.requested=-1 layers.model=37 layers.offload=0 layers.split="" memory.available="[3.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.5 GiB" memory.required.partial="0 B" memory.required.kv="2.2 GiB" memory.required.allocations="[3.3 GiB]" memory.weights.total="4.5 GiB" memory.weights.repeating="4.1 GiB" memory.weights.nonrepeating="486.9 MiB" memory.graph.full="1.5 GiB" memory.graph.partial="1.5 GiB"
```

Your system has 15.1 GiB of RAM, but only 3.4 GiB is free, plus 8.1 GiB of free swap. The model requires 8.5 GiB to load, so it consumes all of the free RAM and about 5.1 GiB of swap.

If your concern is this message:

```
[WinError 1450] Insufficient system resources exist to complete the requested service
```

that looks like a Windows error, not an ollama error.

> If they are all being loaded at the same time, that is unintended behavior.

If you want to enforce only one model at a time, set `OLLAMA_MAX_LOADED_MODELS=1`.
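A minimal sketch of that suggestion; it assumes `ollama` is on the PATH and no other server instance is already running (on Windows the variable can also be set as a user environment variable and the Ollama app restarted):

```python
# Minimal sketch: start the Ollama server with a cap of one loaded model.
# Assumes `ollama` is on PATH and no other server instance is already running.
import os
import subprocess

env = os.environ.copy()
env["OLLAMA_MAX_LOADED_MODELS"] = "1"  # never keep more than one model loaded
env["OLLAMA_DEBUG"] = "1"              # optional: verbose logging, as suggested above

subprocess.Popen(["ollama", "serve"], env=env)
```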

@skibiditolet873 commented on GitHub (Aug 23, 2025):

@rick-github I reran it; here is a clearer log. The main issue is that it is still way too slow.

[server.log](https://github.com/user-attachments/files/21946998/server.log)

@rick-github commented on GitHub (Aug 23, 2025):

> the main issue is that it is still way too slow

You are running on CPU with part of the model in swap, so it will be slow. Unloading the model after each request with `keep_alive:0` adds further delay, because the model has to be reloaded from disk for the next request.

Choose a smaller model, reduce the context size, or close other programs that are using RAM.
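A sketch of those mitigations on the client side; the smaller model tag and the keep_alive value are illustrative choices, not taken from this report:

```python
# Sketch: smaller model, smaller context window, and a non-zero keep_alive so
# the model stays resident between requests instead of being reloaded from disk.
# The model tag and keep_alive value below are illustrative.
import ollama

stream = ollama.chat(
    model="qwen3:1.7b",               # a smaller Qwen3 variant
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
    keep_alive="10m",                 # keep the model loaded for 10 minutes
    options={"num_ctx": 1024},        # reduced context window
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
print()
```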

Reference: github-starred/ollama#7997