[GH-ISSUE #6950] Support loading concurrent model(s) on CPU when GPU is full #50909

Open
opened 2026-04-28 17:26:56 -05:00 by GiteaMirror · 11 comments

Originally created by @Han-Huaqiao on GitHub (Sep 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6950

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I deployed the qwen2.5:72b-instruct-q6_K model, which occupies four 3090s (about 75 GB of GPU memory in total). When I then use llama3:latest, it does not use the RAM and CPU (755 GB / 128 cores); instead, it unloads qwen2.5:72b-instruct-q6_K and loads llama3:latest onto the GPU, even though qwen2.5:72b-instruct-q6_K is in use at the time.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.10

GiteaMirror added the feature request label 2026-04-28 17:26:56 -05:00

@rick-github commented on GitHub (Sep 25, 2024):

You can get ollama to load llama3 in RAM by telling it to load 0 layers on the GPU. This can be done either in an API call:

```console
$ curl localhost:11434/api/generate -d '{"model":"llama3:latest","options":{"num_gpu":0}}'
{"model":"llama3:latest","created_at":"2024-09-25T10:06:21.362308925Z","response":"","done":true,"done_reason":"load"}
$ ollama ps
NAME                        ID              SIZE      PROCESSOR    UNTIL
llama3:latest               365c0bd3c000    4.3 GB    100% CPU     Forever
qwen2:7b-instruct-q4_K_M    f10f702d139e    5.4 GB    100% GPU     Forever
```
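
Note that the curl call above sends no prompt, so it only loads the model. For a normal request the option is passed the same way alongside a prompt; a minimal sketch using the standard `/api/generate` fields:

```console
$ curl localhost:11434/api/generate -d '{"model":"llama3:latest","prompt":"hello","options":{"num_gpu":0}}'
```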

Or by setting the num_gpu parameter in the CLI:

```console
$ ollama run llama3:latest
>>> /set parameter num_gpu 0
Set parameter 'num_gpu' to '0'
>>> hello
Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?

>>>
$ ollama ps
NAME                        ID              SIZE      PROCESSOR    UNTIL
llama3:latest               365c0bd3c000    4.3 GB    100% CPU     Forever
qwen2:7b-instruct-q4_K_M    f10f702d139e    5.4 GB    100% GPU     Forever
```

Or by creating a copy of the model with num_gpu set to 0:

$ echo "FROM llama3:latest" > Modelfile
$ echo "PARAMETER num_gpu 0" >> Modelfile
$ ollama create llama3:cpu
transferring model data 
using existing layer sha256:6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa 
using existing layer sha256:4fa551d4f938f68b8c1e6afa9d28befb70e3f33f75d0753248d530364aeea40f 
using existing layer sha256:8ab4849b038cf0abc5b1c9b8ee1443dca6b93a045c2272180d985126eb40bf6f 
using existing layer sha256:4fa551d4f938f68b8c1e6afa9d28befb70e3f33f75d0753248d530364aeea40f 
creating new layer sha256:dad74cbab1463b1de411f1337ba73b8f2201cbd931f70d3489896ffc1d30f0a2 
creating new layer sha256:1051250e921710d7333029087580bc89d451bea81ef6c6ac36e6db6f8261e580 
writing manifest 
success 
$ ollama run llama3:cpu hello
Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?

$ ollama ps
NAME                            	ID          	SIZE  	PROCESSOR	UNTIL   
llama3:cpu                      	983e36520965	4.3 GB	100% CPU 	Forever	
qwen2:7b-instruct-q4_K_M        	f10f702d139e	5.4 GB	100% GPU 	Forever	

@dhiltgen commented on GitHub (Sep 25, 2024):

Related to #3902


@Han-Huaqiao commented on GitHub (Sep 26, 2024):

I set the keep_alive of qwen2.5:72b-instruct-q6_K to 60 minutes, so it stays loaded on the GPU (meaning the GPU has no memory left available). However, if the request for llama3:latest does not specify "options":{"num_gpu":0}, llama3:latest is loaded onto the GPU and the qwen2.5:72b-instruct-q6_K model is unloaded. If my request does not specify which device the model should be loaded on, why doesn't ollama automatically load the model onto the CPU when the machine has sufficient RAM?
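
For reference, `keep_alive` can also be set per request; a minimal sketch, assuming the standard top-level `keep_alive` field of `/api/generate`:

```console
$ curl localhost:11434/api/generate -d '{"model":"qwen2.5:72b-instruct-q6_K","keep_alive":"60m"}'
```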


@Han-Huaqiao commented on GitHub (Sep 26, 2024):

If ollama cannot automatically load a model into CPU RAM when GPU memory is fully occupied, then how is the 30% CPU / 70% GPU allocation shown in `ollama ps` achieved?


@rick-github commented on GitHub (Sep 26, 2024):

ollama currently prefers to load models on the GPU. See #3902 for thoughts on changing that behaviour.

The 30/70 allocation is achieved by loading as much of the model onto the GPU as possible. Anything that doesn't fit on the GPU is kept in system RAM and run on the CPU.
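
An illustrative `ollama ps` line for a partially offloaded model (hypothetical name, ID, and sizes) would look like this:

```console
$ ollama ps
NAME                 ID              SIZE     PROCESSOR          UNTIL
some-model:latest    0123456789ab    40 GB    30%/70% CPU/GPU    4 minutes from now
```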


@Han-Huaqiao commented on GitHub (Sep 26, 2024):

The machine has 755 GB of CPU RAM.

When I run only llama3:latest, the `ollama ps` results are as follows:

```console
NAME            ID              SIZE    PROCESSOR    UNTIL
llama3:latest   365c0bd3c000    6.7 GB  100% GPU     59 minutes from now
```

When I run llama3:latest and translation-model-7B:latest, the `ollama ps` results are as follows:

```console
NAME                           ID              SIZE    PROCESSOR    UNTIL
llama3:latest                  365c0bd3c000    6.7 GB  100% GPU     59 minutes from now
translation-model-7B:latest    b0728e964cb9    24 GB   100% GPU     4 minutes from now
```

Then I run qwen2.5:72b-instruct-q6_K, and the `ollama ps` results are as follows:

```console
NAME                         ID              SIZE    PROCESSOR    UNTIL
llama3:latest                365c0bd3c000    6.7 GB  100% GPU     58 minutes from now
qwen2.5:72b-instruct-q6_K    5f611a49224d    76 GB   100% GPU     4 minutes from now
```

qwen2.5:72b-instruct-q6_K was not placed in CPU RAM; ollama unloaded translation-model-7B:latest and then loaded qwen2.5:72b-instruct-q6_K onto the GPU.


@rick-github commented on GitHub (Sep 26, 2024):

ollama currently prefers to load models on the GPU. It will evict a model if it needs to make room on the GPU.

If you want a model loaded into RAM, you need to tell ollama by setting `num_gpu` to 0, either in the API call or by creating a copy of the model with `PARAMETER num_gpu 0`.


@Han-Huaqiao commented on GitHub (Sep 26, 2024):

I think the behaviour above is not really about ollama's preference for which device to deploy models on; it looks more like a bug. When I load a new model while GPU memory is full, it unloads a model that is in use. Unloading a model that has just completed an inference task seems unreasonable: for example, if qwen2.5:72b-instruct-q6_K and translation-model-7B:latest are requested repeatedly at the same time, the two models are frequently loaded and unloaded from GPU memory, which seriously increases request latency.
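
For reference, Ollama also exposes server-side environment variables that influence how many models and parallel requests are kept resident before eviction kicks in; a sketch, assuming the variables documented in Ollama's FAQ:

```console
# OLLAMA_MAX_LOADED_MODELS - max number of models loaded concurrently
# OLLAMA_NUM_PARALLEL      - max parallel requests per loaded model
$ OLLAMA_MAX_LOADED_MODELS=3 OLLAMA_NUM_PARALLEL=1 ollama serve
```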


@rick-github commented on GitHub (Sep 26, 2024):

ollama currently prefers to load models on the GPU. See https://github.com/ollama/ollama/issues/3902 for thoughts on changing that behaviour.


@bchtrue commented on GitHub (Aug 22, 2025):

Hello, a question about your example above:

```console
$ ollama create hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:UD-Q4_K_XL
Error: no FROM line
```

How do I create a copy of a model if it does not have a Modelfile?

And is it perhaps possible to change `num_gpu` without copying the full model, or some other easy way to set `PARAMETER num_gpu 99` (in my case, to load into VRAM) for a model by default?

Regards


@rick-github commented on GitHub (Aug 22, 2025):

> How do I create a copy of a model if it does not have a Modelfile?

Create the Modelfile:

```console
$ echo FROM hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:UD-Q4_K_XL > Modelfile
$ echo PARAMETER num_gpu 99 >> Modelfile
$ ollama create unlsoth/qwen3-coder:30b-a3b-q4_K_XL
$ ollama run unlsoth/qwen3-coder:30b-a3b-q4_K_XL hello
Hello! How can I help you today?
```

> And is it perhaps possible to change `num_gpu` without copying the full model, or some other easy way to set `PARAMETER num_gpu 99` (in my case, to load into VRAM) for a model by default?

It doesn't copy the full model; it creates references to the existing blobs. The extra disk space used is a kilobyte or so for the manifest and the new parameter blob.
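
For reference, `ollama show --modelfile <model>` prints an existing model's Modelfile, which can be used to seed the copy or to confirm the new parameter took effect; a sketch against the model created above:

```console
$ ollama show --modelfile unlsoth/qwen3-coder:30b-a3b-q4_K_XL
```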
