[GH-ISSUE #7225] ollama parallel #30346

Closed
opened 2026-04-22 09:55:09 -05:00 by GiteaMirror · 19 comments

Originally created by @jamalibrahimsec on GitHub (Oct 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7225

Hello,
I am trying to run ollama on an instance that has 40 CPU cores.
What I understood is that the max models environment variable permits doing that, but there was no clear explanation of how it works with the CPU (knowing that I have enough RAM).
If you can explain to me how ollama manages that with the CPU, it would be perfect.

Thanks

GiteaMirror added the question and feature request labels 2026-04-22 09:55:09 -05:00

@rick-github commented on GitHub (Oct 16, 2024):

Set OLLAMA_NUM_PARALLEL in the server environment to the number of parallel requests you want to handle. Note that ollama won't use all 40 cores by default; you can override that by setting num_thread in the API or by creating a copy of a model and setting PARAMETER num_thread xx in the Modelfile.
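These knobs are covered in the server FAQ (docs/faq.md#how-do-i-configure-ollama-server), the API docs (docs/api.md), and the Modelfile docs (docs/modelfile.md). As a minimal sketch, assuming a 40-core CPU-only box and llama3.2 as a stand-in model name:

```console
$ # serve up to 4 requests concurrently (value is illustrative)
$ OLLAMA_NUM_PARALLEL=4 ollama serve

$ # per-request override: ask the runner to use all 40 cores for this call
$ curl http://localhost:11434/api/generate -d '{
    "model": "llama3.2",
    "prompt": "Why is the sky blue?",
    "options": { "num_thread": 40 }
  }'

$ # or bake the setting into a model copy via a Modelfile
$ cat > Modelfile <<'EOF'
FROM llama3.2
PARAMETER num_thread 40
EOF
$ ollama create llama3.2-40t -f Modelfile
```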


@jamalibrahimsec commented on GitHub (Oct 16, 2024):

Hello, thanks for your response.
So basically I cannot use more than one model for inference at the same time?


@rick-github commented on GitHub (Oct 16, 2024):

You can use more than one model at the same time, up to the number specified in OLLAMA_MAX_LOADED_MODELS. Here I have it set to 4:

$ ollama ps
NAME                 	ID          	SIZE  	PROCESSOR	UNTIL   
phi:2.7b-chat-v2-q4_0	e2fd6321a5fe	4.2 GB	100% CPU 	Forever	
llama3.1:latest      	42182419e950	5.2 GB	100% CPU 	Forever	
llama3.2:latest      	a80c4f17acd5	2.8 GB	100% CPU 	Forever	
qwen2:0.5b           	6f48b936a09f	314 MB	100% CPU 	Forever	

If I load another model, one of the current ones gets unloaded:

$ ollama ps
NAME                 	ID          	SIZE  	PROCESSOR	UNTIL   
phi3:3.8b            	4f2222927938	5.4 GB	100% CPU 	Forever	
phi:2.7b-chat-v2-q4_0	e2fd6321a5fe	4.2 GB	100% CPU 	Forever	
llama3.2:latest      	a80c4f17acd5	2.8 GB	100% CPU 	Forever	
qwen2:0.5b           	6f48b936a09f	314 MB	100% CPU 	Forever	

If you were to run 4 models at a time, you would need to adjust num_thread as above to prevent the CPU from being oversubscribed.

You cannot run the same model multiple times.

This is independent of OLLAMA_NUM_PARALLEL, which determines how many parallel completions a model can do.
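A hedged sketch of how the two variables combine on a CPU-only server (the values below are illustrative, not a recommendation):

```console
$ # keep up to 4 different models resident, each able to serve 2 completions at once
$ OLLAMA_MAX_LOADED_MODELS=4 OLLAMA_NUM_PARALLEL=2 ollama serve

$ # with ~80 hardware threads, 80 / (4 models x 2 completions) = 10 threads per completion
$ # is a starting point for num_thread if everything runs flat out at the same time
```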


@jamalibrahimsec commented on GitHub (Oct 16, 2024):

OK, thanks.
So the same model cannot be loaded multiple times, but different models can be loaded.
If you could tell me why, it would be appreciated.
Thanks a lot


@rick-github commented on GitHub (Oct 16, 2024):

ollama doesn't currently support loading the same model more than once. #3902 is tracking the work for loading a model multiple times but there's no progress so far.

If you want to load a model multiple times so that it can process parallel queries, use one model and set OLLAMA_NUM_PARALLEL.
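For example, a rough sketch of parallel queries against a single loaded model (model name and request count are assumptions):

```console
$ OLLAMA_NUM_PARALLEL=4 ollama serve &
$ # fire 4 requests at once; they are answered by one loaded copy of the model
$ for i in 1 2 3 4; do
    curl -s http://localhost:11434/api/generate \
      -d "{\"model\": \"llama3.2\", \"prompt\": \"Request $i\", \"stream\": false}" &
  done; wait
```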


@jamalibrahimsec commented on GitHub (Oct 30, 2024):

Following up on the same issue: if I used the same Modelfile to create 10 different models, can I run those models in parallel, or will ollama keep recognizing the source model?

Thanks


@rick-github commented on GitHub (Oct 30, 2024):

If you use the same Modelfile, you are creating 10 copies of the same model with different aliases, and ollama will only load one copy.
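This is easy to see with ollama cp (the alias names below are examples): each alias reports the same ID in ollama list, i.e. they all point at the same underlying weights, which per the comment above is why only one copy ends up loaded.

```console
$ ollama cp llama3.2 worker1
$ ollama cp llama3.2 worker2
$ ollama list   # llama3.2, worker1 and worker2 all show the same ID
```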


@jamalibrahimsec commented on GitHub (Oct 30, 2024):

OK, thanks for the explanation. So I understand that it would still work without parallelism, as I am doing.

Speaking of num_threads, can you explain to me a bit more what this does, or in other words, what this parameter controls?

And if you have any recommendations about how to optimize the model parameters for speed, it would be really helpful.

Thanks


@jamalibrahimsec commented on GitHub (Oct 30, 2024):

As well, if I used three different models at the same time, how would ollama manage the resources? Would it dedicate resources to each model, or would the models compete for them?


@rick-github commented on GitHub (Oct 30, 2024):

num_thread is the number of threads that the ollama runner (llama.cpp) will use for inference. The number of threads available depends on the operating system, the number of CPUs, the number of cores per CPU, whether hyperthreading is enabled, etc. Typically, a CPU core is able to support 2 threads, so your 40-core machine has 80 threads available. ollama tries to figure out the optimal number of threads to use, and will configure each runner to use that number. If you run multiple different models and then have ollama do multiple requests on each model, that can cause the CPU(s) to have too many simultaneous threads running, overwhelming the system. In that case you would set num_thread to the number of threads available on your system divided by the number of parallel requests you expect to handle.

If you have one model, you can set OLLAMA_NUM_PARALLEL to 1 and set num_thread to the number of threads on your system, and ollama will process one query at a time with maximum processing power; or you can set OLLAMA_NUM_PARALLEL to the number of threads on your system and set num_thread to 1, and ollama will process many queries at the same time, each using minimum processing power. The throughput of each approach will vary depending on the efficiency of the processing, which will depend on the model, the runner invoked, etc. The way to get the best results is to experiment with the configuration and see what performs best.
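As a concrete sketch of that division rule (assuming the 80-thread figure above and 8 expected concurrent completions):

```console
$ echo $(( 80 / 8 ))    # threads available / expected parallel requests
10
$ OLLAMA_NUM_PARALLEL=8 ollama serve   # then pass "num_thread": 10 in each request's options
```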


@rick-github commented on GitHub (Oct 30, 2024):

If you use three models, you can allocate a thread count per model and they will share. If you over-allocate threads and have each model perform inference at the same time, they will compete for CPU cycles and be less efficient.
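One way to apportion the threads per model, sketched under the same 80-thread assumption (the model names and the 26-thread share are illustrative):

```console
$ # give each of three models a fixed share so concurrent inference stays under ~80 threads
$ for m in llama3.2 phi3 qwen2; do
    printf 'FROM %s\nPARAMETER num_thread 26\n' "$m" > Modelfile
    ollama create "$m-26t" -f Modelfile
  done
```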


@jamalibrahimsec commented on GitHub (Oct 30, 2024):

Based on what you are saying, I guess the best solution is to use different docker containers (a container for each model) with dedicated resources.


@rick-github commented on GitHub (Oct 30, 2024):

If you are running three different models, you can load them in one ollama server. If you want to run the same model three times in parallel, you can load it in one ollama server and set OLLAMA_NUM_PARALLEL=3.


@jamalibrahimsec commented on GitHub (Oct 30, 2024):

Yes, but what I am thinking is that maybe docker would be more efficient at isolating the resources.
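If you do go the container route, a rough sketch of pinning each ollama server to its own cores with docker (image tag, host ports, volume names and core ranges are assumptions):

```console
$ docker run -d --name ollama-a --cpuset-cpus=0-12  -p 11434:11434 -v ollama-a:/root/.ollama ollama/ollama
$ docker run -d --name ollama-b --cpuset-cpus=13-25 -p 11435:11434 -v ollama-b:/root/.ollama ollama/ollama
$ docker run -d --name ollama-c --cpuset-cpus=26-39 -p 11436:11434 -v ollama-c:/root/.ollama ollama/ollama
```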


@sasakiyori commented on GitHub (Oct 31, 2024):

BTW, when I increase OLLAMA_NUM_PARALLEL, the ratio of CPU/GPU utilization changes. Why? Is it preallocation or something? I used to think this only depended on the model size, not on the number of concurrent sequences 😢 @rick-github

OLLAMA_NUM_PARALLEL=5: 100% GPU
OLLAMA_NUM_PARALLEL=50: 16%/84% CPU/GPU


@rick-github commented on GitHub (Oct 31, 2024):

Increasing OLLAMA_NUM_PARALLEL increases the size of the KV cache allocated on the GPU, leaving less room for model weights, so they get moved to system RAM. Each parallel completion needs its own KV cache to run in.
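A rough back-of-envelope for the scaling, assuming a llama3.1-8B-shaped model (32 layers, 8 KV heads, head dimension 128, f16 cache, 2048-token context per slot; all of these numbers are assumptions):

```console
$ # bytes per parallel slot = 2 (K and V) * layers * kv_heads * head_dim * ctx * 2 bytes (f16)
$ echo $(( 2 * 32 * 8 * 128 * 2048 * 2 ))
268435456
$ # ~256 MiB per slot, so going from 5 to 50 slots adds roughly 11 GiB of KV cache on the GPU
```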


@jamalibrahimsec commented on GitHub (Nov 12, 2024):

I tried to use num_threads in the Modelfile and it gave me an "unknown parameter" error.


@rick-github commented on GitHub (Nov 12, 2024):

My mistake, num_thread, no trailing 's'.


@jamalibrahimsec commented on GitHub (Nov 12, 2024):

thanks


Reference: github-starred/ollama#30346