[GH-ISSUE #2929] Ollama only using half of available CPU cores with NUMA multi-socket systems #63832

Open
opened 2026-05-03 15:06:13 -05:00 by GiteaMirror · 37 comments

Originally created by @sddzcuigc on GitHub (Mar 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2929

Originally assigned to: @dhiltgen on GitHub.

I just tested launching LLMs using only the CPU; however, only 4 CPUs of the VMware VM are busy at 100%, the others stay at 0%.

GiteaMirror added the performance, bug, linux labels 2026-05-03 15:06:15 -05:00

@rishabhgupta93 commented on GitHub (Mar 5, 2024):

I am facing a similar situation. I have 20 CPUs but it consumes only 10. Can we tweak the configuration to improve the performance of the model?


@easp commented on GitHub (Mar 5, 2024):

By default I think it picks 1/2 the total # of cores. It does this because text generation is limited by memory bandwidth, rather than compute, and so using the full # of cores usually isn't faster and may actually be slower. That said, this doesn't always hold true when dealing with virtual machines.

It is possible to create a custom model and use the num_thread parameter to use more threads than the default. You can also do it within the CLI, for example `/set parameter num_thread 20`. This setting only lasts for the duration of the CLI session, but, combined with `/set verbose`, it makes it easy to experiment and find an optimal setting that can then be used in a modelfile.
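
As a concrete sketch of that Modelfile route (the base model, derived model name, and thread count below are placeholders; `FROM` and `PARAMETER num_thread` are standard Modelfile keywords):

```
# Bake the thread count into a derived model, then run it
printf 'FROM llama3\nPARAMETER num_thread 20\n' > Modelfile
ollama create llama3-20t -f Modelfile
ollama run llama3-20t
```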


@sddzcuigc commented on GitHub (Mar 6, 2024):

it works



@dhiltgen commented on GitHub (Mar 6, 2024):

There's logic that tries to figure out hyperthreading based on what's reported in sysfs, to ensure we allocate one thread per real core and don't accidentally create two threads per core based on hyperthreads, which causes thrashing and much poorer performance. My suspicion is that the hypervisor is masking this somehow, causing the algorithm to get the core count incorrect.

https://github.com/ggerganov/llama.cpp/blob/c29af7e2252d288f2ea58a7d437c1cb7c0abf160/common/common.cpp#L54-L70
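
As a rough illustration of what that logic amounts to on Linux: every hyperthread of a physical core reports the same siblings mask, so counting unique masks yields the physical core count (assuming the hypervisor exposes sysfs topology faithfully):

```
# Hyperthreads of one physical core share a thread_siblings mask,
# so the number of distinct masks equals the number of physical cores.
sort -u /sys/devices/system/cpu/cpu*/topology/thread_siblings | wc -l
```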


@dhiltgen commented on GitHub (Mar 6, 2024):

@sddzcuigc the following might help shed some light...

```
ls /sys/devices/system/cpu/
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings
```

For reference, on a 4-core (8 hyperthread) Intel CPU I see something like this:

```
% ls /sys/devices/system/cpu/
cpu0  cpu2  cpu4  cpu6  cpufreq  hotplug       isolated    microcode  offline  possible  present  uevent
cpu1  cpu3  cpu5  cpu7  cpuidle  intel_pstate  kernel_max  modalias   online   power     smt      vulnerabilities
% cat /sys/devices/system/cpu/cpu*/topology/thread_siblings
11
22
44
88
11
22
44
88
```

@norbsss commented on GitHub (Mar 18, 2024):

Hi there. I am facing the same issue with codellama:13b. Is there any solution for this yet?


@jackjiali commented on GitHub (Apr 19, 2024):

> experiment

@easp I can use the `/set parameter num_thread 20` command in the ollama CLI and it works for me; generation seems faster than before. Many thanks.
Is it possible to set this parameter in a global config scope or via the ollama API?
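
On the API side: as far as this thread establishes there is no global config knob, but the REST API accepts per-request options, including `num_thread`. A minimal sketch (model name and thread count are placeholders):

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "why is the sky blue?",
  "options": {"num_thread": 20}
}'
```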


@MeehanCole commented on GitHub (Apr 28, 2024):

@jackjiali hello sir, how do you set the parameter num_thread with the CLI? I don't see such a command in the ollama CLI:
```
root@ubuntu:customize_mode# ollama
Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve    Start ollama
  create   Create a model from a Modelfile
  show     Show information for a model
  run      Run a model
  pull     Pull a model from a registry
  push     Push a model to a registry
  list     List models
  cp       Copy a model
  rm       Remove a model
  help     Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
```


@dhiltgen commented on GitHub (May 2, 2024):

The default behavior is to try to run one thread per physical core. Running one thread per hyperthread tends to lead to thrashing on the CPU and poorer performance. Many tools report the number of hyperthreads as the number of CPUs, so this can be a bit misleading. As commenters in this issue have pointed out, you can set this in the CLI. For example:

```
% ollama run llama3
>>> /set parameter num_thread 16
Set parameter 'num_thread' to '16'
>>> why is the sky blue?
What a great question!

The color of the sky appears blue to our eyes because of a phenomenon called Rayleigh scattering. Here's what
happens:
...
```

If you believe we got the number of physical cores incorrect on your system and the default thread count was wrong, please provide the information I mentioned above from sysfs so we can try to understand why it miscounted CPU cores.


@danbeibei commented on GitHub (May 8, 2024):

Hi, is there a way to set the num_thread parameter when passing the prompt as an argument?
I tried doing something like this: `$ ollama run llama3 "/set parameter num_thread 16" "Summarize this file: $(cat README.md)"` but it doesn't seem to work.


@dhiltgen commented on GitHub (May 10, 2024):

We don't currently have CLI flags to set these parameters.


@haydonryan commented on GitHub (May 16, 2024):

I'm also experiencing this issue, but I'm using Open WebUI and Enchanted.
I'm running an EPYC 7302P, but it only uses half the threads; once I run `/set parameter` and set 32 threads, it utilizes the whole CPU.

It would be awesome to expose this as an environment variable option. I'll create a new issue.


@dhiltgen commented on GitHub (May 17, 2024):

@haydonryan can you share the output of the commands I mentioned [here](https://github.com/ollama/ollama/issues/2929#issuecomment-1981995114)?


@haydonryan commented on GitHub (May 17, 2024):

```
$ ls /sys/devices/system/cpu/
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings
cpu0   cpu11  cpu14  cpu17  cpu2   cpu22  cpu25  cpu28  cpu30  cpu5  cpu8     cpuidle        isolated    nohz_full  possible  smt
cpu1   cpu12  cpu15  cpu18  cpu20  cpu23  cpu26  cpu29  cpu31  cpu6  cpu9     crash_hotplug  kernel_max  offline    power     uevent
cpu10  cpu13  cpu16  cpu19  cpu21  cpu24  cpu27  cpu3   cpu4   cpu7  cpufreq  hotplug        modalias    online     present   vulnerabilities
00000001
00000400
00000800
00001000
00002000
00004000
00008000
00010000
00020000
00040000
00080000
00000002
00100000
00200000
00400000
00800000
01000000
02000000
04000000
08000000
10000000
20000000
00000004
40000000
80000000
00000008
00000010
00000020
00000040
00000080
00000100
00000200
```

@haydonryan commented on GitHub (May 17, 2024):

I found a workaround: creating a new model in ollama and passing in the max_threads parameter as part of the model. That works fine.


@alaeddine-hash commented on GitHub (Jul 12, 2024):

export OLLAMA_NUM_THREADS=8


@d1abbolo commented on GitHub (Aug 7, 2024):

> export OLLAMA_NUM_THREADS=8

Is this real? I don't see that variable used in the code.


@alaeddine-hash commented on GitHub (Aug 7, 2024):

!!


@d1abbolo commented on GitHub (Aug 7, 2024):

Well, I tried it and it's not working for me.


@RandomGitUser321 commented on GitHub (Aug 9, 2024):

It's a Windows scheduler issue. The same thing happens with python.exe-based apps: you have to run them as admin to get them to use the P-cores; otherwise, they'll only use E-cores.

Not sure whether launching ollama.exe as admin will fix it, but I'm assuming that under the hood of that exe there's a python.exe or something like that.


@haydonryan commented on GitHub (Aug 9, 2024):

> It's a Windows scheduler issue. This same stuff happens with python.exe based apps. You have to run them as admin to get it to use the p-cores, otherwise, they'll only use e-cores.

No it's not. It's ollama. I'm running on Linux with an AMD EPYC CPU (no E-cores); same issue.

The workaround is to create a custom model that specifies all the CPU cores; however, the CPU core count should be an ollama CLI parameter, not a model parameter.


@d1abbolo commented on GitHub (Aug 9, 2024):

IMHO it should be an environment variable to be set.


@dhiltgen commented on GitHub (Aug 10, 2024):

Partially related Windows issue: #2936

PR #6186 should help Linux NUMA detection, which may intersect some scenarios in this issue.
PR #6264 may help resolve any remaining misalignment.


@RandomGitUser321 commented on GitHub (Aug 10, 2024):

> > It's a Windows scheduler issue. This same stuff happens with python.exe based apps. You have to run them as admin to get it to use the p-cores, otherwise, they'll only use e-cores.
>
> No it's not. It's ollama. I'm running on Linux with an AMD EPYC CPU (no E-cores); same issue.
>
> The workaround is to create a custom model that specifies all the CPU cores; however, the CPU core count should be an ollama CLI parameter, not a model parameter.

Yeah, I'm not sure how Linux handles scheduling, but at least on Windows 11 with a 13th-gen Intel, the only way to get Python to use all the cores seems to be what I said. I'm sure there are libraries that can change threading, but it doesn't seem to be built or configured correctly in Ollama.

Setting the Ollama exes to launch as admin lets it use my entire CPU for inference when the model doesn't fit completely into VRAM and has to offload some layers to the CPU. If I don't do that, it will only use my E-cores; I've never seen it do otherwise. I always have my Task Manager graphs open when doing AI-related things.

Testing things out with LM Studio, it uses the entire CPU correctly by default, so they must have a way of ensuring it uses both P- and E-cores instead of only E-cores.


@danbeibei commented on GitHub (Aug 13, 2024):

Even if the default core-count detection is fixed, I think an environment variable or a CLI flag to set the server's number of threads would be useful.
In my case, I run a dual-socket machine with 2x64 physical cores (no GPU) on Linux, and Ollama uses all physical cores. Since inference performance does not scale above 24 cores (in my testing), this default is not appropriate.
It would be nice to be able to set the number of threads other than by using a custom model with the `num_thread` parameter.

EDIT: Please tell me if I should submit a feature request, as this goes beyond the original request for this issue.


@d1abbolo commented on GitHub (Aug 13, 2024):

> Even if the default core count detection is fixed, I think an environment variable or a CLI flag to set the server's number of threads would be useful. It would be nice to be able to set the number of threads other than using a custom model with the `num_thread` parameter.

That would also be my preferred solution.


@meimi039 commented on GitHub (Aug 15, 2024):

As ollama seems confused about the number of CPU cores when running inside an LXC container, setting the `num_thread` parameter would be my preferred solution. Maybe setting `OLLAMA_NUM_THREADS` in the override.conf?


@mario-mlc commented on GitHub (Sep 3, 2024):

Using Ollama in Python, I managed to successfully use all CPUs and tune some model parameters through:

```python
import ollama
from ollama import Options

# (...) model_name and prompt are defined elsewhere

options = Options(
    temperature=0.0,
    top_k=30,
    top_p=0.8,
    num_thread=8,  # Adjust based on your system's CPU capabilities
)

# Generate the response using Ollama's chat method
response = ollama.chat(
    model=model_name,
    messages=[{'role': 'user', 'content': prompt}],
    options=options,
)
answer = response['message']['content']
```

As you can see, my use case is a chatbot for a specific purpose, but the important part here is how to use the Options class.


@matteodiga commented on GitHub (Sep 5, 2024):

> Using Ollama in Python, I managed to successfully use all CPUs and tune some model parameters through:
>
> ```python
> options = Options(
>     temperature=0.0,
>     top_k=30,
>     top_p=0.8,
>     num_thread=8,  # Adjust based on your system's CPU capabilities
> )
>
> # Generate the response using Ollama's chat method
> response = ollama.chat(
>     model=model_name,
>     messages=[{'role': 'user', 'content': prompt}],
>     options=options,
> )
> answer = response['message']['content']
> ```
>
> As you can see, my use case is a chatbot for a specific purpose, but the important part here is how to use the Options class.

This approach works for me; after setting `num_thread=12` as you suggested, I see all my CPUs running at 100%.


@dhiltgen commented on GitHub (Oct 15, 2024):

We've [merged new logic](https://github.com/ollama/ollama/pull/6264) to discover the available CPU sockets, cores (efficiency and performance), and hyperthreads (logical CPUs) on macOS, Windows, and Linux. We'll now default the thread count to the number of performance physical cores.

In testing on a NUMA system on both Linux and Windows, there's still more work to do, so we've backed off to allocating only the number of physical cores in one socket for now to avoid thrashing. Folks on this issue with a single-socket system should see better default behavior in the next release (0.3.14), but NUMA users, particularly those with more than 64 logical processors on Windows, will still see underutilization, which I'll continue to track with this issue.


@haydonryan commented on GitHub (Oct 16, 2024):

Thank you @dhiltgen!

Super useful update, especially as more high-core-count machines come off lease / age out of datacenters. Ollama on a 32c/64t EPYC runs reasonably OK at 8b-q4.


@felixmarch commented on GitHub (Jan 6, 2025):

> export OLLAMA_NUM_THREADS=8

This doesn't work 😕


@travnick commented on GitHub (Jan 8, 2025):

@dhiltgen Oh, I see why it uses 4 out of 8 of my cores. But does it properly stick to one thread per physical core (thread affinity)? I mean, not ending up running something like 4 threads on 2 physical cores while leaving the other two cores idle.

Asking because I'm seeing random 100% usage on all 8 of my virtual cores.
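
Nothing in this thread indicates ollama pins threads to specific cores. One way to experiment with affinity externally on Linux is to pin the whole server process (a sketch, assuming numactl is installed and the model fits on one node):

```
# Confine the server and its worker threads to NUMA node 0 (CPUs and memory)
numactl --cpunodebind=0 --membind=0 ollama serve
```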


@felixmarch commented on GitHub (Jan 9, 2025):

When I run it on Kubernetes, it tries to create threads based on the CPU count seen on the physical node instead of the amount allocated to the pod.

The correct CPU and memory values should be calculated per this: https://stackoverflow.com/questions/57731048/kubernetes-get-actual-resource-limits-inside-container

Example:

```
$ kubectl get pods ollama-fd78c7dfd-c7vbc -o yaml
...
...
    resources:
      limits:
        cpu: "16"
        memory: 96Gi
      requests:
        cpu: "16"
        memory: 96Gi
...
...
```

```
$ kubectl exec -it ollama-fd78c7dfd-c7vbc -- /bin/bash
root@ollama-fd78c7dfd-c7vbc:/# nproc
128
root@ollama-fd78c7dfd-c7vbc:/# free -g
               total        used        free      shared  buff/cache   available
Mem:             503          55         407           0          39         444
Swap:              0           0           0

root@ollama-fd78c7dfd-c7vbc:/# CPU=`cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us`
root@ollama-fd78c7dfd-c7vbc:/# MEM=`cat /sys/fs/cgroup/memory/memory.limit_in_bytes | numfmt --to=iec`
root@ollama-fd78c7dfd-c7vbc:/# echo "Correct compute resource should be: $(( $CPU/100000 )) CPUs with memory $MEM"
Correct compute resource should be: 16 CPUs with memory 96G
root@ollama-fd78c7dfd-c7vbc:/#
```
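
Note the paths above are cgroup v1; on cgroup v2 hosts (the default on recent distros) the equivalent limits live directly under /sys/fs/cgroup. A sketch:

```
# cgroup v2: cpu.max holds "<quota> <period>", e.g. "1600000 100000" -> 16 CPUs
cat /sys/fs/cgroup/cpu.max
# memory.max is the limit in bytes ("max" when unlimited)
cat /sys/fs/cgroup/memory.max | numfmt --to=iec
```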

@AlexanderTserkovniy commented on GitHub (Jan 30, 2025):

Guys, is there any update on this? I've tried every existing variable out there:

```
docker run -d --name ollama \
  --cpus=16 \
  --cpuset-cpus="0-15" \
  --memory=120G \
  -p 8082:8080 \
  -v ollama-models:/root/.ollama \
  -e OLLAMA_THREADS=16 \
  -e OMP_NUM_THREADS=16 \
  -e OMP_PLACES=cores \
  -e OMP_PROC_BIND=close \
```

It still only uses 8 CPUs. What should I do to force it to use 16 on API calls?


@james-irwin commented on GitHub (Feb 3, 2025):

Pull request #8792 would allow server-side forcing of the number of worker threads. The use case for that PR was to dial them down, but it works both ways and satisfies this issue too.


@AlexanderTserkovniy commented on GitHub (Feb 3, 2025):

@james-irwin
If that lands, it would be super helpful, thanks!

Reference: github-starred/ollama#63832