[GH-ISSUE #373] Can we optimize performance with the Apple M1 Max's 32-core GPU and Neural Engine? #62204

Closed
opened 2026-05-03 07:52:28 -05:00 by GiteaMirror · 14 comments

Originally created by @pascalandy on GitHub (Aug 17, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/373

Hello everyone,

I'm keen to explore ways to maximize the efficiency of my robust machines. It appears that Ollama currently utilizes only the CPU for processing.

I'm wondering if there's an option to configure it to leverage our GPU. Specifically, I'm interested in harnessing the power of the 32-core GPU and the 16-core Neural Engine in my setup.

Considering the specifications of the Apple M1 Max chip:

  • 10-core CPU with 8 performance cores and 2 efficiency cores
  • 32-core GPU
  • 16-core Neural Engine
  • 400GB/s memory bandwidth

Media engine

  • Hardware-accelerated H.264, HEVC, ProRes, and ProRes RAW
  • Video decode engine
  • Two video encode engines
  • Two ProRes encode and decode engines

Cheers!

GiteaMirror added the question label 2026-05-03 07:52:28 -05:00

@technovangelist commented on GitHub (Aug 17, 2023):

Are you downloading the program from the download button on ollama.ai? It should automatically use the GPU. When you right click on the app in /Applications, what version does it say it is?


@pascalandy commented on GitHub (Aug 17, 2023):

> Are you downloading the program from the download button on ollama.ai?

Yes.

> what version does it say it is?

v0.0.14

> It should automatically use the GPU.

How can I confirm?


@technovangelist commented on GitHub (Aug 18, 2023):

The tools for looking at GPU usage are limited on the Mac. Try opening Activity Monitor, and on the CPU tab right-click the column header and check % GPU. Now you should be able to sort by GPU usage. Then ask something from one of the models: while it is processing the prompt it uses the CPU, but when it starts outputting an answer it should shift to the GPU. I see ollama get up to 90-99% GPU.


@jkleckner commented on GitHub (Aug 19, 2023):

Take a look at the brilliant "Apple Silicon top" implementation [1].
To get the metrics you will have to trust it with `sudo asitop`, so it is up to you whether to take that risk.
Installing it is as simple as `pip install asitop`.

[1] https://github.com/tlkh/asitop


@jkleckner commented on GitHub (Aug 20, 2023):

And, by the way, you can see the maximum GPU use and, as a bonus, the peak power usage.
With llama2:70b, I see about 31W max power across CPU/GPU/ANE (the ANE never seems to be used by anything I do).
The 70b model uses around 32GB of memory on my M1 Max 64GB machine.
Compare this with an NVIDIA card in a machine with a TDP of around 400W.
The Apple silicon is about 1/8 the performance at 1/15 the power, very much ballpark.
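Working those ballpark figures through (all numbers taken from the comment above, so the result is an order-of-magnitude sketch, nothing more):

```python
# Rough perf-per-watt comparison from the numbers above: Apple Silicon at
# ~1/8 the throughput of the NVIDIA card, drawing ~31W versus ~400W TDP.
apple_watts = 31
nvidia_watts = 400

perf_ratio = 1 / 8                          # Apple throughput / NVIDIA throughput
power_ratio = apple_watts / nvidia_watts    # ~= 1/13, close to the quoted 1/15

efficiency_gain = perf_ratio / power_ratio  # relative perf-per-watt
print(f"Apple Silicon perf-per-watt ~ {efficiency_gain:.1f}x the NVIDIA card")
# -> Apple Silicon perf-per-watt ~ 1.6x the NVIDIA card
```

So even at a fraction of the raw throughput, the efficiency advantage is real, which matters for an always-on headless setup.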


@pascalandy commented on GitHub (Aug 21, 2023):

I didn't try the 70b models, as I was sure they would choke the system. Now I'm curious!


@Nipol commented on GitHub (Aug 21, 2023):

When using the 70b model, I observed that the CPU was utilized when sending my messages and waiting for a response, while the GPU was activated during the answer generation process. However, I'm uncertain whether this setup is functioning correctly.


@technovangelist commented on GitHub (Aug 21, 2023):

Hi @pascalandy, is your question answered? If so, can you go ahead and close? If not, how can we help further?


@jkleckner commented on GitHub (Aug 22, 2023):

Yes, the GPU is used for inference. You can increase the default session duration from 5 to 60 minutes [1] [2] to ensure that you reuse the session and buffers while you are testing. If you do that, you should find that it is GPU-bound rather than CPU-bound (the CPU is used for setup). Of course, only if you want such a long default session!

[1] https://github.com/jmorganca/ollama/blob/e3054fc74e2101de8416976c2dd63e2796081061/api/types.go#L330
[2] https://github.com/jmorganca/ollama/blob/e3054fc74e2101de8416976c2dd63e2796081061/server/routes.go#L41
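In current Ollama versions this no longer requires editing the constants linked above: the session duration is configurable per request via the `keep_alive` field of the REST API (or globally with the `OLLAMA_KEEP_ALIVE` environment variable). A minimal sketch of such a request payload, with an illustrative model name:

```python
import json

# Sketch of an /api/generate payload that keeps the model resident for an
# hour instead of the 5-minute default. Assumes a current Ollama build;
# the keep_alive field did not exist at the commit linked above.
payload = {
    "model": "llama2:70b",             # illustrative model name
    "prompt": "Why is the sky blue?",
    "keep_alive": "60m",               # model (and its GPU buffers) stay loaded for 60 min
}
print(json.dumps(payload))
```

Posting this to `http://localhost:11434/api/generate` with any HTTP client keeps the weights loaded between test runs, so subsequent prompts skip the setup cost.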


@jmorganca commented on GitHub (Oct 23, 2023):

Thanks for creating an issue! Going to close this for the time being; however, do know that I am keeping an eye on all the improvements to CoreML (which can run models on CPU+GPU+Neural Engine) as a candidate way to run models faster.


@RooSoft commented on GitHub (Mar 14, 2024):

Hello, I'm Marc, 5 months in the future...

I'd like to know the status of CoreML and Neural Engine development. I'm considering buying a powerful Mac Mini or Mac Studio to run models as a headless server.


@Mecil9 commented on GitHub (Apr 7, 2024):

I have the same question. When I run ollama on an Apple M1 Max, Activity Monitor shows 100% CPU usage and 0% GPU usage, and after running for a while ollama becomes unresponsive. I don't know what caused it.


@shiva404 commented on GitHub (Apr 23, 2024):

@Mecil9 if you have installed with brew, it might be using an older version of ollama. Try downloading it from https://ollama.com/download/Ollama-darwin.zip instead; that should resolve it, at least it did for me.


@qdrddr commented on GitHub (Jun 28, 2024):

Idea/Feature request to add CoreML so Apple Neural Engine can be utilized alongside GPU.
https://github.com/ollama/ollama/issues/3898

Reference: github-starred/ollama#62204