[GH-ISSUE #2324] Running Ollama with mixtral on Macbook pro m1 pro is incredibly slow #27103

Closed
opened 2026-04-22 04:04:12 -05:00 by GiteaMirror · 5 comments

Originally created by @azurwastaken on GitHub (Feb 2, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2324

Originally assigned to: @bmizerany on GitHub.

Hello, I tried to install Ollama on my MacBook today and give it a try, but the model takes 10+ minutes just to answer a "Hello".

Did I miss something in the config?


@truatpasteurdotfr commented on GitHub (Feb 2, 2024):

Hi, if you look at https://ollama.ai/library/mixtral/tags, the model sizes are very large, and your laptop may be limited by the amount of physical memory.
My work-allocated MBA M2 with 24 GB of RAM is also struggling with the 26 GB Mixtral weights on version v0.1.22.
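A quick way to sanity-check this on the reporter's machine: compare physical RAM against the model size. A minimal sketch, assuming macOS's `sysctl` and the ~26 GB figure for the default Mixtral tag quoted above:

```shell
# Compare physical RAM against the Mixtral download size (~26 GB for the
# default tag, per ollama.ai/library/mixtral/tags).
ram_bytes=$(sysctl -n hw.memsize)             # total physical memory in bytes (macOS)
ram_gb=$(( ram_bytes / 1024 / 1024 / 1024 ))
model_gb=26
if [ "$ram_gb" -lt "$model_gb" ]; then
  echo "Model (${model_gb} GB) exceeds RAM (${ram_gb} GB): expect heavy swapping"
else
  echo "Model should fit in RAM"
fi
```

On a 16 GB or 24 GB machine this prints the swapping warning, which matches the symptoms in this thread.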


@igorschlum commented on GitHub (Feb 2, 2024):

Hi @azurwastaken, it's a question of memory. How much memory does your Mac have? What is the size of the model you are using? If your Mac doesn't have enough memory, it will swap between the SSD and the RAM, and yes, that's very slow. You may want to use a smaller large language model (LLM). I think you can close the issue, as Ollama has no way to increase the RAM of your MacBook.
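To confirm the swapping, you can watch macOS swap usage while the model is answering. A small sketch; the `swap_used_mb` helper is hypothetical and assumes the usual `sysctl vm.swapusage` output format:

```shell
# Report macOS swap usage in MB, assuming output like:
# "vm.swapusage: total = 2048.00M  used = 1024.00M  free = 1024.00M (encrypted)"
swap_used_mb() {
  echo "$1" | awk '{ sub(/M$/, "", $7); print $7 }'
}
line=$(sysctl vm.swapusage)
echo "swap used: $(swap_used_mb "$line") MB"
```

If the "used" figure climbs while the model is generating, the weights are being paged out to the SSD, and a smaller model (e.g. `ollama run mistral`, roughly 4 GB) is the practical fix.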


@MatMatMatMatMatMat commented on GitHub (Feb 5, 2024):

Same here, MacBook Pro M1, 32 GB.
Mixtral is not using the GPU at all and runs on the CPU.
Same test with Mistral: the GPU is used instead of the CPU.

May be related to https://github.com/ollama/ollama/issues/2362
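One way to check whether a model actually landed on the GPU is the server log. A sketch: `~/.ollama/logs/server.log` is where the macOS app writes its log per Ollama's troubleshooting docs, and the "offload" line comes from the underlying llama.cpp loader, so treat the exact wording as an assumption:

```shell
# llama.cpp prints how many layers were offloaded to Metal when a model loads;
# a line like "offloaded 0/33 layers to GPU" means a full CPU fallback.
grep -i "offload" ~/.ollama/logs/server.log | tail -n 5
```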


@igorschlum commented on GitHub (Feb 6, 2024):

I also have a MacBook Pro with 32 GB, and when I run Mixtral it's not that slow. Try restarting your Mac and launching only Mixtral. If you have other applications running, they will lower the memory available for Mixtral.
https://github.com/ollama/ollama/assets/2884312/4d584a39-acc5-45bb-a7ca-2831dbeee462
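Before launching, you can estimate how much memory is actually free. A sketch assuming macOS's `vm_stat` and `sysctl` (the page size is 16 KB on Apple silicon, 4 KB on Intel, which `hw.pagesize` accounts for):

```shell
# Rough free-memory estimate from vm_stat's "Pages free" counter.
page_size=$(sysctl -n hw.pagesize)
free_pages=$(vm_stat | awk '/Pages free/ { gsub(/\./, "", $3); print $3 }')
echo "approx free: $(( free_pages * page_size / 1024 / 1024 )) MB"
```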


@bmizerany commented on GitHub (Mar 11, 2024):

@azurwastaken This seems to be a memory issue. I'm closing this issue, but if you feel this is in error and continue to have problems, please feel free to reopen and update.

Reference: github-starred/ollama#27103