[GH-ISSUE #8736] What's the difference between running the Ollama Mac version vs Docker on the same Mac? Performance is severely affected in Docker #67724

Closed
opened 2026-05-04 11:28:28 -05:00 by GiteaMirror · 8 comments

Originally created by @balajeek on GitHub (Jan 31, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8736

Good Day Awesome Developers!

I wanted to run AI on my own, and I self-host a lot in my HomeLab. A few months ago I tried Ollama on a Proxmox VM (just an average server) and it wasn't enough for even a basic model, so I dropped it.

Yesterday I thought I should give it a try with my Mac Mini M4 (16 GB RAM), so I went to the Ollama site, downloaded the Mac version of Ollama, and ran it. It asked me to start with a model; I chose, I think, llama3.3, queried it with some questions, and the response was instant. Excited, I ran the model glm4:9b and that too ran pretty fast; I asked for some home automation code and it produced it excellently.

Then I wanted to install Open WebUI, since it would be great for chat, so I downloaded Docker Desktop for Mac, and Docker immediately showed an Ollama container already running. (I guessed that even though it's a Mac app, it still runs as a container.) I followed up with running Open WebUI and got it working, asked the same question in the chat, and the response was fast like before.

Then I was trying to use paperless-ai with Ollama (paperless is on a different server), and it was unable to connect to Ollama. From the browser, http://localhost:port works fine, but if I use http://192.168.1.10:port (where Ollama runs) it gives page not found.

In Docker Desktop on the Mac, Ollama does not show any port binding, and that's the reason it's not reachable on the network.
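A quick way to confirm this kind of binding problem, assuming Ollama's default port 11434, is to compare loopback vs LAN access and then inspect the listening socket:

```shell
# Should succeed if Ollama is up at all (/api/tags lists installed models)
curl http://localhost:11434/api/tags

# Fails when Ollama is bound to 127.0.0.1 only
# (192.168.1.10 is the Mac's LAN address mentioned above)
curl http://192.168.1.10:11434/api/tags

# On macOS, show which address the process is actually listening on
lsof -iTCP:11434 -sTCP:LISTEN
```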

So I deleted Ollama and Open WebUI from Docker and recreated them, but this time directly from Docker Desktop. It all worked, but a chat with a simple question, "who are you", took about 5 minutes to respond.

Why does Ollama have this performance when run inside Docker versus the downloaded Mac app version?
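One quick check for this symptom, assuming the ollama CLI is reachable in both setups, is to look at which processor a loaded model is using:

```shell
# After sending a prompt, list loaded models; the PROCESSOR column shows
# GPU vs CPU. The native Mac app can use the Apple GPU via Metal, while
# Docker Desktop's Linux VM typically has no GPU passthrough and runs CPU-only.
ollama ps
```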

GiteaMirror added the macos and docker labels 2026-05-04 11:28:28 -05:00
@rick-github commented on GitHub (Jan 31, 2025):

https://chariotsolutions.com/blog/post/apple-silicon-gpus-docker-and-ollama-pick-two/
https://github.com/ollama/ollama/issues/5652
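For the port-binding half of the problem, the stock command from Ollama's Docker docs publishes the API port explicitly; a sketch, assuming the default port:

```shell
# Publish Ollama's API port so other hosts on the LAN can reach it
# (per the article above, this container is CPU-only on Apple silicon)
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```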

@balajeek commented on GitHub (Jan 31, 2025):

Ah! Seems I learned it the hard way :(
Thanks for the article, I will set up the Mac version again.

Any idea how I can bind to the default Ollama port in the Mac version of Ollama? That would let the other servers that need AI functionality reach it.


@rick-github commented on GitHub (Jan 31, 2025):

Have you set [`OLLAMA_HOST`](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-expose-ollama-on-my-network)? If you have, can you supply more details about the errors you receive?


@balajeek commented on GitHub (Jan 31, 2025):

> Have you set [`OLLAMA_HOST`](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-expose-ollama-on-my-network)? If you have, can you supply more details about the errors you receive?

I run the Mac app version of Ollama; I don't know where the option is to set the Ollama host or port bindings. I know how to do it in Docker, but it's not the same.


@rick-github commented on GitHub (Jan 31, 2025):

All is explained in the two sentences in the provided link.
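For reference, the FAQ's steps for the Mac app amount to roughly the following, assuming a standard install:

```shell
# Make the Ollama Mac app listen on all interfaces instead of loopback only
launchctl setenv OLLAMA_HOST "0.0.0.0"

# Then quit and restart the Ollama app so it picks up the new setting
```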


@igorschlum commented on GitHub (Jan 31, 2025):

@balajeek, I was able to set up my Ollama server using this documentation, just by changing some settings and redirecting my home internet box to my Mac. It would be good if you could close the issue as resolved.


@balajeek commented on GitHub (Feb 1, 2025):

Sure, I will close it today. I just got home and am going to work on it right now.
So you are saying I should follow the OLLAMA_HOST link posted earlier!


@balajeek commented on GitHub (Feb 1, 2025):

Last night I was able to follow the document and make it work. Now I have Ollama running on my Mac M4, and I set up Open WebUI in a Proxmox LXC and was able to add the Ollama connection using the local IP address.

Hopefully it will work with my paperless-ai too, which I am going to try today.

Thanks all for helping me.
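For anyone replicating this setup, one common way to point Open WebUI at a remote Ollama is its OLLAMA_BASE_URL environment variable; a sketch, assuming a Docker-based Open WebUI install and the Mac's LAN address from earlier:

```shell
# Point Open WebUI at the Ollama instance running on the Mac
# (replace 192.168.1.10 with your Mac's actual LAN IP)
docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://192.168.1.10:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main
```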
