[GH-ISSUE #2607] Does not work on Mac? Causing System Crashes building and running #1534

Closed
opened 2026-04-12 11:26:49 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @kuro337 on GitHub (Feb 20, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2607

Originally assigned to: @dhiltgen on GitHub.

Is Ollama not meant to be run on ARM Macs?

I followed these steps

```bash
git clone git@github.com:ollama/ollama.git
cd ollama
go generate ./...
go build .

./ollama

# First time running
[1]    1651 killed     ./ollama

# After running again
./ollama

# hangs indefinitely
```

Then it hangs indefinitely - I am not able to terminate it, and even using `kill` does not work

```bash
./ollama

^C^C^C^C

# or any combination of cancels/sigterms
```
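When both ^C and `kill` appear to do nothing, the process is usually blocked in an uninterruptible kernel wait (often stuck device or disk I/O), where even SIGKILL is queued but cannot be delivered. A minimal sketch for checking this, assuming the binary is named `ollama`:

```shell
# Inspect the process state before force-killing. A process stuck in an
# uninterruptible kernel wait (STAT "U" on macOS, "D" on Linux) cannot
# receive SIGKILL until the blocking kernel call returns.
PID=$$                            # substitute: PID=$(pgrep -n ollama)
ps -o stat=,pid=,comm= -p "$PID"
# If STAT shows U (or D), `kill -9 "$PID"` stays pending; resolving the
# blocked I/O (or rebooting) is the only way out.
```

This uses the current shell's PID as a stand-in so the sketch runs anywhere; substitute the real PID as noted in the comment.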

Deleting it for now; I will try to run it on my Ubuntu machine once I get some clarification

Is this the way to run and serve a Model over HTTP?

```bash
# steps to run the REST API?

./ollama serve

./ollama run mixtral:8x7b-instruct-v0.1-q5_1

curl http://localhost:11434/api/generate -d '{
  "model": "mixtral",
  "messages": [
    { "role": "system", "content": "Explain using Async in Scala?" }
  ]
}'
```
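One likely problem with the request above, independent of the crash: `/api/generate` expects a `"prompt"` field, while a `"messages"` array belongs to the `/api/chat` endpoint. A sketch of both payloads, assuming the default server from `./ollama serve` at `localhost:11434`:

```shell
# /api/generate takes "prompt"; /api/chat takes "messages".
GENERATE_BODY='{
  "model": "mixtral:8x7b-instruct-v0.1-q5_1",
  "prompt": "Explain using Async in Scala",
  "stream": false
}'
CHAT_BODY='{
  "model": "mixtral:8x7b-instruct-v0.1-q5_1",
  "messages": [
    { "role": "user", "content": "Explain using Async in Scala" }
  ],
  "stream": false
}'

# Sanity-check the JSON before sending:
printf '%s' "$GENERATE_BODY" | python3 -m json.tool > /dev/null && echo "generate body ok"
printf '%s' "$CHAT_BODY"     | python3 -m json.tool > /dev/null && echo "chat body ok"

# With `./ollama serve` running:
#   curl http://localhost:11434/api/generate -d "$GENERATE_BODY"
#   curl http://localhost:11434/api/chat     -d "$CHAT_BODY"
```

`"stream": false` returns a single JSON object instead of a stream of chunks, which is easier to inspect when testing by hand.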

Thank you, I would appreciate any pointers.

I have the latest version of Go, running on a MacBook with 128 GB of memory.

GiteaMirror added the question label 2026-04-12 11:26:49 -05:00

@kuro337 commented on GitHub (Feb 20, 2024):

Also, for reference, I have `llama.cpp` and it works fine for running `.gguf` models - so it doesn't seem to be an issue related to system deps


@dhiltgen commented on GitHub (Feb 21, 2024):

Is it possible you're running under Rosetta?

```
% sysctl -n sysctl.proc_translated
```

If that says "1" you're emulating x86, not running on native ARM.
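A related check worth adding: even in a native ARM shell, the compiled binary itself may target x86_64 (e.g. from an x86 Go toolchain). A sketch, where the commented lines assume you are in the repo root on the Mac in question:

```shell
# Cross-check the shell, the Go toolchain target, and the build output:
uname -m                        # arm64 on Apple silicon, x86_64 under Rosetta
# go env GOARCH GOOS            # arm64 / darwin expected for a native build
# file ./ollama                 # "Mach-O 64-bit executable arm64" expected
```

If any of these disagree, the binary is running emulated even though the shell is native.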


@kuro337 commented on GitHub (Feb 21, 2024):

Running on Native ARM

```bash
sysctl -n sysctl.proc_translated
0
```

I ran this natively, not in a container, so it should be ARM - so the steps I followed were fine?

I can try again


@dhiltgen commented on GitHub (Feb 21, 2024):

In that case, perhaps some build dependency isn't satisfied. Have you followed the developer guide instructions for installing the required minimum tools? https://github.com/ollama/ollama/blob/main/docs/development.md#development

If those are satisfied and the compiled binary is still crashing, maybe there's some AV monitor on your system that is triggering? All the maintainers use ARM Macs, and I've never seen this failure mode.
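A quick sketch for checking the tools the development guide calls for (cmake and a recent Go; the exact minimum versions are listed in the guide, not here):

```shell
# Print the version of each required tool, or flag it as missing.
for tool in go cmake; do
  if command -v "$tool" > /dev/null 2>&1; then
    echo "$tool: $({ "$tool" --version 2>/dev/null || "$tool" version; } | head -n 1)"
  else
    echo "$tool: missing - install per docs/development.md"
  fi
done
```

The `--version`/`version` fallback covers both conventions (`cmake --version`, `go version`).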


@hoyyeva commented on GitHub (Mar 11, 2024):

Hi @kuro337, are you still experiencing the issue? Let us know if there is anything else we can do to help resolve it.


Reference: github-starred/ollama#1534