[GH-ISSUE #12144] Ollama hangs on mac OSX #70132

Closed
opened 2026-05-04 20:26:24 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @mmgreiner on GitHub (Sep 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12144

What is the issue?

`ollama run` hangs on macOS with the error `attach failed: attach failed (Not allowed to attach to process.`:

% sw_vers
ProductName:		macOS
ProductVersion:		15.6.1
BuildVersion:		24G90
% ollama --version
[GIN] 2025/09/01 - 19:55:07 | 200 |     414.564µs |       127.0.0.1 | GET      "/api/version"
ollama version is 0.11.8
 % ollama run tinyllama "What is ruby?"

This never returns. I also tried the App, which hangs as well.
The log file is attached below.
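For context, one way to tell whether the hang is in the HTTP server or in the model runner is to query the version endpoint directly; this is the same `/api/version` call visible in the GIN log above. The default port 11434 is an assumption here (adjust if `OLLAMA_HOST` is set):

```shell
# Sketch: check that the Ollama HTTP server responds independently of the CLI.
# Assumes the default Ollama port 11434.
curl -s http://127.0.0.1:11434/api/version

# If the server answers here but `ollama run` still hangs, the problem is
# in the spawned runner process rather than in the server itself.
```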

Relevant log output

[GIN] 2025/09/01 - 19:51:23 | 200 |      34.618µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/09/01 - 19:51:23 | 200 |   28.108038ms |       127.0.0.1 | POST     "/api/show"
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /Users/.../.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = TinyLlama
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 2048
llama_model_loader: - kv   4:                          llama.block_count u32              = 22
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 5632
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 64
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 4
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
⠙ llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,61249]   = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% for message in messages %}\n{% if m...
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   45 tensors
llama_model_loader: - type q4_0:  155 tensors
llama_model_loader: - type q6_K:    1 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_0
print_info: file size   = 606.53 MiB (4.63 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 2 ('</s>')
load: special tokens cache size = 3
load: token to piece cache size = 0.1684 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 1.10 B
print_info: general.name     = TinyLlama
print_info: vocab type       = SPM
print_info: n_vocab          = 32000
print_info: n_merges         = 0
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 2 '</s>'
print_info: LF token         = 13 '<0x0A>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
llama_model_load: vocab only - skipping tensors
time=2025-09-01T19:51:23.367+02:00 level=WARN source=server.go:172 msg="requested context size too large for model" num_ctx=4096 n_ctx_train=2048
time=2025-09-01T19:51:23.368+02:00 level=INFO source=server.go:388 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --model /Users/.../.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 --port 57788"
time=2025-09-01T19:51:23.371+02:00 level=INFO source=server.go:493 msg="system memory" total="8.0 GiB" free="2.1 GiB" free_swap="0 B"
time=2025-09-01T19:51:23.372+02:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/Users/.../.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 library=cpu parallel=1 required="0 B" gpus=1
time=2025-09-01T19:51:23.372+02:00 level=INFO source=server.go:533 msg=offload library=cpu layers.requested=-1 layers.model=23 layers.offload=0 layers.split=[] memory.available="[2.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="789.0 MiB" memory.required.partial="0 B" memory.required.kv="44.0 MiB" memory.required.allocations="[789.0 MiB]" memory.weights.total="571.4 MiB" memory.weights.repeating="520.1 MiB" memory.weights.nonrepeating="51.3 MiB" memory.graph.full="148.0 MiB" memory.graph.partial="144.3 MiB"
time=2025-09-01T19:51:23.407+02:00 level=INFO source=runner.go:864 msg="starting go runner"
/Users/runner/work/ollama/ollama/ml/backend/ggml/ggml/src/ggml.cpp:22: GGML_ASSERT(prev != ggml_uncaught_exception) failed
⠼ (lldb) process attach --pid 25527
⠧ error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
SIGABRT: abort
PC=0x7ff8164a2846 m=0 sigcode=0
signal arrived during cgo execution

OS

macOS

GPU

No response

CPU

Intel

Ollama version

0.11.8

GiteaMirror added the bug label 2026-05-04 20:26:24 -05:00

@rick-github commented on GitHub (Sep 1, 2025):

#12072


@dhiltgen commented on GitHub (Sep 2, 2025):

The fix for this will be in 0.11.9


@codingdudecom commented on GitHub (Sep 5, 2025):

I have version 0.11.10

openbmb/minicpm-v4.5:latest

500: llama runner process has terminated: error:attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)


@rick-github commented on GitHub (Sep 5, 2025):

minicpm-v4.5 is currently unsupported: #11730


@zunami commented on GitHub (Sep 8, 2025):

Describe the bug

On my new Mac mini M4 (24 GB RAM) I cannot run openbmb/minicpm-v4.5:8b with Ollama 0.11.10.
The server starts fine, but as soon as I try to run the model I get:

Error: 500 Internal Server Error: llama runner process has terminated:
error: attach failed: attach failed (Not allowed to attach to process.
Look in the console messages (Console.app), near the debugserver entries,
when the attach failed. The subsystem that denied the attach permission
will likely have logged an informative message about why it was denied.)

System info

  • Mac model: Mac mini M4 (24 GB RAM)
  • macOS version: macOS 15.6.1
  • Ollama version: 0.11.10
  • Model: openbmb/minicpm-v4.5:8b
  • Other models tested: llama3.1:8b runs fine (no error).

What I tried

  • Joined _developer group (dseditgroup -o checkmember -m "$USER" _developer → yes).
  • Enabled Developer Mode: DevToolsSecurity --enable.
  • Added Terminal + Ollama.app to Privacy & Security → Developer Tools.
  • Added Terminal to Full Disk Access.
  • Reset TCC with tccutil reset All com.apple.Terminal.
  • Reinstalled/repaired Xcode Command Line Tools.
  • Tried different context lengths (4096 / 8192 / 32768).
  • Restarted, logged out/in multiple times.
  • Also tested with sudo ollama run ... → same error.
  • Llama3 models work, MiniCPM-V 4.5 fails.
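The permission checks above can be sketched as a single pass (commands taken from the list; macOS-only, some steps require sudo):

```shell
# Sketch of the macOS debugger-permission checks described above.

# 1. Is the current user in the _developer group?
dseditgroup -o checkmember -m "$USER" _developer

# 2. Enable Developer Mode (allows debugger attachment system-wide).
sudo DevToolsSecurity --enable

# 3. Reset Terminal's TCC privacy grants so permission prompts reappear.
sudo tccutil reset All com.apple.Terminal

# 4. Watch for debugserver denial messages while reproducing the error.
log stream --predicate 'process == "debugserver" OR process == "ollama"'
```

Note that none of these helped in this case; per the comments above, the `attach failed` message is a symptom of the runner crashing (the model being unsupported), not of a missing debugger permission.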

Logs

From log stream --predicate 'process == "debugserver" OR process == "ollama"':

debugserver[...] error: Attach failed: "Not allowed to attach to process.
MachTask::TaskPortForProcessID task_for_pid(...) failed: (os/kern) failure


@rick-github commented on GitHub (Sep 8, 2025):

@zunami https://github.com/ollama/ollama/issues/12144#issuecomment-3257975337


@mmgreiner commented on GitHub (Sep 8, 2025):

Works with ollama version 0.11.10, thank you.


Reference: github-starred/ollama#70132