[GH-ISSUE #14611] [0.17.5][macOS Apple Silicon] model runner unexpectedly stopped (EOF/exit status 2) on /api/generate #55980

Open
opened 2026-04-29 10:06:10 -05:00 by GiteaMirror · 8 comments

Originally created by @ksw9722 on GitHub (Mar 4, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14611

What happened

On macOS (Apple Silicon), Ollama API health endpoints are reachable, but inference calls intermittently fail with:

  • {"error":"model runner has unexpectedly stopped, this may be due to resource limitations or an internal error"}
  • HTTP 500 from /api/generate
  • server log shows post predict ... EOF and llama runner terminated: exit status 2

This happened repeatedly during local automation workloads.

Environment

  • Ollama: 0.17.5
  • OS: macOS 26.2 (arm64)
  • Hardware: Apple M4 Pro, 64GB RAM
  • API endpoint: http://127.0.0.1:11434
  • Example model: gpt-oss:20b

Reproduction

  1. Start a clean server:
    pkill -9 -f 'ollama runner' || true
    pkill -9 -f '/usr/local/bin/ollama serve' || true
    pkill -9 -f '/opt/homebrew/bin/ollama serve' || true
    ollama serve
    
  2. Confirm API health:
    curl -sS http://127.0.0.1:11434/api/version
    
  3. Call generate:
    curl -sS http://127.0.0.1:11434/api/generate -d '{
      "model":"gpt-oss:20b",
      "prompt":"hi",
      "stream":false,
      "options":{"num_ctx":512,"num_predict":8}
    }'
    
  4. Sometimes the response is:
    {"error":"model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details"}
    

Relevant logs

From the runner/server logs around the failure:

time=2026-03-04T17:10:17.873+09:00 level=ERROR source=server.go:1611 msg="post predict" error="Post \"http://127.0.0.1:53947/completion\": EOF"
[GIN] 2026/03/04 - 17:10:17 | 500 |  6.983919458s |       127.0.0.1 | POST     "/api/generate"
time=2026-03-04T17:10:17.873+09:00 level=ERROR source=server.go:303 msg="llama runner terminated" error="exit status 2"

There is also a native crash dump section, including a register dump, in the same failure window.

Expected behavior

/api/generate should return a normal completion or a stable, actionable error without runner termination.

Notes

  • This still occurred after restarting Ollama and rebooting macOS.
  • We also reproduced it with conservative options (low num_ctx, low num_predict).

@rick-github commented on GitHub (Mar 4, 2026):

Set OLLAMA_DEBUG=1 and post the full log.
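
For example, one way to capture it (the log file name here is arbitrary):

    # Start the server with debug logging and keep a copy of the output.
    OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama-debug.log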


@ksw9722 commented on GitHub (Mar 4, 2026):

Thanks — I enabled debug and captured a fresh repro run.

Environment

  • macOS 26.2 (Apple M4 Pro, 64 GB)
  • Ollama 0.17.5

Reproduction Request

curl -sS http://127.0.0.1:11434/api/generate -d '{
  "model":"gpt-oss:20b",
  "prompt":"hi",
  "stream":false,
  "options":{"num_ctx":512,"num_predict":8}
}'

Observed Error

{"error":"model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details"}

Relevant Debug Lines

time=2026-03-04T17:10:17.873+09:00 level=ERROR source=server.go:1611 msg="post predict" error="Post \"http://127.0.0.1:53947/completion\": EOF"
[GIN] 2026/03/04 - 17:10:17 | 500 | 6.98s | 127.0.0.1 | POST "/api/generate"
time=2026-03-04T17:10:17.873+09:00 level=ERROR source=server.go:303 msg="llama runner terminated" error="exit status 2"

There is also a native crash dump section in the same window (register dump + goroutine dump). If useful, I can upload a full sanitized debug chunk.
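
For the crash window specifically, something like this pulls the surrounding context out of the captured log (a sketch; the file name matches whatever the debug run wrote):

    # Show 20 lines of context on either side of the runner termination.
    grep -B 20 -A 20 'llama runner terminated' ollama-debug.log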


@rick-github commented on GitHub (Mar 4, 2026):

Full log.


@ksw9722 commented on GitHub (Mar 4, 2026):

Uploaded full sanitized debug logs (OLLAMA_DEBUG=1 context) here:

https://gist.github.com/ksw9722/acf0dbdb7898cf7b8c8b94518ec84fe6

This includes:

  • manual-serve.log
  • server.log tail
  • full repro serve.log with crash window

If you want, I can also provide a minimal subset focused only on the exact crash timestamp window.


@ksw9722 commented on GitHub (Mar 4, 2026):

Good point — added a file-style reference for convenience:

  • ollama-debug-full-sanitized.log (sanitized)
      • Gist file: https://gist.github.com/ksw9722/acf0dbdb7898cf7b8c8b94518ec84fe6#file-ollama-debug-full-sanitized-log
      • Raw: https://gist.githubusercontent.com/ksw9722/acf0dbdb7898cf7b8c8b94518ec84fe6/raw/ollama-debug-full-sanitized.log

If preferred, I can split this into smaller per-phase files (startup / reproduce / crash window).
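
To fetch the raw log directly (same URL as above):

    # Download the sanitized debug log from the gist's raw endpoint.
    curl -sSL -o ollama-debug-full-sanitized.log \
      https://gist.githubusercontent.com/ksw9722/acf0dbdb7898cf7b8c8b94518ec84fe6/raw/ollama-debug-full-sanitized.log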


@ksw9722 commented on GitHub (Mar 6, 2026):

Update from a local repro on the same machine (Apple Silicon/macOS): setting options.num_gpu=0 makes /api/generate succeed consistently (CPU path), while the default GPU path can still hit runner termination.

Example payload that works for us:

{
  "model": "gpt-oss:20b",
  "prompt": "hi",
  "stream": false,
  "options": {
    "num_ctx": 512,
    "num_predict": 8,
    "num_gpu": 0
  }
}

So at least on our side this looks like a GPU-path-specific issue; num_gpu=0 is a temporary workaround.
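
For reference, the full request with the workaround applied (the same call as the repro, with num_gpu added):

    curl -sS http://127.0.0.1:11434/api/generate -d '{
      "model":"gpt-oss:20b",
      "prompt":"hi",
      "stream":false,
      "options":{"num_ctx":512,"num_predict":8,"num_gpu":0}
    }'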


@alexgeek commented on GitHub (Apr 1, 2026):

Just an FYI if anyone else is stuck with this: I removed -DGGML_METAL_HAS_BF16 and the models are running again. I haven't found a way to get it working with that flag still enabled.
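
If anyone wants to check where that flag comes from in their own build, a rough sketch (the source layout is an assumption; adjust for your checkout):

    # Locate where GGML_METAL_HAS_BF16 is set in the build files
    # before removing it; paths vary by ollama version.
    grep -rn 'GGML_METAL_HAS_BF16' .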


@alexgeek commented on GitHub (Apr 1, 2026):

Ah, this seems to be fixed by https://github.com/ollama/ollama/pull/14604

Reference: github-starred/ollama#55980