[GH-ISSUE #13747] Image generation not working on macOS #34771

Closed
opened 2026-04-22 18:36:24 -05:00 by GiteaMirror · 0 comments

Originally created by @csterritt on GitHub (Jan 16, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13747

### What is the issue?

Tried `x/z-image-turbo` and got the following error:

```
Error: 500 Internal Server Error: failed to start image runner: fork/exec /opt/homebrew/Cellar/ollama/0.14.1/bin/ollama-mlx: no such file or directory
```

I tried uninstalling and reinstalling ollama, but there is still no `ollama-mlx` in `/opt/homebrew/Cellar/ollama/0.14.1/bin`.
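
A quick way to confirm whether the binary was ever installed (a diagnostic sketch; the Cellar path and version are taken from the error above, and the `brew` commands are standard Homebrew, nothing Ollama-specific):

```shell
# List the bin directory named in the error message; the version may differ
# on your machine.
ls -l /opt/homebrew/Cellar/ollama/0.14.1/bin/

# Resolve the currently linked formula prefix instead of hard-coding the
# version number.
ls -l "$(brew --prefix ollama)/bin/"
```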

Thanks!

### Relevant log output

```shell
time=2026-01-16T07:28:42.892-05:00 level=INFO source=routes.go:1614 msg="server config" env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/chris/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2026-01-16T07:28:42.946-05:00 level=INFO source=images.go:499 msg="total blobs: 2222"
time=2026-01-16T07:28:42.950-05:00 level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-01-16T07:28:42.951-05:00 level=INFO source=routes.go:1667 msg="Listening on 127.0.0.1:11434 (version 0.14.1)"
time=2026-01-16T07:28:42.951-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-01-16T07:28:42.952-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="/opt/homebrew/Cellar/ollama/0.14.1/bin/ollama runner --ollama-engine --port 59072"
time=2026-01-16T07:28:43.027-05:00 level=INFO source=types.go:42 msg="inference compute" id=0 filter_id=0 library=Metal compute=0.0 name=Metal description="Apple M2" libdirs="" driver=0.0 pci_id="" type=discrete total="17.8 GiB" available="17.8 GiB"
time=2026-01-16T07:28:43.027-05:00 level=INFO source=routes.go:1708 msg="entering low vram mode" "total vram"="17.8 GiB" threshold="20.0 GiB"
[GIN] 2026/01/16 - 07:28:46 | 200 |     117.292µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/16 - 07:28:46 | 200 |    40.48175ms |       127.0.0.1 | POST     "/api/show"
time=2026-01-16T07:28:52.122-05:00 level=INFO source=server.go:149 msg="starting ollama-mlx image runner subprocess" exe=/opt/homebrew/Cellar/ollama/0.14.1/bin/ollama-mlx model=x/z-image-turbo port=59077
[GIN] 2026/01/16 - 07:28:52 | 500 |   20.152916ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/01/16 - 07:29:16 | 200 |      17.042µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/16 - 07:29:16 | 200 |   39.903708ms |       127.0.0.1 | POST     "/api/show"
time=2026-01-16T07:29:20.882-05:00 level=INFO source=server.go:149 msg="starting ollama-mlx image runner subprocess" exe=/opt/homebrew/Cellar/ollama/0.14.1/bin/ollama-mlx model=x/z-image-turbo port=59082
[GIN] 2026/01/16 - 07:29:20 | 500 |     19.4415ms |       127.0.0.1 | POST     "/api/generate"
```
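
The log shows server.go resolving the image runner path next to the main `ollama` binary (`.../bin/ollama-mlx` beside `.../bin/ollama`), so the open question is whether the Homebrew formula ships that file at all. A follow-up check, under the same assumptions as above (standard `brew` subcommands, formula name `ollama`):

```shell
# Print every file installed by the formula; if nothing matches "mlx",
# the package simply does not include the MLX image runner.
brew list ollama | grep -i mlx

# Show how the formula was installed (bottle vs. built from source).
brew info ollama
```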

### OS

_No response_

### GPU

_No response_

### CPU

_No response_

### Ollama version

_No response_

GiteaMirror added the bug label 2026-04-22 18:36:24 -05:00

Reference: github-starred/ollama#34771