[GH-ISSUE #15642] MLX runner fails on Apple M5 Max — empty libdirs in Metal GPU detection #72038

Open
opened 2026-05-05 03:23:03 -05:00 by GiteaMirror · 2 comments

Originally created by @jmvanbuskirk on GitHub (Apr 17, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15642

What is the issue?

Description

The MLX runner cannot initialize on an Apple M5 Max system. The server log shows that Metal GPU detection succeeds and
correctly identifies the chip as "Apple M5 Max", but the libdirs field is left empty. This empty path is then passed to
the MLX runner subprocess, which fails to locate libmlxc.dylib even though the library is present in the app bundle.

The issue appears to be that Ollama's Metal GPU family detection does not yet recognize the M5 Max's Metal GPU family,
so it cannot select between the bundled mlx_metal_v3 and mlx_metal_v4 variants.
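
For illustration only, a minimal Go sketch (not Ollama's actual detection code; the table contents are assumptions) of how a chip-to-variant lookup with no M5 Max entry would produce exactly the empty libdirs seen in the log:

```go
package main

import "fmt"

// Hypothetical chip-description → bundled MLX variant table.
var mlxVariantByChip = map[string]string{
	"Apple M3 Max": "mlx_metal_v3",
	"Apple M4 Max": "mlx_metal_v4",
	// "Apple M5 Max" has no entry, so the lookup below yields "".
}

func main() {
	libdirs := mlxVariantByChip["Apple M5 Max"] // zero value "" on a miss
	fmt.Printf("libdirs=%q\n", libdirs)         // prints: libdirs=""
}
```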

Steps to reproduce

  1. On an Apple M5 Max Mac running macOS 26.4.1
  2. Install Ollama 0.20.7 from the official DMG (https://ollama.com/download/mac)
  3. Run: ollama run x/z-image-turbo

Expected behavior

Model loads and runs successfully using the bundled MLX runtime.

Actual behavior

Error: failed to load model: 500 Internal Server Error: mlx runner failed: Error: failed to initialize MLX:
libmlxc.dylib not found (exit: exit status 1)

Relevant server log excerpt

time=2026-04-17T01:53:04.277-04:00 level=INFO source=types.go:42 msg="inference compute" id=0 filter_id=0
library=Metal compute=0.0 name=Metal description="Apple M5 Max" libdirs="" driver=0.0 pci_id="" type=discrete
total="107.5 GiB" available="107.5 GiB"
...
time=2026-04-17T01:53:16.398-04:00 level=INFO source=server.go:171 msg="starting mlx runner subprocess"
model=x/z-image-turbo:latest port=53004
time=2026-04-17T01:53:16.411-04:00 level=WARN source=server.go:164 msg=mlx-runner
msg="time=2026-04-17T01:53:16.411-04:00 level=ERROR msg=\"unable to initialize MLX\" error=\"failed to initialize MLX:
libmlxc.dylib not found\""

Key line: description="Apple M5 Max" libdirs=""

Confirmation that the dylibs are present

$ find /Applications/Ollama.app -iname "*mlx*"
/Applications/Ollama.app/Contents/Resources/mlx_metal_v3
/Applications/Ollama.app/Contents/Resources/mlx_metal_v3/libmlx.dylib
/Applications/Ollama.app/Contents/Resources/mlx_metal_v3/mlx.metallib
/Applications/Ollama.app/Contents/Resources/mlx_metal_v3/libmlxc.dylib
/Applications/Ollama.app/Contents/Resources/mlx_metal_v4
/Applications/Ollama.app/Contents/Resources/mlx_metal_v4/libmlx.dylib
/Applications/Ollama.app/Contents/Resources/mlx_metal_v4/mlx.metallib
/Applications/Ollama.app/Contents/Resources/mlx_metal_v4/libmlxc.dylib

Workarounds attempted (all unsuccessful)

  • Symlinked libmlx.dylib and libmlxc.dylib into /usr/local/lib/ — no effect (runner ignores dyld search path)
  • Set DYLD_FALLBACK_LIBRARY_PATH via launchctl setenv — no effect
  • Attempted to copy dylibs alongside the ollama binary in Contents/Resources/ — blocked by SIP on the signed app
    bundle

Environment

  • Hardware: Apple MacBook Pro, M5 Max, 128 GB unified memory
  • macOS: 26.4.1 (Build 25E253)
  • Ollama version: 0.20.7 (from https://ollama.com/download/mac)
  • Install path: /Applications/Ollama.app

Suggested fix

Add M5 Max to the Metal GPU family detection in the runner, and populate libdirs with the appropriate path (likely
mlx_metal_v4).
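
As a rough sketch of what that fix could look like: the function name, the chip table, and the hard-coded Resources path below are assumptions for illustration, not Ollama's real code. Unknown chips fall back to the newest bundled variant that actually ships libmlxc.dylib instead of returning an empty string.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const resources = "/Applications/Ollama.app/Contents/Resources"

// selectMLXLibdir is a hypothetical helper: known chips map to a bundled
// variant, and unknown chips fall back to the newest variant present on disk.
func selectMLXLibdir(description string) string {
	switch description {
	case "Apple M3 Max":
		return filepath.Join(resources, "mlx_metal_v3")
	case "Apple M4 Max", "Apple M5 Max": // proposed: recognize M5 Max explicitly
		return filepath.Join(resources, "mlx_metal_v4")
	}
	for _, dir := range []string{"mlx_metal_v4", "mlx_metal_v3"} {
		candidate := filepath.Join(resources, dir)
		if _, err := os.Stat(filepath.Join(candidate, "libmlxc.dylib")); err == nil {
			return candidate
		}
	}
	return ""
}

func main() {
	fmt.Println(selectMLXLibdir("Apple M5 Max"))
}
```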

Relevant log output


OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.20.7

GiteaMirror added the bug label 2026-05-05 03:23:03 -05:00

@PureBlissAK commented on GitHub (Apr 18, 2026):

🤖 Automated Triage & Analysis Report

Issue: #15642
Analyzed: 2026-04-18T18:13:42.494535

Analysis

  • Type: unknown
  • Severity: medium
  • Components: unknown

Implementation Plan

  • Effort: medium
  • Steps:

This issue has been triaged and marked for implementation.


@jmvanbuskirk commented on GitHub (Apr 18, 2026):

Symptom resolved in 0.21.0, but root cause appears unchanged

Just a heads-up: after Ollama auto-updated from 0.20.7 → 0.21.0 on my M5 Max (128 GB, macOS 26.4.1), MLX models
including x/z-image-turbo now load and run without any intervention on my end — no reinstall, no reboot, no config
change.

However, the original detection line in ~/.ollama/logs/server.log is unchanged under 0.21.0:

msg="inference compute" id=0 filter_id=0 library=Metal compute=0.0 name=Metal description="Apple M5 Max" libdirs=""
driver=0.0 ...

So the M5 Max still isn't being recognized by the Metal GPU-family detector (libdirs=""), but the MLX runner in 0.21.0
apparently no longer hard-fails when libdirs is empty — presumably a fallback path was added. Wanted to flag this in
case the underlying detection gap matters for other code paths (future MLX variants, diagnostics, etc.) even though
the user-visible failure is gone.
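
Purely to illustrate what such a fallback might look like (I have not read the 0.21.0 source; the function and paths below are assumptions), a runner-side resolver could probe the bundled variants whenever libdirs is empty:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

// resolveMLXLibdir is hypothetical: prefer an explicit libdirs, otherwise
// scan the bundled mlx_metal_v* directories for libmlxc.dylib, newest first.
func resolveMLXLibdir(libdirs, resources string) (string, error) {
	if libdirs != "" {
		return libdirs, nil
	}
	matches, err := filepath.Glob(filepath.Join(resources, "mlx_metal_v*"))
	if err != nil {
		return "", err
	}
	// Lexical descending order suffices while version suffixes stay single-digit.
	sort.Sort(sort.Reverse(sort.StringSlice(matches)))
	for _, dir := range matches {
		if _, err := os.Stat(filepath.Join(dir, "libmlxc.dylib")); err == nil {
			return dir, nil
		}
	}
	return "", fmt.Errorf("libmlxc.dylib not found under %s", resources)
}

func main() {
	dir, err := resolveMLXLibdir("", "/Applications/Ollama.app/Contents/Resources")
	fmt.Println(dir, err)
}
```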

Happy to leave this open or close it — up to the maintainers.

