[GH-ISSUE #15433] MLX models fail to load on macOS (Apple Silicon) in Ollama v0.20.4 — dynamic library not found #35624

Closed
opened 2026-04-22 20:16:30 -05:00 by GiteaMirror · 18 comments

Originally created by @charlesdrakon-cmyk on GitHub (Apr 8, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15433

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Title: MLX models fail to load on macOS (Apple Silicon) in Ollama v0.20.4 — dynamic library not found

Environment

  • Host: Apple Silicon (M4 Max)
  • OS: macOS (latest, Apple Silicon)
  • Ollama version: 0.20.4 (Homebrew)
  • Previous working version: 0.20.3
  • Installation method: Homebrew upgrade (brew upgrade ollama)
  • Launch method: launchctl (custom LaunchAgent)

Summary
After upgrading from Ollama v0.20.3 to v0.20.4, MLX-based models fail to run with a dynamic library load error. Non-MLX models (e.g., Llama 3.3) continue to function normally.

This appears to be a regression specific to MLX runner initialization or packaging in 0.20.4.


Steps to Reproduce

  1. Install Ollama v0.20.3 via Homebrew

  2. Run an MLX model (e.g., qwen3.5:35b-a3b-mlx-bf16) → works

  3. Upgrade to v0.20.4:

    brew upgrade ollama
    
  4. Restart Ollama service

  5. Run:

    ollama run qwen3.5:35b-a3b-mlx-bf16 "Reply with exactly: OK"
    

Expected Behavior
Model runs normally and returns:

OK

Actual Behavior
Fails with:

Error: 500 Internal Server Error: mlx runner failed:
Error: MLX not available: failed to load MLX dynamic library
(searched: [/opt/homebrew/Cellar/ollama/0.20.4/bin/lib/ollama
/opt/homebrew/Cellar/ollama/0.20.4/bin
/opt/homebrew/var/build/lib/ollama])
(exit: exit status 1)

Additional Observations

  • Restarting Ollama does not resolve the issue

  • MLX failure is consistent and reproducible

  • Non-MLX models work correctly:

    ollama run llama3.3:70b "Reply with exactly: OK"
    

    → returns OK

  • Rolling back to v0.20.3 fully restores MLX functionality

  • Homebrew uninstall of 0.20.4 removed mlx and mlx-c, which may be related


Impact

  • All MLX-based models unusable on macOS Apple Silicon
  • Breaks Qwen MLX workflows entirely
  • Forces rollback to 0.20.3

Notes
This appears to be either:

  • a packaging issue (missing MLX dynamic libraries), or
  • a runtime path resolution issue for MLX components in 0.20.4

Workaround
Rollback to v0.20.3 restores full functionality.


Request
Please confirm:

  • whether MLX packaging changed in 0.20.4
  • whether additional dependencies are now required
  • or if this is an unintended regression

Thanks for your work on Ollama—performance on Apple Silicon has been excellent, especially with MLX models.


Reproducibility: 100%
Severity: High (breaks MLX inference entirely)

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-22 20:16:30 -05:00

@rick-github commented on GitHub (Apr 8, 2026):

Does the official install (https://ollama.com/download/mac) work?


@charlesdrakon-cmyk commented on GitHub (Apr 8, 2026):

I have only tested the Homebrew install path so far.

My failing case was:

  • Homebrew upgrade from 0.20.3 → 0.20.4
  • Apple Silicon macOS
  • qwen3.5:35b-a3b-mlx-bf16 fails with:
    MLX not available: failed to load MLX dynamic library

llama3.3:70b still worked, so the failure appeared specific to the MLX path.

I rolled back to 0.20.3 via Homebrew and MLX functionality returned immediately.

I have not yet tested the official installer path. If useful, I can test that separately to help determine whether this is a Homebrew packaging issue versus a broader 0.20.4 MLX regression.


@aosama commented on GitHub (Apr 9, 2026):

Tried re-installing from the official installer and got the same behaviour.

Can confirm — same issue on macOS Apple Silicon with the Ollama.app (not Homebrew).

Environment: M-series Mac, Ollama v0.20.4 installed via the macOS app (not Homebrew), launchctl launch.
After the 0.20.4 auto-update, the x/flux2-klein:9b image generation model fails with:
Error: 500 Internal Server Error: mlx runner failed: Error: failed to initialize MLX: MLX: Failed to load /Applications/Ollama.app/Contents/Resources/libmlxc.dylib: dlopen(...) mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')

The libmlxc.dylib and libmlx.dylib shipped inside /Applications/Ollama.app/Contents/Resources/ are both x86_64-only in 0.20.4, while libggml-base.0.0.0.dylib is a universal binary (x86_64 + arm64). On an ARM64 Mac, the MLX runner cannot load these libraries and any MLX-based model fails immediately with exit status 1.

This was working before the 0.20.4 update. Non-MLX models continue to work fine.

file output for the 0.20.4 app bundle:

  • libmlxc.dylib: Mach-O 64-bit dynamically linked shared library x86_64
  • libmlx.dylib: Mach-O 64-bit dynamically linked shared library x86_64
  • libggml-base.0.0.0.dylib: Mach-O universal binary with 2 architectures x86_64 arm64
    The MLX libraries appear to have lost their arm64 slice in the 0.20.4 release packaging.
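For reference, the slice information above comes straight from `file`; the helper below is a hedged illustration (the `check_arch` name is made up here, and on macOS you would point it at the Ollama.app dylibs — `lipo -archs` gives the same answer):

```shell
# check_arch: show what `file` reports for a binary, making the architecture
# slices visible at a glance. Example macOS target (path from the error above):
#   check_arch /Applications/Ollama.app/Contents/Resources/libmlxc.dylib
check_arch() {
  file -b "$1"
}

# Demo on a binary that exists on any Unix system:
arches=$(check_arch /bin/ls)
echo "$arches"
```

An arm64 Mac needs an arm64 slice; `file` output showing only x86_64 confirms the packaging problem described in this comment.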

@alemata2006 commented on GitHub (Apr 9, 2026):

Ollama 0.20.4 installed via Homebrew. Fails to use x/flux2-klein:9b and x/z-image-turbo:latest with the error "Error: 500 Internal Server Error: mlx runner failed: Error: failed to initialize MLX: libmlxc.dylib not found (exit: exit status 1)"
I saw an answer for the same issue with the same library, and applied the symlink again:

ln -s `brew --prefix mlx-c`/lib/libmlxc.dylib `brew --prefix ollama`/bin

Result: images created.

Test with: ollama run qwen3.5:9b-mlx-bf16 "ok": Works with no issues

No need in my case to downgrade to another version
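One quick way to confirm a symlink workaround like this actually resolved is `readlink`; the sketch below uses mock paths (on a real install the link lives under `brew --prefix ollama`/bin and points at the mlx-c dylib):

```shell
# Build a throwaway layout mimicking the workaround, then verify the link
# resolves to the expected target.
tmp=$(mktemp -d)
touch "$tmp/libmlxc.dylib"                    # stand-in for the mlx-c library
ln -s "$tmp/libmlxc.dylib" "$tmp/link.dylib"  # stand-in for the workaround link
target=$(readlink "$tmp/link.dylib")
echo "$target"
```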


@pudquick commented on GitHub (Apr 10, 2026):

I can confirm that after building https://github.com/AlexWorland/ollama/tree/fix/m5-metal-tensor-bf16-mismatch, I was able to get it working on my own personal M5 Max.

The current preview build as of this comment (v0.20.5, https://github.com/ollama/ollama/releases/tag/v0.20.5-rc2), however, still crashes exactly like before, even with this x86_64 dylib removed from it.


@noomorph commented on GitHub (Apr 10, 2026):

Ran into the same issue on macOS 26.3.1 (Tahoe), M4 Pro, Ollama 0.20.5 installed via .app. The libmlxc.dylib files are present in Contents/Resources/mlx_metal_v4/ but the imagegen runner can't find them.

Dug into the source and I think I found the discrepancy. The regular MLX runner (x/mlxrunner/mlx/dynamic.go:204-221) globs for mlx_* subdirs directly in exeDir:

// tryLoadFromMLXSubdirs globs for mlx_* subdirs within dir
mlxDirs, err := filepath.Glob(filepath.Join(dir, "mlx_*"))

But the imagegen MLX loader (x/imagegen/mlx/mlx.go:1758-1772) only looks for mlx* subdirs inside lib/ollama/, not in exeDir itself:

for _, libOllamaDir := range []string{
    filepath.Join(exeDir, "lib", "ollama"),
    filepath.Join(exeDir, "..", "lib", "ollama"),
} {
    if mlxDirs, err := filepath.Glob(filepath.Join(libOllamaDir, "mlx*")); ...

In the .app bundle, the layout is Contents/Resources/mlx_metal_v4/libmlxc.dylib — so exeDir is Contents/Resources/ but there's no lib/ollama/ subdirectory. The regular runner finds it, the imagegen runner doesn't.

Workaround: setting OLLAMA_LIBRARY_PATH to the Resources directory resolves it:

export OLLAMA_LIBRARY_PATH=/Applications/Ollama.app/Contents/Resources

Hope this helps narrow things down.
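The discrepancy described above can be sketched with two globs over a mock bundle layout (directory names assumed from this comment; this is an illustration, not Ollama's actual code):

```shell
# Mock the .app layout: exeDir contains mlx_metal_v4/ directly and has no
# lib/ollama/ subdirectory, matching Contents/Resources in the app bundle.
exeDir=$(mktemp -d)
mkdir -p "$exeDir/mlx_metal_v4"
touch "$exeDir/mlx_metal_v4/libmlxc.dylib"

# mlxrunner-style search: glob mlx_* directly under exeDir (finds the dir)
runner_hits=$(ls -d "$exeDir"/mlx_* 2>/dev/null | wc -l)

# imagegen-style search: glob mlx* only under lib/ollama (finds nothing)
imagegen_hits=$(ls -d "$exeDir"/lib/ollama/mlx* 2>/dev/null | wc -l)

echo "runner=$runner_hits imagegen=$imagegen_hits"
```

With this layout the runner-style glob matches one directory and the imagegen-style glob matches none, which is exactly the split in behavior reported above.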


@DevdevdevA commented on GitHub (Apr 10, 2026):

I mean MLX mode auto-enables without being explicitly requested.


@W-Floyd commented on GitHub (Apr 10, 2026):

> ln -s `brew --prefix mlx-c`/lib/libmlxc.dylib `brew --prefix ollama`/bin

Formatting:

ln -s "$(brew --prefix mlx-c)/lib/libmlxc.dylib" "$(brew --prefix ollama)/bin"

@LinReger commented on GitHub (Apr 11, 2026):

I've just run into the issue. I can confirm that it is still present in v0.20.5.

ollama version is 0.20.5
❯ ollama
Error: error loading model: 500 Internal Server Error: model failed to load, this may be due to resource limitations or an internal error, check ollama server logs for details

Rolling back to v0.20.3 solves the problem.


@aosama commented on GitHub (Apr 11, 2026):

Same behavior here; still broken on 0.20.5.


@alemata2006 commented on GitHub (Apr 11, 2026):

Updated to 0.20.5, and yes, still the same issue, but the solution I proposed a couple of days ago still works. Here is the prompt to create an image after updating (macOS 26.4, Ollama installed via Homebrew, Ollama version 0.20.5).
You don't need to downgrade and lose the rest of the improvements; a simple symlink works as a workaround until this is fixed.

[Image: screenshot of an image-generation prompt succeeding on 0.20.5]

ln -s `brew --prefix mlx-c`/lib/libmlxc.dylib `brew --prefix ollama`/bin

PS: Thanks @W-Floyd for the formatting (I forgot to add it originally)


@wpostma commented on GitHub (Apr 11, 2026):

Is there some partial change to the Go build afoot? The head of the main repo doesn't even BUILD on non-Mac systems.


@surinrasu commented on GitHub (Apr 15, 2026):

btw, if you serve Ollama via brew services, ln -sf $(brew --prefix mlx-c)/lib/libmlxc.dylib $(brew --prefix)/var/build/lib/ollama/libmlxc.dylib will also work 👀


@paul90 commented on GitHub (Apr 15, 2026):

Still broken in 0.20.7


@cdevroe commented on GitHub (Apr 15, 2026):

@dhiltgen unsure why this is closed. But it is still an issue.


@milindmore22 commented on GitHub (Apr 15, 2026):

@rick-github, the issue is not fully resolved; can you please re-open it?


@sc0rp10n-py commented on GitHub (Apr 16, 2026):

OK, whoever is facing this issue, run this:

ln -s "$(brew --prefix mlx-c)/lib/libmlxc.dylib" "$(brew --prefix ollama)/bin"

This will fix the error.

I am on Ollama version 0.20.7, installed via Homebrew.


@dhiltgen commented on GitHub (Apr 17, 2026):

The brew tooling isn't part of the main Ollama repo, but my suspicion is it may need some adjustments for the new layout of the MLX libraries. Specifically, there should not be one in lib/ollama any more. We now build and distribute two versions: one supporting macOS 26 with M5 optimizations (mlx_metal_v4) and an older one supporting prior versions of macOS (mlx_metal_v3), which are selected at runtime based on which OS we detect.
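The runtime selection described here can be sketched as a simple version check (a hedged illustration; the real logic lives in Ollama's loader, and the macOS 26 cutoff is inferred from this comment):

```shell
# select_mlx_dir: pick the MLX library directory by macOS major version.
# 26 and later get the M5-optimized mlx_metal_v4; older releases get
# mlx_metal_v3.
select_mlx_dir() {
  major="$1"
  if [ "$major" -ge 26 ]; then
    echo "mlx_metal_v4"
  else
    echo "mlx_metal_v3"
  fi
}

select_mlx_dir 26   # prints mlx_metal_v4
select_mlx_dir 15   # prints mlx_metal_v3
```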

Reference: github-starred/ollama#35624