[GH-ISSUE #14118] MLX Error #34971

Open
opened 2026-04-22 19:03:56 -05:00 by GiteaMirror · 24 comments
Owner

Originally created by @hwitzthum on GitHub (Feb 6, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14118

What is the issue?

Config:
macOS version (26.2)
chip (M5)
Ollama version (0.15.5)
The MLX kernel error: MLX error: [metal::Device] Unable to load kernel.

The model loads successfully into VRAM (11.9 GB), but when it tries to actually generate an image, it fails to load a Metal GPU kernel.

Relevant log output

```shell
time=2026-02-06T11:22:05.172+01:00 level=INFO source=server.go:219 msg="mlx runner is ready" port=50052
time=2026-02-06T11:22:05.270+01:00 level=INFO source=server.go:133 msg=mlx-runner msg="MLX error: [metal::Device] Unable to load kernel affine_qmm_t_nax_bfloat16_t_gs_32_b_8_bm64_bn64_bk64_wm2_wn2_alN_true_batch_0"
time=2026-02-06T11:22:05.270+01:00 level=INFO source=server.go:133 msg=mlx-runner msg=" at /Users/runner/work/ollama/ollama/build/_deps/mlx-c-src/mlx/c/transforms.cpp:73"
[GIN] 2026/02/06 - 11:22:05 | 500 |  4.254860709s |       127.0.0.1 | POST     "/api/generate"
```

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.15.5

GiteaMirror added the bug label 2026-04-22 19:03:56 -05:00

@Extra-Citron-7630 commented on GitHub (Feb 7, 2026):

I think I am getting the same error

```
ollama --version
MLX: Failed to load symbol: mlx_metal_device_info
ollama version is 0.15.5
```

@vvanmol commented on GitHub (Feb 8, 2026):

Facing the same issue on Apple M4 Max.

Relevant logs:

```
source=server.go:147 msg="starting mlx runner subprocess" exe=/opt/homebrew/Cellar/ollama/0.15.5/bin/ollama model=x/flux2-klein:latest port=49495 mode=imagegen
source=server.go:140 msg=mlx-runner msg="MLX: Failed to load symbol: mlx_metal_device_info"
source=server.go:140 msg=mlx-runner msg="MLX: Failed to load symbol: mlx_metal_device_info"
source=server.go:140 msg=mlx-runner msg="level=ERROR msg=\"unable to initialize MLX\" error=\"failed to load MLX function symbols\""
source=server.go:140 msg=mlx-runner msg="Error: failed to load MLX function symbols"
source=server.go:362 msg="stopping mlx runner subprocess" pid=1454
```

EDIT:

Brew dependencies are installed and up to date:

```
mlx-c ✔: stable 0.5.0 (bottled)
mlx ✔: stable 0.30.5 (bottled), HEAD
```

@Abioy commented on GitHub (Feb 9, 2026):

mlx-c 0.5.0 removed mlx_metal_device_info(). But ollama loads almost all dynamic symbols in libmlxc, and InitMLX fails even though most of those symbols are not used by ollama.

A workaround is to downgrade mlx-c to 0.4.1.

I think the best way to fix this is to remove the unused libmlxc APIs from ollama.
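The failure mode described above, where initialization hard-fails if any one symbol is missing, can be avoided by resolving symbols individually and tolerating absent ones. A minimal sketch of that pattern in Python using `ctypes` (libc and the lookup here are stand-ins for illustration; this is not ollama's actual loader code):

```python
import ctypes
import ctypes.util

# Stand-in for libmlxc: load the C library, which is always present.
lib = ctypes.CDLL(ctypes.util.find_library("c"))

def optional_symbol(lib, name):
    """Resolve a symbol if present; return None instead of aborting init."""
    try:
        return getattr(lib, name)
    except AttributeError:
        return None

# A symbol that exists in the library resolves normally...
strlen = optional_symbol(lib, "strlen")

# ...while a symbol removed from the library (like mlx_metal_device_info
# in mlx-c 0.5.0) simply comes back as None, so callers can degrade
# gracefully instead of failing the whole init step.
device_info = optional_symbol(lib, "mlx_metal_device_info")

print(strlen is not None)    # True
print(device_info is None)   # True
```

With this pattern, only a call site that actually needs the removed API would fail, rather than every model load.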


@chairbender commented on GitHub (Feb 9, 2026):

Just ran into this myself. It appears to be only an issue when installing via brew. If I had to speculate (not very familiar with brew myself), the latest version of mlx-c is 0.5.0 and that's the one the brew formula is installing. I'm not sure if brew is just unable to pin a transitive dep to a specific version or if the formula is somehow mis-configured.
Or, perhaps I ran brew upgrade after installing and it upgraded mlx-c as a side-effect.

I uninstalled via:

```bash
# note: the ollama models are stored in ~/.ollama/models and AREN'T deleted
# by this, so you won't have to re-download them
brew uninstall --zap ollama
```

Then I downloaded the DMG linked from the main README (and ran it once). After that, ollama is available on the command line with no more MLX errors.


@kbrock commented on GitHub (Feb 10, 2026):

I can verify that downgrading from `5.0` to `4.1.1` fixes the problem.

## Please don't script kiddie this.

If you don't know what I'm typing or how homebrew works, I really suggest using the DMG.
I'm including this because this group tends to do some pretty crazy stuff. So I assume you are advanced users...

```bash
ollama --version

curl -LO https://raw.githubusercontent.com/Homebrew/homebrew-core/e6183090dd4753cc12a9c47eff11832b044a0702/Formula/m/mlx-c.rb
brew unlink mlx-c
HOMEBREW_DEVELOPER=1 brew install --formula ./mlx-c.rb
brew pin mlx-c # <=== WARNING

brew info mlx-c
ollama --version
```

The `brew pin` line tells brew never to upgrade mlx-c.

## Remember to `brew unpin mlx-c`

UPDATE: fixed curl. thnx


@Extra-Citron-7630 commented on GitHub (Feb 10, 2026):

> I can verify that downgrading from `5.0` to `4.1.1` fixes the problem.
>
> ## Please don't script kiddie this.
>
> If you don't know what I'm typing or how homebrew works, I really suggest using the DMG. I'm including this because this group tends to do some pretty crazy stuff. So I assume you are advanced users...
>
> ```
> ollama --version
>
> curl -O wget https://raw.githubusercontent.com/Homebrew/homebrew-core/e6183090dd4753cc12a9c47eff11832b044a0702/Formula/m/mlx-c.rb
> brew unlink mlx-c
> HOMEBREW_DEVELOPER=1 brew install --formula ./mlx-c.rb
> brew pin mlx-c # <=== WARNING
>
> brew info mlx-c
> ollama --version
> ```
>
> This says never upgrade it.
>
> ## Remember to `brew unpin mlx-c`

```bash
curl -LO https://raw.githubusercontent.com/Homebrew/homebrew-core/e6183090dd4753cc12a9c47eff11832b044a0702/Formula/m/mlx-c.rb
```

This should work; I think you combined curl and wget.


@vvanmol commented on GitHub (Feb 10, 2026):

Indeed downgrading mlx-c to 0.4.1 does fix the MLX errors.


@michaellmonaghan commented on GitHub (Feb 11, 2026):

Downgrading mlx-c appears to have worked on my system as well. I took a different approach, creating a versioned recipe for mlx-c@0.4.1 in a local repo.


@michaellmonaghan commented on GitHub (Feb 11, 2026):

Homebrew issue has been filed https://github.com/Homebrew/homebrew-core/issues/266704


@senki commented on GitHub (Feb 13, 2026):

Upgrading the Homebrew-installed ollama to 0.15.6 (CLI version) solved the problem for me, even with mlx-c 0.5.0. Previously, ollama 0.15.5 displayed the `MLX: Failed to load symbol: mlx_metal_device_info` error.

Note: no 0.16.x is available in Homebrew yet.


@aihua commented on GitHub (Feb 13, 2026):

- Reproduce:
  1. Download ollama from the release: https://github.com/ollama/ollama/releases/download/v0.16.1/ollama-darwin.tgz
  2. `tar zxvf ollama-darwin.tgz && ollama serve && ollama run x/z-image-turbo`
- The error: `failed to load model: 500 Internal Server Error: mlx runner failed: model.norm.weight (exit: exit status 1)`
- The log:

```
time=2026-02-13T12:55:02.902+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="time=2026-02-13T12:55:02.902+08:00 level=INFO msg=\"MLX library initialized\""
time=2026-02-13T12:55:02.905+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="time=2026-02-13T12:55:02.905+08:00 level=INFO msg=\"starting mlx runner\" model=x/z-image-turbo:latest port=50799 mode=imagegen"
time=2026-02-13T12:55:02.911+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="time=2026-02-13T12:55:02.911+08:00 level=INFO msg=\"detected image model type\" type=ZImagePipeline"
time=2026-02-13T12:55:02.911+08:00 level=INFO source=server.go:134 msg=mlx-runner msg="Loading Z-Image model from manifest: x/z-image-turbo:latest..."
time=2026-02-13T12:55:03.134+08:00 level=INFO source=server.go:134 msg=mlx-runner msg="  Loading tokenizer... ✓"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="Error: failed to create server: failed to load image model: failed to load zimage model: text encoder: load module: LoadModule: missing weights:"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.embed_tokens.weight"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.0.self_attn.q_proj: failed to load quantized weight model.layers.0.self_attn.q_proj: tensor \"model.layers.0.self_attn.q_proj.weight\" not found"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.0.self_attn.k_proj: failed to load quantized weight model.layers.0.self_attn.k_proj: tensor \"model.layers.0.self_attn.k_proj.weight\" not found"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.0.self_attn.v_proj: failed to load quantized weight model.layers.0.self_attn.v_proj: tensor \"model.layers.0.self_attn.v_proj.weight\" not found"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.0.self_attn.o_proj: failed to load quantized weight model.layers.0.self_attn.o_proj: tensor \"model.layers.0.self_attn.o_proj.weight\" not found"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.0.self_attn.q_norm.weight"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.0.self_attn.k_norm.weight"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.0.mlp.gate_proj: failed to load quantized weight model.layers.0.mlp.gate_proj: tensor \"model.layers.0.mlp.gate_proj.weight\" not found"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.0.mlp.up_proj: failed to load quantized weight model.layers.0.mlp.up_proj: tensor \"model.layers.0.mlp.up_proj.weight\" not found"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.0.mlp.down_proj: failed to load quantized weight model.layers.0.mlp.down_proj: tensor \"model.layers.0.mlp.down_proj.weight\" not found"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.0.input_layernorm.weight"
time=2026-02-13T12:55:04.034+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.0.post_attention_layernorm.weight"
...   ...   ...
time=2026-02-13T12:55:04.036+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.35.self_attn.q_proj: failed to load quantized weight model.layers.35.self_attn.q_proj: tensor \"model.layers.35.self_attn.q_proj.weight\" not found"
time=2026-02-13T12:55:04.036+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.35.self_attn.k_proj: failed to load quantized weight model.layers.35.self_attn.k_proj: tensor \"model.layers.35.self_attn.k_proj.weight\" not found"
time=2026-02-13T12:55:04.036+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.35.self_attn.v_proj: failed to load quantized weight model.layers.35.self_attn.v_proj: tensor \"model.layers.35.self_attn.v_proj.weight\" not found"
time=2026-02-13T12:55:04.036+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.35.self_attn.o_proj: failed to load quantized weight model.layers.35.self_attn.o_proj: tensor \"model.layers.35.self_attn.o_proj.weight\" not found"
time=2026-02-13T12:55:04.036+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.35.self_attn.q_norm.weight"
time=2026-02-13T12:55:04.036+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.35.self_attn.k_norm.weight"
time=2026-02-13T12:55:04.036+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.35.mlp.gate_proj: failed to load quantized weight model.layers.35.mlp.gate_proj: tensor \"model.layers.35.mlp.gate_proj.weight\" not found"
time=2026-02-13T12:55:04.036+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.35.mlp.up_proj: failed to load quantized weight model.layers.35.mlp.up_proj: tensor \"model.layers.35.mlp.up_proj.weight\" not found"
time=2026-02-13T12:55:04.036+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.35.mlp.down_proj: failed to load quantized weight model.layers.35.mlp.down_proj: tensor \"model.layers.35.mlp.down_proj.weight\" not found"
time=2026-02-13T12:55:04.036+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.35.input_layernorm.weight"
time=2026-02-13T12:55:04.036+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.layers.35.post_attention_layernorm.weight"
time=2026-02-13T12:55:04.036+08:00 level=WARN source=server.go:141 msg=mlx-runner msg="  model.norm.weight"
time=2026-02-13T12:55:04.082+08:00 level=INFO source=server.go:134 msg=mlx-runner msg="  Loading text encoder... "
time=2026-02-13T12:55:04.082+08:00 level=INFO source=server.go:363 msg="stopping mlx runner subprocess" pid=3488
```

@kbrock commented on GitHub (Feb 13, 2026):

@hwitzthum Is this working for you? Can we close this issue?

@michaellmonaghan Thanks. It has been >15 years since I set up a formula. Did you just follow standard instructions? https://docs.brew.sh/How-to-Create-and-Maintain-a-Tap Huh, ollama formula is in json and not ruby? wow. times change.

@aihua We're talking about Homebrew. Update your homebrew formula and see if that works. Find the homebrew formula in https://formulae.brew.sh/formula/ollama
Or if you are not on a mac, then create a different issue.

<!-- gh-comment-id:3898343067 --> @kbrock commented on GitHub (Feb 13, 2026): @hwitzthum Is this working for you? Can we close this issue? @michaellmonaghan Thanks. It has been >15 years since I set up a formula. Did you just follow standard instructions? https://docs.brew.sh/How-to-Create-and-Maintain-a-Tap Huh, ollama formula is in json and not ruby? wow. times change. @aihua We're talking about Homebrew. Update your homebrew formula and see if that works. Find the homebrew formula in https://formulae.brew.sh/formula/ollama Or if you are not on a mac, then create a different issue.
Author
Owner

@humbertowoody commented on GitHub (Feb 14, 2026):

Hi @kbrock, I just wanted to comment before the issue gets closed that I am still seeing this problem. I am running 0.15.6 via homebrew:

```console
❯ ollama --version
ollama version is 0.15.6
```

Server final logs:

```
ggml_metal_init: error: failed to initialize the Metal library
ggml_backend_metal_device_init: error: failed to allocate context
llama_init_from_model: failed to initialize the context: failed to initialize Metal backend
panic: unable to create llama context

goroutine 43 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0x14000418140, {{0x0, 0x0, 0x0}, 0x0, 0x0, 0x0, {0x0, 0x0, 0x0}, ...}, ...)
	github.com/ollama/ollama/runner/llamarunner/runner.go:849 +0x268
created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 40
	github.com/ollama/ollama/runner/llamarunner/runner.go:934 +0x680
time=2026-02-13T20:39:32.446-06:00 level=ERROR source=server.go:304 msg="llama runner terminated" error="exit status 2"
time=2026-02-13T20:39:32.560-06:00 level=INFO source=sched.go:490 msg="Load failed" model=/Users/humalcocer/.ollama/models/blobs/sha256-3c168af1dea0a414299c7d9077e100ac763370e5a98b3c53801a958a47f0a5db error="llama runner process has terminated: error:failed to allocate context\nllama_init_from_model: failed to initialize the context: failed to initialize Metal backend"
```

I have been working around it by using 0.14.3 from the official release, as concluded in https://github.com/ollama/ollama/issues/13867, and that actually works, but I am missing out on the latest features. I can provide any more information/logs required. MacBook Pro M5, 32 GB RAM.

Edit:

I ran these tests using `phi4-mini:latest` locally. In every case it works on 0.14.3 via the tarball release and fails on 0.15.6 via homebrew. I also tried `phi3:latest`, `llama3:latest`, and `gemma3:270m`, and tried wiping and re-pulling them from each version, with the same results. Happy to provide more details if needed.


@ghuh commented on GitHub (Feb 18, 2026):

For ollama v0.16.2 installed via brew, I'm getting

```
2026/02/18 11:17:55 WARN MLX dynamic library not available error="failed to load MLX dynamic library ...
```

Anyone else seeing this?


@AbeEstrada commented on GitHub (Feb 18, 2026):

> For ollama v0.16.2 installed via brew, I'm getting
>
> ```
> 2026/02/18 11:17:55 WARN MLX dynamic library not available error="failed to load MLX dynamic library ...
> ```
>
> Anyone else seeing this?

Yes, I get the same warning after the update.


@kbrock commented on GitHub (Feb 18, 2026):

Ugh, I thought they just fixed this.
It seems to happen every time mlx upgrades.


@ghuh commented on GitHub (Feb 19, 2026):

Looks like there is a [PR](https://github.com/ollama/ollama/pull/14322) out to fix this.


@hwitzthum commented on GitHub (Feb 19, 2026):

Great! None of the attempts so far have been stable enough. Best


<!-- gh-comment-id:3929472072 --> @hwitzthum commented on GitHub (Feb 19, 2026): Great! All the attempts have not been stable enough. Best Von: Kevin Hayen ***@***.***> Datum: Donnerstag, 19. Februar 2026 um 16:50 An: ollama/ollama ***@***.***> Cc: hwitzthum ***@***.***>, Mention ***@***.***> Betreff: Re: [ollama/ollama] MLX Error (Issue #14118) [https://avatars.githubusercontent.com/u/7354238?s=20&v=4]ghuh left a comment (ollama/ollama#14118)<https://github.com/ollama/ollama/issues/14118#issuecomment-3928134303> Looks like there is a PR<https://github.com/ollama/ollama/pull/14322> out to fix this — Reply to this email directly, view it on GitHub<https://github.com/ollama/ollama/issues/14118#issuecomment-3928134303>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/BFFJD4VXO5CH2MBYFJMPDPT4MXLVDAVCNFSM6AAAAACUGW6M36VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZTSMRYGEZTIMZQGM>. You are receiving this because you were mentioned.

<!-- gh-comment-id:3934807551 --> @jloh888 commented on GitHub (Feb 20, 2026):

A workaround (I think this is a problem with how Homebrew handles dependencies) is to download Ollama from the download section of the official site instead. Do a `brew uninstall ollama` first.

<!-- gh-comment-id:3937573121 --> @LouGrossi commented on GitHub (Feb 20, 2026):

# Fix: "MLX dynamic library not available" (Homebrew Ollama on macOS)

When you run `ollama` you may see:

```
WARN MLX dynamic library not available error="failed to load MLX dynamic library (searched: [path1 path2])"
```

Ollama only looks in the directories listed in that `searched:` line. The Homebrew MLX libs live elsewhere, so it never finds them. Symlinking those libs into one of the searched paths fixes it.

---

## Step 1: Get the exact search paths

Run any Ollama command and copy the full error line (including the `searched:` part):

```bash
ollama list 2>&1
```

Example:

```
WARN MLX dynamic library not available error="failed to load MLX dynamic library (searched: [/opt/homebrew/Cellar/ollama/0.16.2/bin /Users/YOUR_USERNAME/build/lib/ollama])"
NAME    ID    SIZE    MODIFIED
```

From that line, note the **two paths** inside `searched: [...]`:

- **First path:** something like `/opt/homebrew/Cellar/ollama/<version>/bin`
- **Second path:** something like `/Users/<your_username>/build/lib/ollama`

You'll symlink the libs into **one** of these (Step 3).

---

## Step 2: Find where the MLX libraries are

Homebrew puts them under its prefix. Get the paths:

```bash
# MLX (Apple)
ls "$(brew --prefix mlx)/lib/libmlx.dylib"

# MLX C API (used by Ollama)
ls "$(brew --prefix mlx-c)/lib/libmlxc.dylib"
```

If both commands print a path, you're good. Typical values:

- `$(brew --prefix mlx)/lib` → e.g. `/opt/homebrew/opt/mlx/lib`
- `$(brew --prefix mlx-c)/lib` → e.g. `/opt/homebrew/opt/mlx-c/lib`

---

## Step 3: Choose the directory to symlink into

Use **one** of the two paths from Step 1:

| Option | Path (from error) | Who can write |
|--------|-------------------|---------------|
| **A** | Second path: `.../build/lib/ollama` (under your home) | You (no sudo) |
| **B** | First path: `.../Cellar/ollama/.../bin` | Needs `sudo` |

**Recommended:** Use the **second** path (the one under your home, e.g. `/Users/<you>/build/lib/ollama`). Then you don't need sudo and the fix is per user.

Set a variable to that directory (replace with the real path from your error):

```bash
# Example: use the path under your home (second path from the error)
OLLAMA_LIB_SEARCH_DIR="$HOME/build/lib/ollama"
```

If you chose the Cellar path (Option B), use that path instead and run the next steps with `sudo` where needed.

---

## Step 4: Create the directory and symlink the libraries

```bash
# Create the directory Ollama searches
mkdir -p "$OLLAMA_LIB_SEARCH_DIR"

# Symlink the MLX libraries into it
ln -sf "$(brew --prefix mlx)/lib/libmlx.dylib"    "$OLLAMA_LIB_SEARCH_DIR/"
ln -sf "$(brew --prefix mlx-c)/lib/libmlxc.dylib" "$OLLAMA_LIB_SEARCH_DIR/"
```

If you chose the Cellar path and don't have write access, use `sudo` for `mkdir` and `ln` and set `OLLAMA_LIB_SEARCH_DIR` to that path (e.g. `/opt/homebrew/Cellar/ollama/0.16.2/bin`).

---

## Step 5: Confirm the fix

Run Ollama again; the MLX warning should be gone:

```bash
ollama list
```

You should see only the model table (and no "MLX dynamic library not available" line).

---

## One-liner (after you know the search path)

If your error lists `$HOME/build/lib/ollama` as one of the search paths, you can do everything in one go:

```bash
mkdir -p "$HOME/build/lib/ollama" && \
ln -sf "$(brew --prefix mlx)/lib/libmlx.dylib" "$HOME/build/lib/ollama/" && \
ln -sf "$(brew --prefix mlx-c)/lib/libmlxc.dylib" "$HOME/build/lib/ollama/"
```

Then run `ollama list` again to verify.

---

## Why this works

- The **error message tells you exactly which directories** Ollama searches for the MLX library (`searched: [path1 path2]`).
- Homebrew installs the real libraries under `brew --prefix mlx` and `brew --prefix mlx-c`.
- By creating one of the searched directories and putting symlinks to those libs there, Ollama finds them and the warning goes away.
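The symlink approach above can be rehearsed safely in a scratch directory before touching real paths. The sketch below simulates the mechanism with a placeholder file; in the real fix, the source would be `$(brew --prefix mlx-c)/lib` and the destination one of the directories from the `searched: [...]` line (all paths here are illustrative):

```shell
#!/bin/sh
set -eu

# Scratch stand-ins for the real locations (illustrative only).
WORK="$(mktemp -d)"
SRC="$WORK/opt/mlx-c/lib"      # where Homebrew would install the lib
DST="$WORK/build/lib/ollama"   # a directory Ollama would search

mkdir -p "$SRC" "$DST"
printf 'placeholder' > "$SRC/libmlxc.dylib"   # not a real dylib

# The actual fix: symlink the lib into the searched directory.
ln -sf "$SRC/libmlxc.dylib" "$DST/libmlxc.dylib"

# Verify the link resolves to a readable file, as a loader would require.
[ -r "$DST/libmlxc.dylib" ] && echo "link ok"

rm -rf "$WORK"
```

Because `ln -sf` replaces an existing link, re-running the real commands after a Homebrew upgrade is harmless.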

<!-- gh-comment-id:3938612368 --> @dvessel commented on GitHub (Feb 21, 2026):

The recent update through Homebrew still outputs the error (0.16.3). A temporary fix is to soft-link the dylib into the Ollama bin directory. The next time it updates, the link will be discarded, *and hopefully it gets fixed*.

```sh
ln -s `brew --prefix mlx-c`/lib/libmlxc.dylib `brew --prefix ollama`/bin
```

<!-- gh-comment-id:3938763662 --> @Mottl commented on GitHub (Feb 21, 2026):

Another option is to add `OLLAMA_LIBRARY_PATH` to your environment variables.

For bash:

```sh
export OLLAMA_LIBRARY_PATH=$(brew --prefix)/lib
```

For fish:

```sh
set -Ux OLLAMA_LIBRARY_PATH $(brew --prefix)/lib
```
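The environment-variable route can be sanity-checked the same way: after setting `OLLAMA_LIBRARY_PATH`, confirm the directory it points at actually contains the MLX C library. A minimal check follows; the temp directory and placeholder file are illustrative, and in real use the variable would point at `$(brew --prefix)/lib`:

```shell
#!/bin/sh
set -eu

# Illustrative stand-in; for real use you would instead run:
#   export OLLAMA_LIBRARY_PATH="$(brew --prefix)/lib"
LIBDIR="$(mktemp -d)"
: > "$LIBDIR/libmlxc.dylib"   # placeholder file, not a real dylib
export OLLAMA_LIBRARY_PATH="$LIBDIR"

# The check itself: does the override directory hold libmlxc.dylib?
if [ -e "$OLLAMA_LIBRARY_PATH/libmlxc.dylib" ]; then
    echo "OLLAMA_LIBRARY_PATH looks usable"
else
    echo "libmlxc.dylib not found in OLLAMA_LIBRARY_PATH" >&2
    exit 1
fi

rm -rf "$LIBDIR"
```

Note that an exported variable only affects processes started from that shell; if Ollama runs as a background service, it must be set in that service's environment instead.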

<!-- gh-comment-id:3939881435 --> @kbrock commented on GitHub (Feb 22, 2026):

@LouGrossi Nice write-up. For me, path A is actually using `$PWD` rather than `$HOME`. I think that worked for you because you were in `~` when you did your research?

@Mottl Good find. I think the issue is that we set the MLX path via `@rpath` rather than an absolute path. Since `mlx-c` is linked into `$(brew --prefix)/lib`, we should set that in the formula. Then we can avoid having the end user set `OLLAMA_LIBRARY_PATH`.

Looking into the formula. Please ping me if you already solved this.

<!-- gh-comment-id:3977283830 --> @ilaikim99 commented on GitHub (Feb 28, 2026):

This is a transitive dependency version conflict: mlx-c 0.5.0 removed a symbol that Ollama loads at startup. Three ways to fix it: https://cacheoverflow.dev/blog/fJkb7kkq
Reference: github-starred/ollama#34971