[GH-ISSUE #14016] Docker 0.15.3 failed to initialize MLX #71221

Open
opened 2026-05-05 00:43:17 -05:00 by GiteaMirror · 12 comments

Originally created by @Slawka on GitHub (Feb 1, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14016

### What is the issue?

Image: `ollama/ollama:0.15.3`

The problem occurred in version 0.15.3 when running either of:

ollama run x/z-image-turbo:latest
ollama run x/flux2-klein

### Relevant log output

```shell
root@907ff5fce378:/# ollama run x/z-image-turbo:latest
Error: failed to load model: 500 Internal Server Error: image runner failed: Error: failed to initialize MLX: MLX: Failed to load libmlxc library. Tried: ./build/lib/ollama/libmlxc.so, libmlxc.so. Last error: libquadmath.so.0: cannot open shared object file: No such file or directory (exit: exit status 1)
```
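
The `Last error` line shows that the dynamic loader cannot resolve `libquadmath.so.0`. A quick way to confirm which shared libraries are missing inside the container is `ldd` (a diagnostic sketch; the library path is taken from later reports in this thread, and the sample output is illustrative):

```console
# List the unresolved shared-library dependencies of the MLX runner.
# Path assumed from other reports in this thread; adjust if it differs.
$ ldd /usr/lib/ollama/mlx_cuda_v13/libmlxc.so | grep "not found"
        libquadmath.so.0 => not found    # illustrative output
```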

### OS

Docker

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.15.3

GiteaMirror added the bug label 2026-05-05 00:43:17 -05:00

@rick-github commented on GitHub (Feb 1, 2026):

https://github.com/ollama/ollama/pull/13984


@rick-github commented on GitHub (Feb 1, 2026):

Use the following Dockerfile to create an ollama container with the required installs without doing a full build:

```dockerfile
FROM ollama/ollama:latest
RUN apt update && apt install -y libquadmath0 wget
ENV CUDA_LIBS="                                           \
        cuda-toolkit-config-common_13.0.48-1_all.deb      \
        cuda-toolkit-13-config-common_13.0.48-1_all.deb   \
        cuda-toolkit-13-0-config-common_13.0.48-1_all.deb \
        cuda-cudart-13-0_13.0.48-1_amd64.deb              \
        cuda-cccl-13-0_13.0.50-1_amd64.deb                \
        cuda-culibos-dev-13-0_13.0.39-1_amd64.deb         \
        cuda-driver-dev-13-0_13.0.48-1_amd64.deb          \
        cuda-cudart-dev-13-0_13.0.48-1_amd64.deb          \
        cuda-nvrtc-13-0_13.0.48-1_amd64.deb               \
        "
RUN for i in $CUDA_LIBS ; do wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/$i ; done && dpkg -i $CUDA_LIBS
RUN cd /usr/local/cuda-13.0/targets/x86_64-linux && ln -s cccl/cuda include/cuda && ln -s /usr/local/cuda-13.0/targets/x86_64-linux/include /usr/lib/ollama/mlx_cuda_v13/include
```
```console
$ docker build -t ollama/ollama:imagegen . -f Dockerfile
$ docker run --rm -d --gpus all -v /usr/share/ollama/.ollama:/root/.ollama --name imagegen ollama/ollama:imagegen
$ docker exec -it imagegen ollama run x/z-image-turbo:fp8 "tux penguin and llama sitting side by side, with a sign between them that says 'docker'"
```
*(Attached image: generated output from the prompt above.)*
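
One way to check that the rebuilt image actually resolves the runner's dependencies (a verification sketch; the library path matches the symlink step in the Dockerfile above):

```console
# An empty result means every shared-library dependency now resolves.
$ docker exec imagegen sh -c 'ldd /usr/lib/ollama/mlx_cuda_v13/libmlxc.so | grep "not found"'
```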

@Slawka commented on GitHub (Feb 8, 2026):

> ollama run x/z-image-turbo:fp8 "tux penguin and llama sitting side by side, with a sign between them that says 'docker'"

```console
root@f57de88e0670:/# ollama run x/flux2-klein "tux penguin and llama sitting side by side, with a sign between them that says 'docker'"
Error: 500 Internal Server Error: mlx runner exited unexpectedly: exit status 255
```


@rick-github commented on GitHub (Feb 8, 2026):

Logs?
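
(For reference: with the Docker image, the server log can usually be captured with `docker logs`, e.g. for a container started as in the earlier example:)

```console
# Show the most recent server output from the running container.
$ docker logs --tail 200 imagegen
```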


@rick-github commented on GitHub (Feb 8, 2026):

Probably #14046.


@Hello-World-Traveler commented on GitHub (Feb 14, 2026):

@rick-github Do you know when your PR for Docker will be merged?


@rick-github commented on GitHub (Feb 14, 2026):

No idea. I have PRs that are over a year old; I have no clue what conditions have to be met for a PR to even be reviewed, let alone approved.


@Hello-World-Traveler commented on GitHub (Feb 14, 2026):

@rick-github Thank you for the PR. I did notice that a review hasn't been requested for it yet; could that be part of the delay?


@onoraba commented on GitHub (Mar 19, 2026):

$ ollama run x/z-image-turbo:bf16 "test"

Error: 500 Internal Server Error: mlx runner failed: Error: failed to initialize MLX: libmlxc.so not found (exit: exit status 1)

Alpine v3.23.3, Ollama v0.18.0.
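
A first diagnostic step here (a sketch, not a confirmed fix) would be to check whether the runner library exists at all and, if so, whether its dependencies resolve; note that the official Ollama binaries are built against glibc, which Alpine's musl libc may not satisfy without a compatibility layer (an assumption worth verifying):

```console
# Locate any copies of the MLX runner library on the system.
$ find / -name 'libmlxc.so*' 2>/dev/null
# If a copy is found, list its unresolved dependencies
# (replace the path with the one reported by find).
$ ldd /path/to/libmlxc.so | grep "not found"
```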


@rick-github commented on GitHub (Mar 19, 2026):

How was ollama installed?


@adjenks commented on GitHub (Apr 21, 2026):

I'm getting:

"failed to initialize MLX: MLX: Failed to load /usr/lib/ollama/mlx_cuda_v13/libmlxc.so"

in the latest Docker container.


@jeff-kelley commented on GitHub (Apr 22, 2026):

On bare metal:

```console
$ ollama run x/z-image-turbo
Error: failed to load model: 500 Internal Server Error: mlx runner failed: Error: failed to initialize MLX: MLX: Failed to load /usr/local/lib/ollama/mlx_cuda_v13/libmlxc.so: libmlx.so: cannot open shared object file: No such file or directory (exit: exit status 1)

$ ls -l /usr/local/lib/ollama/mlx_cuda_v13/libmlxc.so
-rwxr-xr-x 1 root root 674360 Apr 22 04:22 /usr/local/lib/ollama/mlx_cuda_v13/libmlxc.so
```

Ollama version: 0.21.1
NVIDIA driver version: 550.163.01
CUDA version: 12.4
Debian 13
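
Here the wrapper `libmlxc.so` is present, but its own dependency `libmlx.so` fails to resolve. A diagnostic sketch (path taken from the error above; output illustrative):

```console
# List which of libmlxc.so's dependencies fail to resolve.
$ ldd /usr/local/lib/ollama/mlx_cuda_v13/libmlxc.so | grep "not found"
        libmlx.so => not found    # illustrative output, inferred from the error
# Check whether libmlx.so was installed alongside the wrapper.
$ ls /usr/local/lib/ollama/mlx_cuda_v13/
```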
