[GH-ISSUE #12703] docker build not successful #8431

Open
opened 2026-04-12 21:06:45 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @fahadshery on GitHub (Oct 20, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12703

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I have 16 x Nvidia A16 GPUs.
I am trying to change the number of backends at this location:

https://github.com/ollama/ollama/blob/0aa8b371ddd24a2d0ce859903a9284e9544f5c78/ml/backend/ggml/ggml/src/ggml-backend.cpp#L611-L613

Instructions were given here: https://github.com/ollama/ollama/issues/10705

I tried different combinations like:

#ifndef GGML_SCHED_MAX_BACKENDS
define GGML_SCHED_MAX_BACKENDS 16
#endif

and

ifndef GGML_SCHED_MAX_BACKENDS
define GGML_SCHED_MAX_BACKENDS 16
endif
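Note that neither variant above is valid preprocessor syntax: every directive needs a leading `#`, which the first variant drops from `define` and the second drops from all three lines. An illustrative check (assumes a C compiler is available as `cc`; the file names are made up):

```shell
# The correct guard compiles and supplies a default value.
cat > guard_ok.c <<'EOF'
#ifndef GGML_SCHED_MAX_BACKENDS
#define GGML_SCHED_MAX_BACKENDS 16
#endif
int main(void) { return GGML_SCHED_MAX_BACKENDS == 16 ? 0 : 1; }
EOF
cc guard_ok.c -o guard_ok && ./guard_ok && echo "correct guard: OK"

# Without the leading '#', `ifndef`/`define`/`endif` are parsed as
# ordinary C tokens and the file fails to compile.
cat > guard_bad.c <<'EOF'
ifndef GGML_SCHED_MAX_BACKENDS
define GGML_SCHED_MAX_BACKENDS 16
endif
int main(void) { return 0; }
EOF
cc guard_bad.c -o guard_bad 2>/dev/null || echo "missing '#': compile error"
```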

then tried to build the Docker image using:

docker build .

I get the following error when building the docker image:

6.286 -- Found Threads: TRUE
6.287 -- Enabling coopmat glslc support
6.287 -- Enabling coopmat2 glslc support
6.287 -- Enabling dot glslc support
6.287 -- Enabling bfloat16 glslc support
6.287 -- Configuring done (1.0s)
6.294 -- Generating done (0.0s)
6.294 -- Build files have been written to: //build/ml/backend/ggml/ggml/src/ggml-vulkan/vulkan-shaders-gen-prefix/src/vulkan-shaders-gen-build
6.315 [  5%] Performing build step for 'vulkan-shaders-gen'
6.369 [ 50%] Building CXX object CMakeFiles/vulkan-shaders-gen.dir/vulkan-shaders-gen.cpp.o
15.12 [100%] Linking CXX executable vulkan-shaders-gen
15.21 [100%] Built target vulkan-shaders-gen
15.23 [  7%] Performing install step for 'vulkan-shaders-gen'
15.24 -- Installing: //build/Release/./vulkan-shaders-gen
15.24 [  8%] Completed 'vulkan-shaders-gen'
15.28 [  8%] Built target vulkan-shaders-gen
15.28 gmake[1]: *** [CMakeFiles/Makefile2:767: ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/rule] Error 2
15.28 gmake: *** [Makefile:391: ggml-vulkan] Error 2
------
Dockerfile:121
--------------------
 120 |     FROM base AS vulkan
 121 | >>> RUN --mount=type=cache,target=/root/.ccache \
 122 | >>>     cmake --preset 'Vulkan' -DOLLAMA_RUNNER_DIR="vulkan" \
 123 | >>>         && cmake --build --parallel --preset 'Vulkan' \
 124 | >>>         && cmake --install build --component Vulkan --strip --parallel 8
 125 |
--------------------
ERROR: failed to solve: process "/bin/sh -c cmake --preset 'Vulkan' -DOLLAMA_RUNNER_DIR=\"vulkan\"         && cmake --build --parallel --preset 'Vulkan'         && cmake --install build --component Vulkan --strip --parallel 8" did not complete successfully: exit code: 2

any ideas?

Relevant log output


OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

ollama version is 0.6.8 Warning: client version is 0.3.6

GiteaMirror added the bug label 2026-04-12 21:06:45 -05:00

@rick-github commented on GitHub (Oct 20, 2025):

#ifndef GGML_SCHED_MAX_BACKENDS
#define GGML_SCHED_MAX_BACKENDS 17
#endif

@fahadshery commented on GitHub (Oct 20, 2025):

#ifndef GGML_SCHED_MAX_BACKENDS
#define GGML_SCHED_MAX_BACKENDS 17
#endif

I don't think this is the recommended way to do it. I didn't change the backend count in the given file:

https://github.com/ollama/ollama/blob/0aa8b371ddd24a2d0ce859903a9284e9544f5c78/ml/backend/ggml/ggml/src/ggml-backend.cpp#L611-L613

There were two issues:

Missing Vulkan stuff to build the image. I had to add the following lines after the line FROM base AS vulkan in the Dockerfile:

FROM base AS vulkan
# Install Vulkan runtime + development loader to support ggml-vulkan
RUN dnf install -y \
      vulkan-headers \
      vulkan-loader-devel \
      mesa-vulkan-drivers \
      mesa-libGL-devel \
      && dnf clean all

# Ensure the copied Vulkan SDK libraries are discoverable
ENV LD_LIBRARY_PATH=/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
RUN ldconfig

And then built the image, simply specifying CMAKE_DEFINES="-DGGML_SCHED_MAX_BACKENDS=16":

docker build --build-arg PARALLEL=8 --no-cache --build-arg CMAKE_DEFINES="-DGGML_SCHED_MAX_BACKENDS=16" -t ollama-by-source .

This built the image successfully.

I then ran the image to check whether I could see all of my GPUs:

docker run -it --gpus all --device=/dev/dri --entrypoint /bin/bash ollama-by-source:latest

And then:

root@3f09533fb4de:/# nvidia-smi -L
GPU 0: NVIDIA A16 (UUID: GPU-5d1b9c39-3f66-bf0b-ba6e-9a8bad9fd2c2)
GPU 1: NVIDIA A16 (UUID: GPU-6b8708fd-f465-4471-825a-1cc3c7806171)
GPU 2: NVIDIA A16 (UUID: GPU-42749374-23a2-32e6-41ec-320b7f7ee0f2)
...[snip]

I am happy to create a PR if you want.


@rick-github commented on GitHub (Oct 20, 2025):

And then built the image by simply specifying the CMAKE_DEFINES="-DGGML_SCHED_MAX_BACKENDS=16":

If you set "-DGGML_SCHED_MAX_BACKENDS=16" then you have made no change to the compiled code.


@fahadshery commented on GitHub (Oct 20, 2025):

If you set "-DGGML_SCHED_MAX_BACKENDS=16" then you have made no change to the compiled code.

I was unable to run the qwen3:30b model before this image. Now I am able to run it, although I am still not utilising all of my GPUs?

Image

Additionally, I am not sure why I am now getting the error message for other models that used to work fine, like dolphin3:8b?

Could you also confirm whether #define GGML_SCHED_MAX_BACKENDS 16 includes the CPU?

And if I do have to change #define GGML_SCHED_MAX_BACKENDS 16, what exactly do I change? According to that line it's already defaulting to 16 GPUs? It's a little confusing now.


@rick-github commented on GitHub (Oct 20, 2025):

GGML_SCHED_MAX_BACKENDS is the number of backends. You have 16 GPUs and at least one CPU. 16 + 1 = 17.

ollama schedules a model onto the minimum number of GPUs that can hold it, as that is more efficient. If you wish to spread a model across all available GPUs, set OLLAMA_SCHED_SPREAD=1.


@fahadshery commented on GitHub (Oct 20, 2025):

Here is what I did; could you reconfirm that this is all I needed:

#ifndef GGML_SCHED_MAX_BACKENDS
#define GGML_SCHED_MAX_BACKENDS 17
#endif

Dockerfile

Add the following lines to include headers for Vulkan etc.:

FROM base AS vulkan
# Install Vulkan runtime + development loader to support ggml-vulkan
RUN dnf install -y \
      vulkan-headers \
      vulkan-loader-devel \
      mesa-vulkan-drivers \
      mesa-libGL-devel \
      && dnf clean all

# Ensure the copied Vulkan SDK libraries are discoverable
ENV LD_LIBRARY_PATH=/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
RUN ldconfig

# Build Vulkan backend
RUN --mount=type=cache,target=/root/.ccache \
    cmake --preset 'Vulkan' -DOLLAMA_RUNNER_DIR="vulkan" \
    && cmake --build --parallel --preset 'Vulkan' \
    && cmake --install build --component Vulkan --strip --parallel 8

Build the image:

docker build --no-cache -t ollama-by-source .

Then run the image, get a shell inside the container, and confirm you can see all your GPUs:

# nvidia-smi -L
GPU 0: NVIDIA A16 (UUID: GPU-5d1b9c39-3f66-bf0b-ba6e-9a8bad9fd2c2)
GPU 1: NVIDIA A16 (UUID: GPU-6b8708fd-f465-4471-825a-1cc3c7806171)
GPU 2: NVIDIA A16 (UUID: GPU-42749374-23a2-32e6-41ec-320b7f7ee0f2)
GPU 3: NVIDIA A16 (UUID: GPU-fde2ba35-9e05-9566-7af7-a75010f0f1c0)
...[snip]

And finally, where do I set OLLAMA_SCHED_SPREAD=1? I can't find this option in the cpp file.
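For reference, OLLAMA_SCHED_SPREAD is not a compile-time option in the cpp sources; it is an environment variable the ollama server reads at startup, so it is set where the server process is launched. A sketch (the docker run invocation is illustrative, reusing the image name from earlier in the thread):

```shell
# With the container, the variable would be passed at run time, e.g.:
#   docker run -d --gpus all -e OLLAMA_SCHED_SPREAD=1 ollama-by-source:latest
# The underlying mechanism, demonstrated with a plain child process
# that inherits the variable from its environment:
OLLAMA_SCHED_SPREAD=1 sh -c 'echo "spread=${OLLAMA_SCHED_SPREAD:-unset}"'
```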

Reference: github-starred/ollama#8431