[GH-ISSUE #15151] compiling ollama in debian 12. seems to try to run yum from redhat using build_linux.sh #56213

Closed
opened 2026-04-29 10:26:19 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @jcharth on GitHub (Mar 30, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15151

What is the issue?

I am trying to compile from source to use my M60 GPU. It is really hard; any suggestions?

 => ERROR [linux/arm64 base-arm64 2/2] RUN yum install -y yum-utils epel-release     && dnf install -y clang ccache git     && yum-config-manager --add-repo https://develop  4.4s
 => CANCELED [linux/arm64 jetpack-6 2/5] RUN apt-get update && apt-get install -y curl ccache unzip     && curl -fsSL https://github.com/Kitware/CMake/releases/download/v3.  0.0s
 => CANCELED [linux/arm64 jetpack-5 2/5] RUN apt-get update && apt-get install -y curl ccache unzip     && curl -fsSL https://github.com/Kitware/CMake/releases/download/v3.  0.0s
 => CANCELED [linux/amd64 base-amd64 2/2] RUN dnf install -y yum-utils ccache gcc-toolset-11-gcc gcc-toolset-11-gcc-c++ gcc-toolset-11-binutils     && yum-config-manager --  0.0s
------
 > [linux/arm64 base-arm64 2/2] RUN yum install -y yum-utils epel-release     && dnf install -y clang ccache git     && yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/sbsa/cuda-rhel8.repo:
4.249 exec /bin/sh: exec format error
------

 2 warnings found (use docker --debug to expand):
 - FromPlatformFlagConstDisallowed: FROM --platform flag should not use constant value "linux/arm64" (line 95)
 - FromPlatformFlagConstDisallowed: FROM --platform flag should not use constant value "linux/arm64" (line 111)
Dockerfile:24
--------------------
  23 |     # install epel-release for ccache
  24 | >>> RUN yum install -y yum-utils epel-release \
  25 | >>>     && dnf install -y clang ccache git \
  26 | >>>     && yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/sbsa/cuda-rhel8.repo
  27 |     ENV CC=clang CXX=clang++
--------------------
ERROR: failed to build: failed to solve: process "/bin/sh -c yum install -y yum-utils epel-release     && dnf install -y clang ccache git     && yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/sbsa/cuda-rhel8.repo" did not complete successfully: exit code: 255
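A note on the failure above: an `exec format error` inside a `linux/arm64` build stage on an x86_64 host usually means the host cannot execute arm64 binaries because QEMU user-mode emulation is not registered. A possible workaround, assuming Docker with buildx is available (the image and target platform here are assumptions based on the log, not confirmed fixes):

```shell
# Register QEMU binfmt handlers so foreign-architecture stages can run:
docker run --privileged --rm tonistiigi/binfmt --install arm64

# Or avoid the foreign stages entirely and build only for the host platform:
docker buildx build --platform linux/amd64 -t ollama/ollama:m60 .
```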

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response
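For reference, a plain from-source build outside Docker looks roughly like the following (a sketch based on ollama's development docs as of recent versions; exact steps and requirements may differ by release, and CUDA/driver setup is separate):

```shell
# Requires Go, CMake, and a C/C++ toolchain.
git clone https://github.com/ollama/ollama.git
cd ollama
cmake -B build        # configure native backends (CUDA is picked up if present)
cmake --build build   # compile the GGML backends
go run . serve        # build and start the server
```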

GiteaMirror added the bug label 2026-04-29 10:26:19 -05:00
@rick-github commented on GitHub (Mar 30, 2026):

What CPU architecture?

@jcharth commented on GitHub (Mar 30, 2026):

6.1.0-44-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.164-1 (2026-03-09) x86_64 GNU/Linux
It is running on an i7-3770 in an old Dell 7010.
It seems like it will only compile on Red Hat. Maybe I need to run Ollama on Fedora instead of Debian.

@rick-github commented on GitHub (Mar 30, 2026):

If you are compiling in docker, the host system shouldn't matter. Try:

docker build -t ollama/ollama:m60 .
@jcharth commented on GitHub (Mar 31, 2026):

I was able to use the snap Go package, and it compiled without GPU support.

I'll give it a try. I got an error almost at the end; it could be because of CUDA version 12.6.

cmake -B build
cmake --build build

/home/john/test/ollama/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp: In function ‘void ggml_vk_print_gpu_info(size_t)’:
/home/john/test/ollama/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5053:5: error: ‘VkPhysicalDeviceCooperativeMatrixFeaturesKHR’ was not declared in this scope; did you mean ‘VkPhysicalDeviceCooperativeMatrixFeaturesNV’?
 5053 |     VkPhysicalDeviceCooperativeMatrixFeaturesKHR coopmat_features;
      |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |     VkPhysicalDeviceCooperativeMatrixFeaturesNV
/home/john/test/ollama/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5054:5: error: ‘coopmat_features’ was not declared in this scope
 5054 |     coopmat_features.pNext = nullptr;
      |     ^~~~~~~~~~~~~~~~
/home/john/test/ollama/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5055:30: error: ‘VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_COOPERATIVE_MATRIX_FEATURES_KHR’ was not declared in this scope; did you mean ‘VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_COOPERATIVE_MATRIX_FEATURES_NV’?
 5055 |     coopmat_features.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_COOPERATIVE_MATRIX_FEATURES_KHR;
      |                              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                              VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_COOPERATIVE_MATRIX_FEATURES_NV
/home/john/test/ollama/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp: In function ‘void ggml_vk_instance_init()’:
/home/john/test/ollama/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5168:21: error: ‘LayerSettingEXT’ is not a member of ‘vk’
 5168 |     std::vector<vk::LayerSettingEXT> settings = {
      |                     ^~~~~~~~~~~~~~~
/home/john/test/ollama/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5168:36: error: template argument 1 is invalid
 5168 |     std::vector<vk::LayerSettingEXT> settings = {
      |                                    ^
/home/john/test/ollama/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5168:36: error: template argument 2 is invalid
/home/john/test/ollama/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5172:17: error: ‘vk::LayerSettingTypeEXT’ has not been declared
 5172 |             vk::LayerSettingTypeEXT::eBool32,
      |                 ^~~~~~~~~~~~~~~~~~~
/home/john/test/ollama/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5176:5: error: too many braces around scalar initializer for type ‘int’
 5176 |     };
      |     ^
/home/john/test/ollama/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5177:9: error: ‘LayerSettingsCreateInfoEXT’ is not a member of ‘vk’; did you mean ‘ImageViewMinLodCreateInfoEXT’?
 5177 |     vk::LayerSettingsCreateInfoEXT layer_setting_info(settings);
      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~
      |         ImageViewMinLodCreateInfoEXT
/home/john/test/ollama/ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5178:108: error: ‘layer_setting_info’ was not declared in this scope; did you mean ‘layer_settings’?
 5178 |     vk::InstanceCreateInfo instance_create_info(vk::InstanceCreateFlags{}, &app_info, layers, extensions, &layer_setting_info);
      |                                                                                                            ^~~~~~~~~~~~~~~~~~
      |                                                                                                            layer_settings
gmake[2]: *** [ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/build.make:938: ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan.cpp.o] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:629: ml/backend/ggml/ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/all] Error 2
gmake: *** [Makefile:136: all] Error 2
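The errors above all reference symbols from `VK_KHR_cooperative_matrix` and `VK_EXT_layer_settings`, which only exist in fairly recent Vulkan headers, so this is consistent with outdated distro-packaged headers rather than CUDA. A sketch of how one might diagnose and work around it (the `GGML_VULKAN` option name is an assumption based on ggml's usual CMake conventions):

```shell
# Check which Vulkan header version the system ships:
grep "VK_HEADER_VERSION " /usr/include/vulkan/vulkan_core.h

# If updating the headers is not an option, building without the Vulkan
# backend may be a workaround:
cmake -B build -DGGML_VULKAN=OFF
cmake --build build
```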
@jcharth commented on GitHub (Mar 31, 2026):

docker build -t ollama/ollama:m60 .
has frozen multiple times. I am trying again without Xorg running; I have 24 GB of RAM. I notice that the Vulkan libraries are old in Debian 12, so a normal cmake build fails because of missing functions. I tried workarounds with no luck. I might need to use a different distribution to compile Ollama, or simply to run it. I tried Debian 13 and did not like it much, but I might try it again. What Linux is best for compiling Ollama? Any free ones?
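To confirm whether the packaged Vulkan bits really are too old, one could check what Debian ships (package names are the usual Debian ones and may differ between releases):

```shell
# Show candidate/installed versions of the Vulkan development packages:
apt-cache policy libvulkan-dev vulkan-headers

# List everything Vulkan-related that is currently installed:
dpkg -l | grep -i vulkan
```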

@jcharth commented on GitHub (Apr 1, 2026):

I was able to compile Ollama on Debian 13 with NVIDIA support, using the non-free repo and the 12.4 CUDA toolkit. I did not fix the problem with my Tesla M60; I'll keep testing more things.

@jcharth commented on GitHub (Apr 10, 2026):

Got Ollama running on both GPUs, no need to compile. I had to change BIOS settings to disable CSM and enable Above 4G Decoding. I used gpumodeswitch to enable compute mode.
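After the BIOS changes and gpumodeswitch, a quick way to verify that both GPUs are visible and in the expected compute mode (this is a generic `nvidia-smi` check, not something from the thread):

```shell
# List each GPU's name and compute mode as CSV:
nvidia-smi --query-gpu=name,compute_mode --format=csv
```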


Reference: github-starred/ollama#56213