[GH-ISSUE #12842] Ollama 0.12.6 fails to find CUDA during build (fixed with workaround) #8508

Closed
opened 2026-04-12 21:12:08 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @abcbarryn on GitHub (Oct 29, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12842

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Ollama 0.11.2 builds fine with these settings on my system, but 0.12.6 does not.

gcc version 11.3.0 (SUSE Linux)
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

To build, I ran:

export CMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc
export INCLUDES="-I /usr/local/cuda/include"
export PATH="/usr/local/cuda/nvvm/bin:/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/lib64:/usr/lib64:/usr/local/cuda/lib64:/usr/local/cuda/targets/x86_64-linux/lib"
cmake -B build
cmake --build build
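
[Editor's note] As a side note, CMake's FindCUDAToolkit module can also be hinted at the toolkit location explicitly, without relying on PATH lookup. A minimal sketch, assuming the toolkit is installed at /usr/local/cuda (adjust the path for your system):

```shell
# Hint FindCUDAToolkit directly via CUDAToolkit_ROOT instead of PATH probing.
CUDA_HOME=/usr/local/cuda   # assumption: default install prefix
cmake -B build \
      -DCUDAToolkit_ROOT="$CUDA_HOME" \
      -DCMAKE_CUDA_COMPILER="$CUDA_HOME/bin/nvcc"
cmake --build build
```

This is a build-configuration sketch, not a confirmed fix for this specific regression.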

Relevant log output

-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- GGML_SYSTEM_ARCH: x86
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu-x64:
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX
-- x86 detected
-- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512
-- x86 detected
-- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI
-- x86 detected
-- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI
-- Unable to find cudart library.
-- Could NOT find CUDAToolkit (missing: CUDA_CUDART) (found version "11.8.89")
-- Unable to find cudart library.
-- Could NOT find CUDAToolkit (missing: CUDA_CUDART) (found version "11.8.89")
CMake Error at ml/backend/ggml/ggml/src/ggml-cuda/CMakeLists.txt:189 (message):
  CUDA Toolkit not found


-- Configuring incomplete, errors occurred!
gmake: Makefile: No such file or directory
gmake: *** No rule to make target 'Makefile'.  Stop.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.12.6

GiteaMirror added the linux, build, bug labels 2026-04-12 21:12:08 -05:00
Author
Owner

@abcbarryn commented on GitHub (Oct 30, 2025):

I tried 0.11.11 and got the same error, but 0.11.2 works.

Author
Owner

@abcbarryn commented on GitHub (Oct 30, 2025):

Adding /usr/local/cuda/lib64 to the executable PATH variable works around the problem:
export PATH="/usr/local/cuda/nvvm/bin:/usr/local/cuda/bin:/usr/local/cuda/lib64:$PATH"
But you shouldn't have to add a lib folder to the executable search path just so a library can be found.
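
[Editor's note] The workaround above can be verified before rebuilding by checking that the runtime library actually lives in the directory being prepended. A small diagnostic sketch, assuming the default /usr/local/cuda layout:

```shell
# Confirm libcudart is where we expect it, then prepend that dir to PATH
# as in the workaround. CUDA_HOME is an assumption, not an Ollama variable.
CUDA_HOME=/usr/local/cuda
ls "$CUDA_HOME/lib64"/libcudart.so* 2>/dev/null || echo "libcudart not found under $CUDA_HOME/lib64"
export PATH="$CUDA_HOME/nvvm/bin:$CUDA_HOME/bin:$CUDA_HOME/lib64:$PATH"
```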


Reference: github-starred/ollama#8508