[GH-ISSUE #5152] libcuda.so.1 is not bundled #49757

Closed
opened 2026-04-28 12:52:32 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @vt-alt on GitHub (Jun 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5152

What is the issue?

(Excuse me if I misinterpret the internal mechanics of how ollama works with llama.cpp.)
It seems that three libraries are bundled with the binary, but `libcuda.so.1` is not:

```
ollama-0.1.44$ ls -l ./llm/build/linux/*/cu*/bin/*
  70337839 Jun 19 10:53 ./llm/build/linux/x86_64/cuda_v12/bin/libcublas.so.12.gz
 341823554 Jun 19 10:53 ./llm/build/linux/x86_64/cuda_v12/bin/libcublasLt.so.12.gz
    201627 Jun 19 10:53 ./llm/build/linux/x86_64/cuda_v12/bin/libcudart.so.12.gz
  82213360 Jun 19 10:53 ./llm/build/linux/x86_64/cuda_v12/bin/ollama_llama_server
```
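The `.so.12.gz` entries above are gzip-compressed copies of the CUDA runtime libraries. A minimal sketch of the compress/extract round trip, using a stand-in file rather than the real `lib*.so.12.gz` names:

```shell
# Stand-in for one of the bundled libraries; the real files are the three
# lib*.so.12.gz entries in the listing above.
dir=$(mktemp -d)
echo "stub library contents" > "$dir/libexample.so.12"
gzip "$dir/libexample.so.12"        # what the build step produces (.gz)
gunzip "$dir/libexample.so.12.gz"   # what extraction undoes at install time
ls "$dir"                           # prints: libexample.so.12
```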

I ungzipped the last binary to show this:

```
ollama-0.1.44$ ldd ./llm/build/linux/x86_64/cuda_v12/bin/ollama_llama_server
        linux-vdso.so.1 (0x00007fff98046000)
        libcudart.so.12 => /lib64/libcudart.so.12 (0x00007f2f78200000)
        libcublas.so.12 => /lib64/libcublas.so.12 (0x00007f2f71a00000)
        libcuda.so.1 => /lib64/libcuda.so.1 (0x00007f2f6fe55000)         <-------<3-------
        libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f2f6fb59000)
        libm.so.6 => /lib64/libm.so.6 (0x00007f2f7855b000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f2f78536000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f2f6f971000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f2f7d492000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f2f7852f000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f2f7852a000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f2f78525000)
        libcublasLt.so.12 => /lib64/libcublasLt.so.12 (0x00007f2f4d000000)
```

As you can see, it still requires `libcuda.so.1`.

Is this intentional?

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.44

GiteaMirror added the bug label 2026-04-28 12:52:32 -05:00

@dhiltgen commented on GitHub (Jun 19, 2024):

libcuda.so is the driver library, and it must be bundled with the CUDA driver because they're tightly coupled. When you install the NVIDIA driver, this library is included in the packaging.
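A quick way to check whether the host's driver actually provides it (a sketch; `ldconfig -p` lists the glibc linker cache and may need the full `/sbin/ldconfig` path for non-root users):

```shell
# libcuda.so.1 ships with the NVIDIA driver package, not the CUDA toolkit,
# so it only appears in the linker cache once the driver is installed.
if ldconfig -p 2>/dev/null | grep -q 'libcuda\.so\.1'; then
    echo "libcuda.so.1: provided by the installed NVIDIA driver"
else
    echo "libcuda.so.1: not found - install the NVIDIA driver package"
fi
```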


@vt-alt commented on GitHub (Jun 19, 2024):

Thanks for the answer. Well, libcudart, libcublas, and libcublasLt are in the appropriate packages too, so why bundle them with ollama?


@dhiltgen commented on GitHub (Jun 19, 2024):

Given the way the API contracts work between the driver, the driver library, and the rest of the CUDA libraries, we get the best compatibility matrix if we carry the exact versions of the CUDA libraries we built against. That way we don't have to force users to install potentially multiple versions of the CUDA runtime libraries.
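That split can be read straight off the `ldd` output above: everything the bundle carries resolves from the bundle, and only the driver library must come from the host. A small sketch, with the sonames hard-coded from the listings in this issue:

```shell
# Sonames shipped as .gz files next to ollama_llama_server (from the ls
# listing) versus the CUDA-related sonames ldd reports.
bundled="libcudart.so.12 libcublas.so.12 libcublasLt.so.12"
deps="libcudart.so.12 libcublas.so.12 libcuda.so.1 libcublasLt.so.12"
for so in $deps; do
    case " $bundled " in
        *" $so "*) ;;                       # satisfied by the bundle
        *) echo "host-provided: $so" ;;     # must come from the driver install
    esac
done
```

This prints `host-provided: libcuda.so.1`, matching the dependency flagged in the `ldd` output.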


@vt-alt commented on GitHub (Jun 19, 2024):

I see. Thanks for the explanation.


Reference: github-starred/ollama#49757