[PR #10569] [MERGED] Remove cuda v11 to reduce download size #18548

Closed · opened 2026-04-16 06:38:53 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/10569
Author: @dhiltgen
Created: 5/5/2025
Status: Merged
Merged: 5/7/2025
Merged by: @dhiltgen

Base: main ← Head: size_v11


📝 Commits (1)

c58cce8 remove cuda v11

📊 Changes

11 files changed (+11 additions, -58 deletions)

📝 .github/workflows/release.yaml (+0 -6)
📝 .github/workflows/test.yaml (+3 -3)
📝 CMakePresets.json (+0 -13)
📝 Dockerfile (+1 -16)
📝 discover/cuda_common.go (+3 -0)
📝 discover/path.go (+1 -1)
📝 docs/gpu.md (+1 -1)
📝 docs/troubleshooting.md (+1 -1)
📝 llm/server.go (+1 -1)
📝 scripts/build_windows.ps1 (+0 -14)
📝 scripts/env.sh (+0 -2)

📄 Description

This reduces the size of our installer payloads by ~256M by dropping support for NVIDIA drivers older than Feb 2023. Hardware support is unchanged.

Behavior on an old driver:

```
PS C:\users\daniel> nvidia-smi.exe
Mon May  5 13:13:39 2025
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 522.25       Driver Version: 522.25       CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
...
```

In our server logs when running a model:

```
time=2025-05-05T13:13:42.926-07:00 level=WARN source=cuda_common.go:65 msg="old CUDA driver detected - please upgrade to a newer driver" version=11.8
time=2025-05-05T13:13:42.942-07:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-d6de3398-9932-6902-11ec-fee8e424c8a2 library=cuda variant=v11 compute=7.5 driver=11.8 name="NVIDIA GeForce RTX 2080 Ti" total="11.0 GiB" available="9.9 GiB"
...
time=2025-05-05T13:13:51.457-07:00 level=INFO source=sched.go:756 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\daniel\.ollama\models\blobs\sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff gpu=GPU-d6de3398-9932-6902-11ec-fee8e424c8a2 parallel=2 available=10623135744 required="3.7 GiB"
...
time=2025-05-05T13:13:51.702-07:00 level=INFO source=server.go:410 msg="starting llama server" cmd="C:\\Users\\daniel\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\daniel\\.ollama\\models\\blobs\\sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 8 --no-mmap --parallel 2 --port 50056"
...
time=2025-05-05T13:13:51.731-07:00 level=INFO source=runner.go:853 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\daniel\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version
load_backend: loaded CUDA backend from C:\Users\daniel\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-05-05T13:13:53.039-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
```

and inference ultimately runs on the CPU. Our `ollama ps` output still reports GPU usage, since it doesn't understand that the backend is falling back.
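
Not part of this change, but a possible workaround for that blind spot: scan the server log for the ggml failure line shown above. A minimal standalone sketch (the log path is passed in because its location varies by platform; on Windows it is typically under %LOCALAPPDATA%\Ollama):

```go
// fallbackcheck is an illustrative sketch (not part of this PR) that scans an
// Ollama server log for the CUDA init failure seen in the excerpt above, as a
// workaround for `ollama ps` not reflecting the CPU fallback.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: fallbackcheck <path to server log>")
		os.Exit(1)
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Exact message ggml emits when the driver is too old for the bundled
	// CUDA 12 runtime (see the log excerpt above).
	const marker = "CUDA driver version is insufficient for CUDA runtime version"

	scanner := bufio.NewScanner(f)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // log lines can be long
	for scanner.Scan() {
		if strings.Contains(scanner.Text(), marker) {
			fmt.Println("CUDA init failed; inference is likely falling back to CPU")
			return
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("no CUDA init failure found in this log")
}
```

Run as `go run fallbackcheck.go <path to server.log>`; if the marker is present, the runner is likely on the CPU regardless of what `ollama ps` shows.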

I also verified on a CUDA 12.1 driver (the oldest we can now support) and things worked properly on the GPU.
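
For reference, the WARN at cuda_common.go:65 in the logs above comes from discover/cuda_common.go, one of the files touched here (+3 lines). A minimal sketch of that kind of version gate, assuming the driver-reported CUDA version is compared against a 12.0 minimum; the constant names, threshold, and logging call below are illustrative assumptions, not the actual diff:

```go
package main

import (
	"fmt"
	"log/slog"
)

// Assumed minimum: the bundled backend is built against CUDA 12, so drivers
// reporting an older CUDA version get a warning (illustrative values only).
const (
	minDriverMajor = 12
	minDriverMinor = 0
)

// warnIfOldDriver mimics the kind of check that could emit the
// "old CUDA driver detected" WARN seen in the logs above.
func warnIfOldDriver(major, minor int) {
	if major < minDriverMajor || (major == minDriverMajor && minor < minDriverMinor) {
		slog.Warn("old CUDA driver detected - please upgrade to a newer driver",
			"version", fmt.Sprintf("%d.%d", major, minor))
	}
}

func main() {
	warnIfOldDriver(11, 8) // e.g. the 522.25 driver above reports CUDA 11.8
	warnIfOldDriver(12, 1) // oldest driver verified to work on the GPU
}
```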


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-16 06:38:53 -05:00

Reference: github-starred/ollama#18548