[PR #10569] Remove cuda v11 to reduce download size #13277

Closed
opened 2026-04-13 00:22:43 -05:00 by GiteaMirror · 0 comments

Original Pull Request: https://github.com/ollama/ollama/pull/10569

State: closed
Merged: Yes


This reduces the size of our installer payloads by ~256 MB by dropping support for NVIDIA drivers older than Feb 2023. Hardware support is unchanged.
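For illustration, here is a minimal Go sketch of the kind of minimum-version gate behind the warning shown in the logs below. The function names and the 12.0 threshold are assumptions for illustration, not the actual cuda_common.go code:

```go
package main

import (
	"fmt"
	"log/slog"
)

// cudaDriverMin is the oldest driver-reported CUDA version the bundled v12
// runtime is assumed to accept; 12.0 is illustrative, not the real cutoff.
var cudaDriverMin = [2]int{12, 0}

// checkCudaDriver reports whether the detected driver CUDA version is new
// enough, logging the same style of warning seen in the server logs below.
func checkCudaDriver(major, minor int) bool {
	if major < cudaDriverMin[0] ||
		(major == cudaDriverMin[0] && minor < cudaDriverMin[1]) {
		slog.Warn("old CUDA driver detected - please upgrade to a newer driver",
			"version", fmt.Sprintf("%d.%d", major, minor))
		return false
	}
	return true
}

func main() {
	fmt.Println(checkCudaDriver(11, 8)) // false: the 522.25 driver below reports 11.8
	fmt.Println(checkCudaDriver(12, 1)) // true: the verified working configuration
}
```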

Behavior on an old driver:

```
PS C:\users\daniel> nvidia-smi.exe
Mon May  5 13:13:39 2025
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 522.25       Driver Version: 522.25       CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
...
```
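The driver version shown above can also be read programmatically. Below is a minimal Go sketch that shells out to nvidia-smi using its standard `--query-gpu`/`--format` flags; this is not necessarily how Ollama itself discovers GPUs, just an illustration:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// driverVersion shells out to nvidia-smi and returns the installed driver
// version string (e.g. "522.25"). --query-gpu and --format are standard
// nvidia-smi flags; the surrounding code is illustrative.
func driverVersion() (string, error) {
	out, err := exec.Command("nvidia-smi",
		"--query-gpu=driver_version", "--format=csv,noheader").Output()
	if err != nil {
		return "", fmt.Errorf("nvidia-smi: %w", err)
	}
	// One line per GPU; take the first.
	return strings.TrimSpace(strings.SplitN(string(out), "\n", 2)[0]), nil
}

func main() {
	v, err := driverVersion()
	if err != nil {
		fmt.Println("no NVIDIA driver detected:", err)
		return
	}
	fmt.Println("driver version:", v)
}
```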

In our server logs when running a model:

```
time=2025-05-05T13:13:42.926-07:00 level=WARN source=cuda_common.go:65 msg="old CUDA driver detected - please upgrade to a newer driver" version=11.8
time=2025-05-05T13:13:42.942-07:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-d6de3398-9932-6902-11ec-fee8e424c8a2 library=cuda variant=v11 compute=7.5 driver=11.8 name="NVIDIA GeForce RTX 2080 Ti" total="11.0 GiB" available="9.9 GiB"
...
time=2025-05-05T13:13:51.457-07:00 level=INFO source=sched.go:756 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\daniel\.ollama\models\blobs\sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff gpu=GPU-d6de3398-9932-6902-11ec-fee8e424c8a2 parallel=2 available=10623135744 required="3.7 GiB"
...
time=2025-05-05T13:13:51.702-07:00 level=INFO source=server.go:410 msg="starting llama server" cmd="C:\\Users\\daniel\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\daniel\\.ollama\\models\\blobs\\sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 8 --no-mmap --parallel 2 --port 50056"
...
time=2025-05-05T13:13:51.731-07:00 level=INFO source=runner.go:853 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\daniel\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version
load_backend: loaded CUDA backend from C:\Users\daniel\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-05-05T13:13:53.039-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
```

and inference ultimately runs on the CPU. Our `ollama ps` output still reports GPU usage, since it doesn't detect that the backend has fallen back to CPU.
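To make the fallback explicit, here is a minimal Go sketch of the sequence the logs show. `loadBackend` and its error are hypothetical stand-ins for ggml's load_backend/ggml_cuda_init path, not real APIs:

```go
package main

import (
	"errors"
	"fmt"
)

// errDriverTooOld mirrors the ggml_cuda_init failure in the logs above.
var errDriverTooOld = errors.New("CUDA driver version is insufficient for CUDA runtime version")

// loadBackend is a hypothetical stand-in for ggml's load_backend /
// ggml_cuda_init path; only its failure mode matters here.
func loadBackend(name string) error {
	if name == "cuda_v12" {
		return errDriverTooOld // what a CUDA 11.8 driver produces against the v12 runtime
	}
	return nil // the CPU backend always initializes
}

func main() {
	backend := "cuda_v12"
	if err := loadBackend(backend); err != nil {
		fmt.Println("ggml_cuda_init failed:", err)
		backend = "cpu" // inference silently falls back; `ollama ps` still reports GPU
	}
	fmt.Println("running on backend:", backend)
}
```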

I also verified on a CUDA 12.1 driver (the oldest we can support), and everything ran properly on the GPU.
