[GH-ISSUE #12268] Explore ways to optimize installation size #70216

Open
opened 2026-05-04 20:40:43 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @dhiltgen on GitHub (Sep 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12268

Originally assigned to: @dhiltgen on GitHub.

With the recent addition of CUDA v13 to enable new GPUs, the bundle and installer sizes for Linux and Windows have increased. We should explore options to optimize the size and/or layout of the bundles, and improve the Linux and Windows installers to install only the components applicable to the GPU(s) present on the user's system — ideally downloading only those applicable components at install/upgrade time.
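One way the installer could pick components is by mapping each detected GPU's compute capability to the runtime bundle that supports it. The sketch below is purely illustrative — the bundle names, the selection helper, and the exact cutoff are assumptions, not Ollama's actual layout — though the 7.5 threshold reflects CUDA 13's documented minimum compute capability (Turing and newer):

```python
# Hypothetical sketch: choose which CUDA runtime bundle an installer
# should download, based on each GPU's compute capability.
# Bundle names ("cuda_v12"/"cuda_v13") are illustrative only.

def select_cuda_bundle(compute_capability: tuple[int, int]) -> str:
    """Return the runtime bundle name for one GPU.

    CUDA 13 requires compute capability >= 7.5 (Turing), so older
    cards still need the CUDA 12 runtime.
    """
    if compute_capability >= (7, 5):
        return "cuda_v13"  # newer GPUs: current runtime only
    return "cuda_v12"      # pre-Turing GPUs: legacy runtime


def bundles_to_install(gpus: list[tuple[int, int]]) -> set[str]:
    """Download only the bundles needed for the GPUs actually present."""
    return {select_cuda_bundle(cc) for cc in gpus}
```

On a machine with only an RTX 3060 (8.6) this would fetch just the v13 bundle; a mixed system with a GTX 1080 (6.1) alongside it would need both.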

GiteaMirror added the install and feature request labels 2026-05-04 20:40:43 -05:00

@inforithmics commented on GitHub (Sep 30, 2025):

What about changing the compressed format from zip to 7z (7-Zip)? I recompressed ollama-windows-amd64.zip to ollama-windows-amd64.7z, which reduced the size from 1.8 GB to 1.0 GB.

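Most of that gain comes from 7z's default LZMA codec, which handles large redundant binaries much better than zip's deflate. A small self-contained Python illustration (it assumes nothing about Ollama's actual archives) that shows the effect:

```python
import lzma
import random
import zlib

# Compare zip's deflate codec (zlib) with 7z's default LZMA codec.
# The sample is a 100 KB pseudo-random chunk repeated 4 times: the
# repeats lie farther apart than deflate's 32 KB window, so deflate
# cannot exploit them, while LZMA's much larger dictionary can --
# loosely mimicking bundles that ship many near-identical binaries.
chunk = random.Random(0).randbytes(100_000)
data = chunk * 4

deflate_size = len(zlib.compress(data, level=9))
lzma_size = len(lzma.compress(data, preset=9))

print(f"original: {len(data):>7} bytes")
print(f"deflate : {deflate_size:>7} bytes")
print(f"lzma    : {lzma_size:>7} bytes")
```

Here LZMA shrinks the data to roughly the size of one chunk, while deflate barely compresses it at all — the same dictionary-size advantage that shows up when recompressing a multi-gigabyte GPU bundle.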

@zsyo commented on GitHub (Nov 1, 2025):

Can the basic version load my locally installed CUDA libraries to enable GPU acceleration?

Reference: github-starred/ollama#70216