[PR #6400] [CLOSED] Add arm64 cuda jetpack variants #74396

Closed
opened 2026-05-05 06:27:16 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/6400
Author: @dhiltgen
Created: 8/17/2024
Status: Closed

Base: main ← Head: jetson_reapply


📝 Commits (1)

  • c51a892 Add arm64 cuda jetpack variants

📊 Changes

1 file changed (+49 additions, -0 deletions)


📝 Dockerfile (+49 -0)

📄 Description

This adds two new variants for the arm64 build to support NVIDIA Jetson systems based on JetPack 5 and 6. JetPack 4 is too old to be built with our toolchain (its older CUDA requires an old GCC that can't build llama.cpp) and will remain unsupported.
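The PR's actual Dockerfile changes are not reproduced in this mirror, but the per-generation build stages it describes could be sketched roughly as follows. The base image tags, stage names, and CUDA architecture lists below are assumptions for illustration, not the PR's actual contents (NVIDIA's L4T JetPack images on nvcr.io track L4T r35.x for JetPack 5 and r36.x for JetPack 6):

```dockerfile
# Hypothetical sketch of arm64 JetPack build variants; not the PR's Dockerfile.
# Each stage compiles and links against that JetPack generation's own CUDA
# libraries, since the SBSA libraries can't be swapped in at runtime.

FROM --platform=linux/arm64 nvcr.io/nvidia/l4t-jetpack:r35.4.1 AS cuda-build-jetpack5
# JetPack 5 covers Xavier (sm_72) and Orin (sm_87) iGPUs
ENV CMAKE_CUDA_ARCHITECTURES="72;87"
# ... build the CUDA runner here and stage its libraries under a jetpack5 dir ...

FROM --platform=linux/arm64 nvcr.io/nvidia/l4t-jetpack:r36.2.0 AS cuda-build-jetpack6
# JetPack 6 ships CUDA 12; Orin (sm_87) is the supported iGPU
ENV CMAKE_CUDA_ARCHITECTURES="87"
# ... build here and stage under a jetpack6 dir, then COPY both into the final image ...
```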

The SBSA discrete-GPU CUDA libraries we bundle in the existing arm64 build are incompatible with Jetson iGPU systems. Unfortunately, swapping them out at runtime isn't viable given the way nvcc compilation/linking works, so we need to actually build and link against those specific CUDA libraries and bundle them.
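Since each variant is built against different CUDA libraries, the runtime still has to pick the matching bundle. How ollama does this is not shown in the PR; a minimal sketch, assuming Jetson systems are identified by the presence of `/etc/nv_tegra_release` (whose first line encodes the L4T major release: r35.x is JetPack 5, r36.x is JetPack 6), might look like:

```shell
#!/bin/sh
# Hypothetical variant selection; not ollama's actual detection code.
# Jetson (iGPU) systems ship /etc/nv_tegra_release; discrete-GPU arm64
# servers (SBSA) do not, so its absence selects the existing SBSA bundle.

detect_cuda_variant() {
    if [ -r /etc/nv_tegra_release ]; then
        # First line looks like: "# R36 (release), REVISION: 2.0, ..."
        rel=$(sed -n 's/^# R\([0-9]*\).*/\1/p' /etc/nv_tegra_release)
        if [ "$rel" -ge 36 ] 2>/dev/null; then
            echo "cuda_jetpack6"
        else
            echo "cuda_jetpack5"
        fi
    else
        echo "cuda_sbsa"
    fi
}

detect_cuda_variant
```

On a non-Jetson machine this prints `cuda_sbsa`, falling back to the discrete-GPU libraries already bundled today.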

Fixes #2408
Fixes #4693
Fixes #5100
Fixes #4861

Resulting artifacts:

% ls -lh dist/ollama-linux-arm64.tgz
-rw-r--r--  1 daniel  staff   2.1G Aug 17 10:47 dist/ollama-linux-arm64.tgz
% ls -lh dist/linux-arm64/bin/ollama
-rwxr-xr-x  1 daniel  staff   868M Aug 17 10:47 dist/linux-arm64/bin/ollama

Draft until #5049 merges


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-05-05 06:27:16 -05:00

Reference: github-starred/ollama#74396