[PR #518] [CLOSED] amd64 linux build runner #15464

Closed
opened 2026-04-16 05:00:09 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/518
Author: @BruceMacD
Created: 9/12/2023
Status: Closed

Base: main ← Head: brucemacd/release-linux


📝 Commits (7)

  • e95bc89 linux gpu support
  • e7b3247 Update generate_linux.sh
  • 659d612 add cuda docker image (#488)
  • 1180b0d enable packaging multiple cuda versions
  • c01504a use nvcc cuda version if available
  • ce5cc39 cpu builds
  • 6a6a451 amd64 linux build runner

📊 Changes

6 files changed (+206 additions, -36 deletions)

View changed files

➕ .github/workflows/release-linux.yaml (+103 -0)
📝 llm/ggml.go (+1 -7)
📝 llm/gguf.go (+1 -7)
📝 llm/llama.cpp/generate_linux.go (+6 -4)
➕ llm/llama.cpp/generate_linux.sh (+13 -0)
📝 llm/llama.go (+82 -18)

📄 Description

Add automation that creates a single ollama binary for amd64 Linux builds.

Limitations:

  • Requires glibc 2.29 (the version that ships with Ubuntu 20.04). Ideally we would build on an Ubuntu 16.04 or 18.04 runner instead to maximize glibc compatibility, but that would require a custom runner. glibc is the interface Linux programs use to access kernel functionality, so an end user can't realistically upgrade it without upgrading their OS.
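The glibc floor of a build can be checked after the fact; a minimal sketch, assuming GNU binutils is available on the runner (the `./ollama` path is just a placeholder for the built binary):

```shell
# Sketch only: report the newest glibc symbol version a binary requires.
# A binary that needs GLIBC_2.29 will not load on distros shipping older glibc.
max_glibc() {
  # `objdump -T` lists dynamic symbols tagged with GLIBC_x.y version strings;
  # version-sort them and keep the highest.
  grep -oE 'GLIBC_[0-9]+\.[0-9]+' | sort -Vu | tail -n1
}

# usage: objdump -T ./ollama | max_glibc
```

Running this against a binary built on Ubuntu 20.04 would be expected to print `GLIBC_2.29` or lower.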

Future work:

  • Ideally I'd install both versions of nvcc on one runner and swap between them. I tried this, but hit issues with the wrong CUDA version being referenced during builds.
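One hypothetical way to do the swap is to select a toolkit per build via environment variables, assuming the toolkits live at the conventional versioned paths like /usr/local/cuda-11.8 (none of these paths come from the PR itself):

```shell
# Hypothetical helper: point the build at one specific CUDA toolkit so the
# wrong nvcc/libraries can't leak in from another install.
use_cuda() {
  export CUDA_HOME="/usr/local/cuda-$1"
  # Prepend so this version's nvcc wins PATH lookup over any other install.
  export PATH="$CUDA_HOME/bin:$PATH"
  export LD_LIBRARY_PATH="$CUDA_HOME/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
}

# usage: use_cuda 11.8 && nvcc --version
```

The fragile part this doesn't solve is build systems that cache or hard-code an absolute toolkit path, which matches the "wrong CUDA version referenced" symptom described above.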

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-16 05:00:09 -05:00

Reference: github-starred/ollama#15464