[PR #8539] [MERGED] next build #75017

opened 2026-05-05 07:22:01 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/8539
Author: @mxyng
Created: 1/22/2025
Status: Merged
Merged: 1/29/2025
Merged by: @mxyng

Base: `main` ← Head: `mxyng/next-build`


📝 Commits (10+)

  • 144f63e next build
  • 09320e8 add build to .dockerignore
  • 988de7b test: only build one arch
  • 303248d add build to .gitignore
  • 214eb1a fix ccache path
  • 25bf875 filter amdgpu targets
  • 861e17a only filter if autodetecting
  • 6a75786 Don't clobber gpu list for default runner
  • 4ec1765 explicitly set CXX compiler for HIP
  • 5a9c704 Update build_windows.ps1

📊 Changes

542 files changed (+5778 additions, -11451 deletions)

View changed files

📝 .dockerignore (+3 -1)
📝 .gitattributes (+9 -0)
📝 .github/workflows/release.yaml (+266 -631)
📝 .github/workflows/test.yaml (+88 -264)
📝 .gitignore (+3 -2)
➕ CMakeLists.txt (+112 -0)
➕ CMakePresets.json (+110 -0)
📝 Dockerfile (+117 -190)
➖ Makefile (+0 -103)
➕ Makefile.sync (+56 -0)
📝 discover/amd_common.go (+4 -9)
📝 discover/amd_linux.go (+2 -4)
📝 discover/amd_windows.go (+3 -7)
📝 discover/gpu.go (+24 -64)
📝 discover/gpu_darwin.go (+0 -3)
➕ discover/path.go (+53 -0)
📝 discover/types.go (+1 -2)
📝 docs/development.md (+63 -108)
📝 envconfig/config.go (+0 -9)
📝 go.mod (+2 -1)

...and 80 more files

📄 Description

split from #7913

this change updates the directory structure, splitting `llama.cpp` and `ggml` into separate, reusable packages. as a result, the build has also changed significantly: it now uses `cmake` to build dependencies as shared objects, which are dynamically loaded when necessary.

current (work in progress) build instructions:

  • `go build .` to build ollama. this includes a default, basic cpu runner
  • `cmake --preset Default; cmake --build --preset Default` to configure and build the default targets. this will configure and build cuda and rocm if those are available
  • `cmake --preset CPU; cmake --build --preset CPU` to configure and build only CPU variants
  • `cmake --preset CUDA; cmake --build --preset CUDA` to configure and build only CUDA
  • `cmake --preset ROCm; cmake --build --preset ROCm` to configure and build only ROCm
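The `--preset` names above come from a `CMakePresets.json` at the repository root. As a rough illustration of how such a file pairs configure and build presets, a minimal sketch might look like the following (the preset names match the PR; the generator, binary dir, and `GGML_*` cache variables here are assumptions, not the PR's actual contents):

```json
{
  "version": 6,
  "configurePresets": [
    {
      "name": "Default",
      "generator": "Ninja",
      "binaryDir": "${sourceDir}/build"
    },
    {
      "name": "CPU",
      "inherits": "Default",
      "cacheVariables": { "GGML_CUDA": "OFF", "GGML_HIP": "OFF" }
    }
  ],
  "buildPresets": [
    { "name": "Default", "configurePreset": "Default" },
    { "name": "CPU", "configurePreset": "CPU" }
  ]
}
```

With a file along these lines, `cmake --preset CPU` selects the configure preset and `cmake --build --preset CPU` drives the matching build preset, which is exactly the two-step invocation shown in each bullet above.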

TODO (all completed):

  • [x] Windows CI
  • [x] Build docs
  • [x] Update CMake output directory

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-05-05 07:22:01 -05:00

Reference: github-starred/ollama#75017