[PR #7137] [MERGED] llama: add compiler tags for cpu features #59019

opened 2026-04-29 13:54:09 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/7137
Author: @dhiltgen
Created: 10/8/2024
Status: Merged
Merged: 10/17/2024
Merged by: @dhiltgen

Base: main ← Head: go_server_gpu_vector_flags


📝 Commits (1)

  • dcafde2 llama: add compiler tags for cpu features

📊 Changes

6 files changed (+49 additions, -25 deletions)

View changed files

📝 llama/Makefile (+1 -0)
📝 llama/llama.go (+39 -18)
📝 llama/make/Makefile.default (+6 -4)
📝 llama/make/common-defs.make (+0 -2)
📝 llama/make/gpu.make (+1 -1)
📝 scripts/env.sh (+2 -0)

📄 Description

Replaces #7009 now on main

Support local builds with customized CPU flags for both the CPU runner and the GPU runners.

Some users want no vector flags in the GPU runners; others want nearly all the vector extensions enabled. Each runner we add to the official build adds significant overhead (binary size and build time), so this enhancement makes it much easier for users to build their own customized version when our default runners (CPU: [none, avx, avx2]; GPU: [avx]) don't address their needs.

This PR does not wire up runtime discovery of the requirements, so for now it is only suitable for adding additional vector flags to GPU runners. I'll follow up later with support for GPU runners without any vector flags, along with docs.
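To illustrate the general idea behind compiler tags for CPU features, here is a hypothetical shell sketch of how a feature list might be mapped to both Go build tags and matching C compiler flags. The feature names mirror the defaults quoted above (avx, avx2); the `-m` flag mapping and variable names are illustrative assumptions, not copied from the PR's actual Makefiles.

```shell
#!/bin/sh
# Hypothetical sketch: turn a list of requested CPU features into
# (a) a comma-joined Go build-tag string and (b) gcc/clang-style
# vector flags. Variable names here are illustrative.
FEATURES="avx avx2"

TAGS=""
CFLAGS=""
for f in $FEATURES; do
    TAGS="${TAGS:+$TAGS,}$f"   # comma-joined Go build tags, e.g. avx,avx2
    CFLAGS="$CFLAGS -m$f"      # e.g. -mavx -mavx2 for gcc/clang
done

echo "go build -tags '$TAGS' ."
echo "CGO_CFLAGS:$CFLAGS"
```

A user wanting a leaner or richer runner would change only `FEATURES`; the build-tag side selects which Go files compile in, while the C flags control vectorized codegen in the native code.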


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-29 13:54:09 -05:00

Reference: github-starred/ollama#59019