[PR #12245] [MERGED] Update GGML to b6646 - drop MacOS v12 and v13 support #13747

Closed
opened 2026-04-13 00:35:00 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12245
Author: @dhiltgen
Created: 9/10/2025
Status: Merged
Merged: 10/2/2025
Merged by: @dhiltgen

Base: main ← Head: bump


📝 Commits (1)

78412cd — Update GGML to b6646
📊 Changes

326 files changed (+30731 additions, -20740 deletions)


📝 CMakeLists.txt (+3 -3)
📝 CMakePresets.json (+1 -1)
📝 Makefile.sync (+1 -1)
📝 docs/gpu.md (+6 -8)
📝 docs/macos.md (+1 -1)
📝 integration/context_test.go (+1 -1)
📝 llama/build-info.cpp (+1 -1)
📝 llama/llama.cpp/common/common.cpp (+86 -24)
📝 llama/llama.cpp/common/common.h (+74 -21)
📝 llama/llama.cpp/common/json-schema-to-grammar.cpp (+28 -7)
📝 llama/llama.cpp/common/log.cpp (+53 -2)
📝 llama/llama.cpp/common/log.h (+10 -4)
📝 llama/llama.cpp/common/sampling.cpp (+24 -2)
📝 llama/llama.cpp/common/sampling.h (+3 -1)
📝 llama/llama.cpp/include/llama.h (+73 -128)
📝 llama/llama.cpp/src/llama-adapter.cpp (+101 -4)
📝 llama/llama.cpp/src/llama-adapter.h (+6 -0)
📝 llama/llama.cpp/src/llama-arch.cpp (+163 -12)
📝 llama/llama.cpp/src/llama-arch.h (+23 -0)
📝 llama/llama.cpp/src/llama-batch.cpp (+1 -1)

...and 80 more files

📄 Description

Performance improvements on Metal require dropping support for older macOS versions.
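Dropping macOS 12 and 13 implies a minimum supported major version of 14. A minimal sketch of such a version gate is shown below; the `supported` helper and its placement are illustrative assumptions, not ollama's actual detection code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minSupportedMajor assumes macOS 14 is the oldest release still
// supported once v12 and v13 are dropped.
const minSupportedMajor = 14

// supported reports whether a macOS product version string
// (e.g. "13.6.1") meets the minimum supported major version.
func supported(version string) bool {
	major, err := strconv.Atoi(strings.SplitN(version, ".", 2)[0])
	if err != nil {
		return false
	}
	return major >= minSupportedMajor
}

func main() {
	for _, v := range []string{"12.7", "13.6.1", "14.0", "15.1"} {
		fmt.Printf("%s supported=%v\n", v, supported(v))
	}
}
```

In a real build the check would more likely live in CMake (via CMAKE_OSX_DEPLOYMENT_TARGET) or in the platform-detection path at startup.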

This also drops AMD gfx900 and gfx906 GPU support.
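The dropped GPU architectures can likewise be expressed as a small denylist check. The gfx900/gfx906 names come from the PR description; the `gpuSupported` helper is a hypothetical sketch, not ollama's actual GPU discovery logic:

```go
package main

import "fmt"

// droppedGFX lists the AMD architectures no longer supported after this
// update, using ROCm gfx target naming.
var droppedGFX = map[string]bool{
	"gfx900": true,
	"gfx906": true,
}

// gpuSupported reports whether a given gfx target remains supported.
func gpuSupported(gfx string) bool {
	return !droppedGFX[gfx]
}

func main() {
	for _, g := range []string{"gfx900", "gfx906", "gfx1030"} {
		fmt.Printf("%s supported=%v\n", g, gpuSupported(g))
	}
}
```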


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-13 00:35:00 -05:00

Reference: github-starred/ollama#13747