[PR #13832] [MERGED] Update vendored llama.cpp to b7847 #76705

Closed
opened 2026-05-05 09:21:32 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/13832
Author: @jmorganca
Created: 2026-01-22
Status: Merged
Merged: 2026-02-03
Merged by: @jmorganca

Base: main ← Head: llama-update


📝 Commits (3)

  • 9f62f63 llama: update to b7847
  • 4fccafd test: improve integration test reliability
  • a00d201 Llama update powerpc sync (#13972)

📊 Changes

241 files changed (+21226 additions, -5029 deletions)

Changed files

📝 .github/workflows/release.yaml (+1 -1)
📝 CMakePresets.json (+16 -1)
📝 Makefile.sync (+1 -1)
📝 integration/embed_test.go (+12 -2)
📝 integration/tools_test.go (+8 -3)
📝 llama/README.md (+17 -14)
📝 llama/build-info.cpp (+1 -1)
📝 llama/llama.cpp/common/common.cpp (+44 -25)
📝 llama/llama.cpp/common/common.h (+59 -34)
📝 llama/llama.cpp/common/sampling.cpp (+160 -69)
📝 llama/llama.cpp/common/sampling.h (+9 -4)
📝 llama/llama.cpp/include/llama-cpp.h (+3 -1)
📝 llama/llama.cpp/include/llama.h (+136 -17)
📝 llama/llama.cpp/src/llama-adapter.cpp (+5 -2)
📝 llama/llama.cpp/src/llama-adapter.h (+4 -0)
📝 llama/llama.cpp/src/llama-arch.cpp (+114 -1)
📝 llama/llama.cpp/src/llama-arch.h (+9 -0)
📝 llama/llama.cpp/src/llama-chat.cpp (+31 -0)
📝 llama/llama.cpp/src/llama-chat.h (+2 -0)
📝 llama/llama.cpp/src/llama-context.cpp (+847 -177)

...and 80 more files

📄 Description

Updates llama.cpp by 361 commits from ec98e2002 to a5bb8ba4c50257437630c136210396810741bbf7.

GGML updates

Metal

  • Enable flash attention for MLA heads
  • MoE kernel specialization for ne20=5
  • Extended ggml_pool_1d support (sketched below)
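
A minimal sketch of the op named above, assuming the public ggml C API: ggml_pool_1d(ctx, a, op, k0, s0, p0) takes kernel size, stride, and padding, and the kernel/stride values below are purely illustrative. Running the graph through ggml_graph_compute_with_ctx assumes the ggml-cpu.h header of recent llama.cpp trees.

```c
#include "ggml.h"
#include "ggml-cpu.h"

int main(void) {
    // small scratch context; the size is arbitrary for this sketch
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // 64-element f32 input (left uninitialized; this only demonstrates
    // graph construction), average-pooled with kernel 4, stride 4, pad 0
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 64);
    struct ggml_tensor * p = ggml_pool_1d(ctx, a, GGML_OP_POOL_AVG,
                                          /*k0=*/4, /*s0=*/4, /*p0=*/0);

    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, p);                       // p: 16 elements
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/1);  // CPU execution
    ggml_free(ctx);
    return 0;
}
```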

Vulkan

  • Flash attention GQA/split_k for small batches
  • Optimized mul_mat_vec_id for small n values
  • AMD GPU matmul optimization with Coopmat
  • Large mat_mul support via 64-bit indexing
  • Intel Xe2/Xe3 warptile tuning
  • SSM scan optimization
  • Intel fp16 mmq bug workaround
  • buffer_from_host_ptr support (probed in the snippet below)
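
buffer_from_host_ptr surfaces through the generic backend-device capability flags, so whether a given build's devices (Vulkan included) advertise it can be probed as below. The ggml_backend_dev_* calls and the props.caps.buffer_from_host_ptr field follow ggml-backend.h in recent llama.cpp trees; treat the exact field layout as an assumption at this tag.

```c
#include <stdio.h>
#include "ggml-backend.h"

int main(void) {
    ggml_backend_load_all(); // pick up dynamically loaded backends, if any
    // walk every registered device and report whether it claims
    // buffer_from_host_ptr (wrapping an existing host allocation as a
    // backend buffer without a copy)
    for (size_t i = 0; i < ggml_backend_dev_count(); i++) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        struct ggml_backend_dev_props props;
        ggml_backend_dev_get_props(dev, &props);
        printf("%-24s buffer_from_host_ptr=%s\n",
               props.name, props.caps.buffer_from_host_ptr ? "yes" : "no");
    }
    return 0;
}
```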

CUDA

  • SSM scan warp-level reduction optimization
  • CUDA graph usage refactoring (see the note after this list)
  • Register spill alignment fix for flash attention (FA)
  • Build fixes for CUDA 12.8 and older CCCL
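
When bisecting behavior changes from the graph refactor, recent llama.cpp trees gate CUDA graph capture behind the GGML_CUDA_DISABLE_GRAPHS environment variable; that variable name is an assumption worth verifying at this tag. A sketch of forcing per-op kernel launches from code:

```c
#include <stdlib.h>

int main(void) {
    // assumed env toggle: must be set before the CUDA backend initializes;
    // setenv is POSIX (use _putenv_s on Windows)
    setenv("GGML_CUDA_DISABLE_GRAPHS", "1", /*overwrite=*/1);
    // ... initialize ggml/llama.cpp and run as usual ...
    return 0;
}
```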

CPU

  • Optimized ggml_vec_dot_bf16 for Power9 (reference semantics sketched after this list)
  • AVX512BF16 build fix
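
The Power9 work changes only the speed of the internal ggml_vec_dot_bf16 kernel, not what it computes. Below is a scalar reference with the same semantics, built on the public bf16 conversion helpers in ggml.h; dot_bf16 is a hypothetical stand-in for the optimized kernel, not a library function.

```c
#include <stdio.h>
#include "ggml.h"

// hypothetical scalar reference: accumulate in f32 after widening each
// bf16 operand, which is what the optimized kernel does much faster
static float dot_bf16(const ggml_bf16_t * x, const ggml_bf16_t * y, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++) {
        s += ggml_bf16_to_fp32(x[i]) * ggml_bf16_to_fp32(y[i]);
    }
    return s;
}

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {0.5f, 0.5f, 0.5f, 0.5f};
    ggml_bf16_t xa[4], xb[4];
    ggml_fp32_to_bf16_row(a, xa, 4); // public ggml.h conversion helper
    ggml_fp32_to_bf16_row(b, xb, 4);
    printf("dot = %f\n", dot_bf16(xa, xb, 4)); // ~5.0 within bf16 rounding
    return 0;
}
```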

General ggml

  • KV-cache KQ mask construction optimization
  • Added ggml_build_forward_select

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-05-05 09:21:33 -05:00

Reference: github-starred/ollama#76705