[PR #13563] [CLOSED] Update to llama.cpp b7540 #76567

Closed
opened 2026-05-05 09:12:00 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/13563
Author: @inforithmics
Created: 12/25/2025
Status: Closed

Base: main ← Head: UpdateSeedOss


📝 Commits (7)

  • c3365f4 update to 408616adbdae2494b8bf23e048ef059fb681a474
  • 8d572f5 update llama.cpp
  • 0bb8a79 update patches
  • af4ac52 Merge remote-tracking branch 'upstream/main' into UpdateSeedOss
  • aaad893 reapply patches
  • 21174f1 romove cpp.orig
  • 23b9bf8 sync patches

📊 Changes

87 files changed (+9479 additions, -803 deletions)

View changed files

📝 Makefile.sync (+1 -1)
📝 llama/build-info.cpp (+1 -1)
📝 llama/llama.cpp/common/common.cpp (+3 -1)
📝 llama/llama.cpp/common/common.h (+6 -2)
📝 llama/llama.cpp/common/sampling.cpp (+51 -37)
📝 llama/llama.cpp/common/sampling.h (+6 -3)
📝 llama/llama.cpp/src/llama-arch.cpp (+42 -1)
📝 llama/llama.cpp/src/llama-arch.h (+5 -0)
📝 llama/llama.cpp/src/llama-context.cpp (+16 -17)
📝 llama/llama.cpp/src/llama-hparams.h (+4 -3)
📝 llama/llama.cpp/src/llama-mmap.cpp (+123 -28)
📝 llama/llama.cpp/src/llama-mmap.h (+5 -1)
📝 llama/llama.cpp/src/llama-model-loader.cpp (+79 -13)
📝 llama/llama.cpp/src/llama-model-loader.h (+2 -0)
📝 llama/llama.cpp/src/llama-model.cpp (+150 -13)
📝 llama/llama.cpp/src/llama-model.h (+4 -0)
📝 llama/llama.cpp/src/llama-sampling.cpp (+16 -0)
📝 llama/llama.cpp/src/llama-vocab.cpp (+9 -1)
📝 llama/llama.cpp/src/llama.cpp (+66 -53)
📝 llama/llama.cpp/src/models/llama.cpp (+19 -6)

...and 67 more files

📄 Description

Some Vulkan and CUDA performance improvements and bug fixes.
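The first commit pins the vendored llama.cpp tree to upstream commit 408616adbdae2494b8bf23e048ef059fb681a474; the diff shows this pin lives in `Makefile.sync`. A minimal sketch of such a version bump (the variable name `LLAMACPP_BASE_COMMIT` and the file contents here are illustrative assumptions, not the actual `Makefile.sync`):

```shell
#!/bin/sh
set -e

# Create an example sync file with an old pinned commit
# (stand-in for the real Makefile.sync, whose format may differ).
cat > Makefile.sync.example <<'EOF'
LLAMACPP_BASE_COMMIT=0123456789abcdef0123456789abcdef01234567
EOF

# The upstream llama.cpp commit this PR updates to.
NEW_COMMIT=408616adbdae2494b8bf23e048ef059fb681a474

# Rewrite the pin in place (GNU sed shown; BSD sed needs `-i ''`).
sed -i "s/^LLAMACPP_BASE_COMMIT=.*/LLAMACPP_BASE_COMMIT=${NEW_COMMIT}/" Makefile.sync.example

# Show the updated pin.
grep '^LLAMACPP_BASE_COMMIT=' Makefile.sync.example
```

After a bump like this, the remaining commits in the PR follow the usual vendoring pattern visible in the commit list: re-sync the tree, reapply the local patches, and clean up leftover `.orig` files.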


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-05-05 09:12:00 -05:00

Reference: github-starred/ollama#76567