[PR #8101] [MERGED] llama: vendor commit ba1cb19c #12622

Closed
opened 2026-04-13 00:04:59 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/8101
Author: @jmorganca
Created: 12/14/2024
Status: Merged
Merged: 12/14/2024
Merged by: @jmorganca

Base: main ← Head: jmorganca/vendor-ba1cb19c


📝 Commits (5)

  • 8ae07f4 llama: vendor commit ba1cb19c
  • 752e116 make: use c++17
  • 3f6575d fix missing arg in static assert on windows
  • 406e477 make: don't build ggml-aarch64.c for cuda
  • f22f32d make: use gnu++17 for hipcc build

📊 Changes

273 files changed (+3194 additions, -1900 deletions)

View changed files

📝 llama/amx.cpp (+94 -70)
📝 llama/amx.h (+2 -14)
📝 llama/build-info.cpp (+1 -1)
📝 llama/clip.cpp (+206 -26)
📝 llama/clip.h (+9 -3)
📝 llama/common.cpp (+3 -41)
📝 llama/common.h (+12 -7)
➖ llama/ggml-aarch64.c (+0 -155)
📝 llama/ggml-alloc.c (+1 -1)
📝 llama/ggml-alloc.h (+1 -1)
📝 llama/ggml-backend-impl.h (+1 -1)
📝 llama/ggml-backend-reg.cpp (+48 -15)
📝 llama/ggml-backend.cpp (+1 -1)
📝 llama/ggml-backend.h (+2 -1)
📝 llama/ggml-blas.cpp (+1 -1)
📝 llama/ggml-blas.h (+1 -1)
📝 llama/ggml-common.h (+43 -49)
📝 llama/ggml-cpp.h (+1 -1)
📝 llama/ggml-cpu-aarch64.cpp (+591 -152)
📝 llama/ggml-cpu-aarch64.h (+3 -27)

...and 80 more files

📄 Description

No description provided


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-13 00:04:59 -05:00

Reference: github-starred/ollama#12622