[PR #7124] [CLOSED] llama: Decouple patching script from submodule #17588

opened 2026-04-16 06:07:36 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/7124
Author: @dhiltgen
Created: 10/8/2024
Status: Closed

Base: jmorganca/llama ← Head: go_server_patching


📝 Commits (2)

  • 1796ce8 llama: Decouple patching script from submodule
  • 2e0f71f Run new sync script

📊 Changes

237 files changed (+368 additions, -343 deletions)

View changed files

📝 llama/build-info.cpp (+2 -2)
📝 llama/clip.cpp (+1 -1)
📝 llama/clip.h (+1 -1)
📝 llama/common.cpp (+1 -1)
📝 llama/common.h (+1 -1)
📝 llama/ggml-aarch64.c (+1 -1)
📝 llama/ggml-aarch64.h (+1 -1)
📝 llama/ggml-alloc.c (+1 -1)
📝 llama/ggml-alloc.h (+1 -1)
📝 llama/ggml-backend-impl.h (+1 -1)
📝 llama/ggml-backend.c (+1 -1)
📝 llama/ggml-backend.h (+1 -1)
📝 llama/ggml-blas.cpp (+1 -1)
📝 llama/ggml-blas.h (+1 -1)
📝 llama/ggml-common.h (+1 -1)
📝 llama/ggml-cuda.cu (+1 -1)
📝 llama/ggml-cuda.h (+1 -1)
📝 llama/ggml-cuda/acc.cu (+1 -1)
📝 llama/ggml-cuda/acc.cuh (+1 -1)
📝 llama/ggml-cuda/arange.cu (+1 -1)

...and 80 more files

📄 Description

Replaced by #7139 on main


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-16 06:07:36 -05:00

Reference: github-starred/ollama#17588