[PR #14864] [CLOSED] ggml: update to 0beb8db3a0 #77175

Closed
opened 2026-05-05 09:51:49 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/14864
Author: @dhiltgen
Created: 3/15/2026
Status: Closed

Base: main ← Head: ggml-bump


📝 Commits (3)

- a4c0137 ggml: update to 0beb8db3a0
- 2641450 Allow more time for discovery
- 2647195 Performance fixes

📊 Changes

361 files changed (+41688 additions, -13339 deletions)

View changed files

📝 .github/workflows/release.yaml (+1 -1)
📝 CMakePresets.json (+16 -1)
📝 Makefile.sync (+38 -3)
📝 discover/runner.go (+7 -13)
📝 integration/embed_test.go (+12 -2)
📝 integration/tools_test.go (+8 -3)
📝 kvcache/causal.go (+4 -1)
📝 llama/README.md (+26 -24)
📝 llama/build-info.cpp (+1 -1)
📝 llama/llama.cpp/.rsync-filter (+2 -0)
📝 llama/llama.cpp/LICENSE (+1 -1)
📝 llama/llama.cpp/common/common.cpp (+140 -163)
📝 llama/llama.cpp/common/common.go (+1 -1)
📝 llama/llama.cpp/common/common.h (+180 -85)
📝 llama/llama.cpp/common/json-schema-to-grammar.cpp (+86 -65)
➕ llama/llama.cpp/common/peg-parser.cpp (+2040 -0)
➕ llama/llama.cpp/common/peg-parser.h (+517 -0)
📝 llama/llama.cpp/common/sampling.cpp (+160 -69)
📝 llama/llama.cpp/common/sampling.h (+9 -4)
➕ llama/llama.cpp/common/unicode.cpp (+108 -0)

...and 80 more files

📄 Description

New Patches (GGML patch set)

  1. 0033-ggml-metal-solve_tri.patch — New patch adding solve_tri Metal kernel
    support
  2. 0034-ggml-metal-guard-mul_mat_id-map0-and-add-ne20-22-spe.patch — New
    patch for Metal mul_mat_id safety guards and ne20=22 specialization
  3. 0035-ggml-cuda-add-GGML_CUDA_GRAPH_NODES_ONLY-fast-path-f.patch — New
    patch adding env-var-gated simplified CUDA graph validation to fix ~20%
    inference regression caused by expanded upstream property checking (PRs
    #19165, #19186, #19383)

Substantially Modified Patches

  1. 0032-ggml-enable-MLA-flash-attention-for-GLM-4.7-flash.patch —
    Significantly shrunk because upstream absorbed most of the GLM-4.7 MLA flash
    attention work; our patch now only carries the remaining delta
  2. 0018-ggml-Add-batch-size-hint.patch and 0020-ggml-No-alloc-mode.patch —
    Large diffs due to upstream refactoring of the areas they touch; required
    non-trivial conflict resolution

Ollama Codebase Changes

  1. ml/backend/ggml/ggml.go — Context arena buffer pool: Pre-allocates 2
    reusable arena buffers so tensor metadata pointers remain stable across
    batches, enabling CUDA/Metal graph cache hits. NewContext() uses the pool;
    NewContextSize() does not. Pool buffers returned in Close().
  2. kvcache/causal.go — KV cache shift now uses NewContextSize(512) instead of
    NewContext() to avoid stealing pool buffers from the main inference
    pipeline.
  3. llm/server.go — Sets GGML_CUDA_GRAPH_NODES_ONLY=1 in the subprocess
    environment, but only for the ollamarunner (ollamaEngine == true), not
    llamarunner.
  4. discover/runner.go — Bootstrap discovery timeout increased from 30s (90s
    Windows) to 120s universally. New GGML compiles Metal shaders from source on
    first launch, taking 30+ seconds on high-core-count Apple Silicon.
  5. model/models/glm4moelite/model.go — Fixed Concat ordering for MLA
    key/query assembly (kvLoraRank || rope, not rope || kvLoraRank), and fixed
    Shift() to only apply RoPE to the rope portion of the key, not the entire
    concatenated key.
  6. llama/llama.go — Updated LoRA adapter API from llama_set_adapter_lora to
    llama_set_adapters_lora (upstream API change).
  7. llama/sampling_ext.cpp — Updated llama_model_loader constructor call to
    match new upstream signature.
  8. CMakePresets.json + scripts/build_windows.ps1 +
    .github/workflows/release.yaml — New "CUDA 13 Windows" preset with reduced
    architecture set to avoid MSVC template compilation issues. Removed arch 87
    from CUDA 13 Linux preset.
  9. Makefile.sync — Added rebase-patches target and improved help text for
    the patch management workflow.
  10. integration/embed_test.go + integration/tools_test.go — Added soft
    timeout mechanism to skip remaining models when time is running low, and
    increased hard timeouts.
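The arena buffer pool described in item 1 can be sketched as a two-slot free list: two buffers are allocated up front and recycled, so anything placed in them keeps a stable address across batches. The types and names below are illustrative, not the actual ml/backend/ggml/ggml.go implementation:

```go
package main

import "fmt"

// arenaPool holds a fixed set of pre-allocated arena buffers. Reusing
// the same two buffers keeps tensor-metadata pointers stable across
// batches, which is what allows CUDA/Metal graph cache hits.
type arenaPool struct {
	free [][]byte
}

// newArenaPool pre-allocates two reusable buffers of the given size.
func newArenaPool(size int) *arenaPool {
	return &arenaPool{free: [][]byte{
		make([]byte, size),
		make([]byte, size),
	}}
}

// get hands out a pooled buffer, or nil if both are already in use.
func (p *arenaPool) get() []byte {
	if len(p.free) == 0 {
		return nil
	}
	buf := p.free[len(p.free)-1]
	p.free = p.free[:len(p.free)-1]
	return buf
}

// put returns a buffer to the pool, as Close() does in the real code.
func (p *arenaPool) put(buf []byte) {
	p.free = append(p.free, buf)
}

func main() {
	p := newArenaPool(1 << 20)
	buf := p.get()
	fmt.Printf("got pooled arena buffer of %d bytes\n", len(buf))
	p.put(buf)
}
```

In this scheme a NewContext() would draw from the pool and return its buffer in Close(), while NewContextSize() would allocate a fresh buffer, which is why the KV cache shift in item 2 switched to NewContextSize(512): it must not starve the inference pipeline of pooled buffers.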

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-05-05 09:51:49 -05:00

Reference: github-starred/ollama#77175