[PR #12656] [MERGED] Remove unnecessary MacOs 13 and lower Patches #76198

Closed
opened 2026-05-05 08:42:35 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12656
Author: @inforithmics
Created: 10/16/2025
Status: Merged
Merged: 11/6/2025
Merged by: @dhiltgen

Base: main ← Head: RemoveUnnecessaryMacOs13Patch


📝 Commits (7)

  • 4cb5f75 Remove unnecessary macos 13 Patch
  • 73f2421 Remove unnecessary MacOs Version Guard patch
  • 7a5e6e7 rename patchesw
  • 513d7b4 Merge remote-tracking branch 'upstream/main' into RemoveUnnecessaryMacOs13Patch
  • f9efb6e Merge remote-tracking branch 'upstream/main' into RemoveUnnecessaryMacOs13Patch
  • d9d8482 remove again macos13 patch
  • fde8253 rename files

📊 Changes

15 files changed (+1 additions, -64 deletions)

View changed files

➖ llama/patches/0018-BF16-macos-version-guard.patch (+0 -28)
📝 llama/patches/0018-ggml-Add-batch-size-hint.patch (+0 -0)
📝 llama/patches/0019-fix-mtmd-audio.cpp-build-on-windows.patch (+0 -0)
➖ llama/patches/0020-Disable-ggml-blas-on-macos-v13-and-older.patch (+0 -25)
📝 llama/patches/0020-ggml-No-alloc-mode.patch (+0 -0)
📝 llama/patches/0021-decode-disable-output_all.patch (+0 -0)
📝 llama/patches/0022-ggml-Enable-resetting-backend-devices.patch (+0 -0)
📝 llama/patches/0023-harden-uncaught-exception-registration.patch (+0 -0)
📝 llama/patches/0024-GPU-discovery-enhancements.patch (+0 -0)
📝 llama/patches/0025-NVML-fallback-for-unified-memory-GPUs.patch (+0 -0)
📝 llama/patches/0026-report-LoadLibrary-failures.patch (+0 -0)
📝 llama/patches/0027-interleave-multi-rope.patch (+0 -0)
📝 llama/patches/0028-Add-memory-detection-using-DXGI-PDH.patch (+0 -0)
📝 ml/backend/ggml/ggml/src/ggml-blas/ggml-blas.cpp (+0 -5)
📝 ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-context.m (+1 -6)

📄 Description

While applying the patches, I noticed these macOS 13 patches, which are probably no longer necessary because macOS 13 is no longer supported by Ollama. See https://github.com/ollama/ollama/releases/tag/v0.12.5

  1. Patch 0020-Disable-ggml-blas-on-macos-v13-and-older.patch
  2. Patch 0018-BF16-macos-version-guard.patch

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-05-05 08:42:35 -05:00

Reference: github-starred/ollama#76198