[PR #13408] [MERGED] feat: llama.cpp bump (17f7f4) for SSM performance improvements #14194

Closed
opened 2026-04-13 00:48:02 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/13408
Author: @gabe-l-hart
Created: 12/10/2025
Status: Merged
Merged: 12/10/2025
Merged by: @dhiltgen

Base: main ← Head: LlamaCPPMetalSSMImprovements


📝 Commits (7)

  • 35196a2 feat: Bump llama.cpp to the latest master (17f7f4b)
  • 904e70c feat: Update patches 1-4
  • ce3113f fix: Update patches 5-12
  • a380057 feat: Update patches 13-18
  • ce98194 feat: Update patch 20
  • 7510aea feat: Update patches 21-31
  • 6dd1bba feat: Sync vendored code

📊 Changes

115 files changed (+5136 additions, -2545 deletions)

View changed files

📝 Makefile.sync (+1 -1)
📝 llama/build-info.cpp (+1 -1)
📝 llama/llama.cpp/common/common.cpp (+70 -7)
📝 llama/llama.cpp/common/common.h (+23 -5)
📝 llama/llama.cpp/common/json-schema-to-grammar.cpp (+1 -1)
📝 llama/llama.cpp/common/log.cpp (+18 -27)
📝 llama/llama.cpp/common/log.h (+19 -12)
📝 llama/llama.cpp/src/llama-arch.cpp (+30 -1)
📝 llama/llama.cpp/src/llama-arch.h (+3 -0)
📝 llama/llama.cpp/src/llama-context.cpp (+6 -6)
📝 llama/llama.cpp/src/llama-context.h (+1 -1)
📝 llama/llama.cpp/src/llama-grammar.cpp (+232 -33)
📝 llama/llama.cpp/src/llama-grammar.h (+20 -1)
📝 llama/llama.cpp/src/llama-graph.cpp (+4 -7)
📝 llama/llama.cpp/src/llama-hparams.h (+2 -2)
📝 llama/llama.cpp/src/llama-impl.h (+1 -1)
📝 llama/llama.cpp/src/llama-mmap.cpp (+1 -1)
📝 llama/llama.cpp/src/llama-model.cpp (+74 -14)
📝 llama/llama.cpp/src/llama-quant.cpp (+0 -29)
📝 llama/llama.cpp/src/llama-vocab.cpp (+1 -2)

...and 80 more files

📄 Description

Description

This PR bumps the vendored copy of llama.cpp to 17f7f4 (https://github.com/ggml-org/llama.cpp/tree/17f7f4). It brings in several key PRs with performance improvements for recurrent models (mamba, mamba2, granite4, qwen3next, falcon-h, nemotron-h, etc.); a short conceptual sketch of the SSM recurrence these kernels compute follows the list:

  • https://github.com/ggml-org/llama.cpp/pull/17876: 086a63e3a metal: SSM kernel improvements (#17876)
  • https://github.com/ggml-org/llama.cpp/pull/17873: b63509262 Add DIAG for CUDA (#17873)
  • https://github.com/ggml-org/llama.cpp/pull/17875: 0cdce38a9 CUDA: fix FP16 overflow in tile FA kernel (#17875)
  • https://github.com/ggml-org/llama.cpp/pull/17811: 1d2a1ab73 model : support Rnj-1 (#17811)
  • https://github.com/ggml-org/llama.cpp/pull/17867: c8554b66e graph : use fill instead of scale_bias in grouped expert selection (#17867)
  • https://github.com/ggml-org/llama.cpp/pull/17851: 51e0c2d91 cuda : add FILL op support (#17851)
  • https://github.com/ggml-org/llama.cpp/pull/17703: 5814b4dce cuda: optimize SOLVE_TRI using registers and FMAF (#17703)
  • https://github.com/ggml-org/llama.cpp/pull/17744: 4d3726278 model: add llama 4 scaling for mistral-large (deepseek arch) (#17744)
  • https://github.com/ggml-org/llama.cpp/pull/16907: 08f9d3cc1 Vulkan: improve mul_mat_vec_iq1_m (#16907)
  • https://github.com/ggml-org/llama.cpp/pull/17730: dbc15a796 convert: support Mistral 3 Large MoE (#17730)
  • https://github.com/ggml-org/llama.cpp/pull/17781: c6c5e8597 vulkan: support solve_tri with larger N/K values (#17781)
  • https://github.com/ggml-org/llama.cpp/pull/17685: e15cd06a9 vulkan : support conv-2d with large output size (#17685)
  • https://github.com/ggml-org/llama.cpp/pull/17764: fd57b24c0 ggml webgpu: unary op suppport, code refactoring, ops support (#17764)
  • https://github.com/ggml-org/llama.cpp/pull/17746: e95d0bc8f CUDA: fix FA VKQ accumulator overflow (#17746)
  • https://github.com/ggml-org/llama.cpp/pull/17584: 96fe9badf Add support for CUMSUM and TRI for CUDA. (#17584)
  • https://github.com/ggml-org/llama.cpp/pull/16623: bde188d60 metal: TRI, FILL, EXPM1, SOFTPLUS (#16623)
  • https://github.com/ggml-org/llama.cpp/pull/17587: 746f9ee88 Override SSM_A op for Qwen3 Next to reduce splits (#17587)
  • https://github.com/ggml-org/llama.cpp/pull/17644: cd3c11890 model: support Ministral3 (#17644)
  • https://github.com/ggml-org/llama.cpp/pull/17577: 2ba719519 model: LFM2-VL fixes (#17577)
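For readers unfamiliar with why the SSM entries above matter: recurrent architectures such as mamba, mamba2, and granite4 replace attention's growing KV cache with a fixed-size state that is updated once per token by a selective state-space (SSM) scan, so the per-step scan kernel largely determines decode speed on Metal/CUDA. The C++ sketch below is a minimal single-sequence, single-head reference of that recurrence, for illustration only; it is not the vendored kernel code, and the tensor names (x, dt, A, B, C, D) follow the common Mamba formulation rather than ggml's internal layouts.

```cpp
// Minimal, single-sequence, single-head sketch of the selective state-space
// (SSM) scan that recurrent models perform once per token. Purely illustrative
// reference code: the real Metal/CUDA kernels batch this across sequences and
// heads and fuse the discretization with the output projection.
#include <cmath>
#include <cstdio>
#include <vector>

// x, dt, D: [d_inner]; A: [d_inner][d_state]; B, C: [d_state];
// h: [d_inner][d_state] carried state, updated in place; returns y: [d_inner].
std::vector<float> ssm_scan_step(const std::vector<float>& x,
                                 const std::vector<float>& dt,
                                 const std::vector<std::vector<float>>& A,
                                 const std::vector<float>& B,
                                 const std::vector<float>& C,
                                 const std::vector<float>& D,
                                 std::vector<std::vector<float>>& h) {
    const size_t d_inner = x.size();
    const size_t d_state = B.size();
    std::vector<float> y(d_inner, 0.0f);
    for (size_t d = 0; d < d_inner; ++d) {
        float acc = 0.0f;
        for (size_t n = 0; n < d_state; ++n) {
            // Discretize and update the state: h = exp(dt*A) * h + dt*B*x,
            // then project the state with C to produce this channel's output.
            const float dA = std::exp(dt[d] * A[d][n]);
            h[d][n] = h[d][n] * dA + dt[d] * B[n] * x[d];
            acc += C[n] * h[d][n];
        }
        y[d] = acc + D[d] * x[d];  // skip connection
    }
    return y;
}

int main() {
    const size_t d_inner = 4, d_state = 3;
    std::vector<float> x(d_inner, 0.5f), dt(d_inner, 0.1f), D(d_inner, 1.0f);
    std::vector<std::vector<float>> A(d_inner, std::vector<float>(d_state, -1.0f));
    std::vector<float> B(d_state, 0.2f), C(d_state, 0.3f);
    std::vector<std::vector<float>> h(d_inner, std::vector<float>(d_state, 0.0f));
    // The state h stays the same size no matter how long the sequence gets,
    // which is why this per-step scan dominates decode speed for these models.
    for (int t = 0; t < 3; ++t) {
        auto y = ssm_scan_step(x, dt, A, B, C, D, h);
        std::printf("t=%d y[0]=%f\n", t, y[0]);
    }
    return 0;
}
```

Production kernels parallelize this across channels and heads and keep the state resident in fast memory, which is roughly where kernel-level changes such as #17876 and #17587 above apply.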


Full PR set

  • https://github.com/ggml-org/llama.cpp/pull/17891: 17f7f4baa CUDA: fix unpadded strides in MMA FA kernel (#17891)
  • https://github.com/ggml-org/llama.cpp/pull/17889: 9e79b0116 convert: allow using quantized Mistral weight (#17889)
  • https://github.com/ggml-org/llama.cpp/pull/17838: 2e9eab80c fix softmax for iGPU (#17838)
  • https://github.com/ggml-org/llama.cpp/pull/17713: 2fbe3b7bb common : add parser for ministral/mistral large 3/devstral 2 (#17713)
  • https://github.com/ggml-org/llama.cpp/pull/17890: 63391852b docs : update cpu and cuda ops (#17890)
  • https://github.com/ggml-org/llama.cpp/pull/17876: 086a63e3a metal: SSM kernel improvements (#17876)
  • https://github.com/ggml-org/llama.cpp/pull/17873: b63509262 Add DIAG for CUDA (#17873)
  • https://github.com/ggml-org/llama.cpp/pull/17886: 48f47565a docs: clarify that CPU support should be first (#17886)
  • https://github.com/ggml-org/llama.cpp/pull/17869: 02e409a5b ggml : Provide macos-specific backtrace printing to avoid terminal death (#17869)
  • https://github.com/ggml-org/llama.cpp/pull/17882: 6b82eb788 metal : print node names for debugging (#17882)
  • https://github.com/ggml-org/llama.cpp/pull/17870: 86a3f0fad ggml : allow fill node alloc inplace (#17870)
  • https://github.com/ggml-org/llama.cpp/pull/17877: 63908b631 cmake: fix Mach-O current version number (#17877)
  • https://github.com/ggml-org/llama.cpp/pull/12652: 42b12b560 model : nit, DeepSeek V1 MoE is 16B and GigaChat is 20B (#12652)
  • https://github.com/ggml-org/llama.cpp/pull/17836: 4e842d512 console: allow using arrow left/right, home/end keys and history mode (#17836)
  • https://github.com/ggml-org/llama.cpp/pull/17543: ca709e427 CANN: add support for partial RoPE and Vision mode (#17543)
  • https://github.com/ggml-org/llama.cpp/pull/17875: 0cdce38a9 CUDA: fix FP16 overflow in tile FA kernel (#17875)
  • https://github.com/ggml-org/llama.cpp/pull/17816: e39502e74 llama : add token matching support to llama-grammar (#17816)
  • https://github.com/ggml-org/llama.cpp/pull/17811: 1d2a1ab73 model : support Rnj-1 (#17811)
  • https://github.com/ggml-org/llama.cpp/pull/17867: c8554b66e graph : use fill instead of scale_bias in grouped expert selection (#17867)
  • https://github.com/ggml-org/llama.cpp/pull/17863: 2fa51c19b model-conversion : add token ids to prompt token output [no ci] (#17863)
  • https://github.com/ggml-org/llama.cpp/pull/17835: 951520ddb server: delegate result_state creation to server_task (#17835)
  • https://github.com/ggml-org/llama.cpp/pull/17855: 68522c678 ci : support bfloat16 SYCL release package (#17855)
  • https://github.com/ggml-org/llama.cpp/pull/17808: f896d2c34 server: improve speed of speculative decoding (#17808)
  • https://github.com/ggml-org/llama.cpp/pull/17794: e4e9c4329 Make graph_max_nodes vary by ubatch size (#17794)
  • https://github.com/ggml-org/llama.cpp/pull/17376: 636fc17a3 Fix Kimi-K2 tool-call parsing issues (#17376)
  • https://github.com/ggml-org/llama.cpp/pull/17851: 51e0c2d91 cuda : add FILL op support (#17851)
  • https://github.com/ggml-org/llama.cpp/pull/17760: 37a4f6324 server : add development documentation (#17760)
  • https://github.com/ggml-org/llama.cpp/pull/17858: 2bc96931d server : make cache_reuse configurable per request (#17858)
  • https://github.com/ggml-org/llama.cpp/pull/17703: 5814b4dce cuda: optimize SOLVE_TRI using registers and FMAF (#17703)
  • https://github.com/ggml-org/llama.cpp/pull/17784: 79d61896d ggml-cpu: add ggml_thread_cpu_relax with Zihintpause support (#17784)
  • https://github.com/ggml-org/llama.cpp/pull/17744: 4d3726278 model: add llama 4 scaling for mistral-large (deepseek arch) (#17744)
  • https://github.com/ggml-org/llama.cpp/pull/16907: 08f9d3cc1 Vulkan: improve mul_mat_vec_iq1_m (#16907)
  • https://github.com/ggml-org/llama.cpp/pull/17839: 0a540f9ab ci : add windows-cuda 13.1 release (#17839)
  • https://github.com/ggml-org/llama.cpp/pull/17827: 22577583a common : change --color to accept on/off/auto, default to auto (#17827)
  • https://github.com/ggml-org/llama.cpp/pull/17780: d9e03db1e sycl: add missing BF16 conversion support for Intel oneAPI (#17780)
  • https://github.com/ggml-org/llama.cpp/pull/17672: db9783738 vulkan: perf_logger improvements (#17672)
  • https://github.com/ggml-org/llama.cpp/pull/17690: 017761daf ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690)
  • https://github.com/ggml-org/llama.cpp/pull/17775: c42712b05 server: support multiple generations from one prompt (OAI "n" option) (#17775)
  • https://github.com/ggml-org/llama.cpp/pull/16985: 09c7c50e6 ggml : add circular tiling support to pad, for Vulkan, CUDA, and CPU (used for making seamless textures) (#16985)
  • https://github.com/ggml-org/llama.cpp/pull/17817: f334b7949 HIP: fix RDNA3 FP16/BF16 matrix multiplication (#17817)
  • https://github.com/ggml-org/llama.cpp/pull/17806: a28e3c756 webui: Stop generation from chat sidebar (#17806)
  • https://github.com/ggml-org/llama.cpp/pull/17804: e31b5c55c webui: Fix context available value in Multi-model Router mode (#17804)
  • https://github.com/ggml-org/llama.cpp/pull/17275: 21f24f27a webui: Per-conversation system message with UI displaying, edition & branching (#17275)
  • https://github.com/ggml-org/llama.cpp/pull/17653: 7b43f5575 ggml : improve error handling for search path existence checks (#17653)
  • https://github.com/ggml-org/llama.cpp/pull/17788: 444f00b0e llama : remove quantization sanity check (#17788)
  • https://github.com/ggml-org/llama.cpp/pull/17711: 2960eb297 vulkan: Use one row per workgroup for f32 mmv (#17711)
  • https://github.com/ggml-org/llama.cpp/pull/17730: dbc15a796 convert: support Mistral 3 Large MoE (#17730)
  • https://github.com/ggml-org/llama.cpp/pull/17781: c6c5e8597 vulkan: support solve_tri with larger N/K values (#17781)
  • https://github.com/ggml-org/llama.cpp/pull/17803: 8e5f4987b contrib : stale PRs (#17803)
  • https://github.com/ggml-org/llama.cpp/pull/17799: 8ce774a10 metal : fix build (#17799)
  • https://github.com/ggml-org/llama.cpp/pull/17637: 67788f684 vulkan: Replace deprecated VK_EXT_validation_features (#17637)
  • https://github.com/ggml-org/llama.cpp/pull/17541: d8c0a7b08 vulkan: Fix mismatch in TOPK_MOE unit test (#17541)
  • https://github.com/ggml-org/llama.cpp/pull/17701: 933414c0b vulkan: add more num_blocks instantiations in rms_norm (#17701)
  • https://github.com/ggml-org/llama.cpp/pull/17659: a0f3897d5 vulkan: fix top_k bug when there are ties in the input (#17659)
  • https://github.com/ggml-org/llama.cpp/pull/17685: e15cd06a9 vulkan : support conv-2d with large output size (#17685)
  • https://github.com/ggml-org/llama.cpp/pull/17764: fd57b24c0 ggml webgpu: unary op suppport, code refactoring, ops support (#17764)
  • https://github.com/ggml-org/llama.cpp/pull/17675: 6ab0d6496 vulkan: enable mmvq for q2_k on NVIDIA (#17675)
  • https://github.com/ggml-org/llama.cpp/pull/17624: 93bb92664 vulkan: set all memory allocations to high priority (#17624)
  • https://github.com/ggml-org/llama.cpp/pull/17116: 8160b38a5 rpc : fix alloc size logic (#17116)
  • https://github.com/ggml-org/llama.cpp/pull/17766: c41bde6fb metal : add residency sets keep-alive heartbeat (#17766)
  • https://github.com/ggml-org/llama.cpp/pull/17792: 6016d0bd4 HIP : fix RDNA4 build (#17792)
  • https://github.com/ggml-org/llama.cpp/pull/17786: 1be97831e fix: prevent segfault in tokenizer on highly repetitive input (#17786)
  • https://github.com/ggml-org/llama.cpp/pull/17790: a6cfc212e ci : fix winget workflow (#17790)
  • https://github.com/ggml-org/llama.cpp/pull/16999: 3a0d10533 Q4/Q8 Tiled Gemm Optimization. (#16999)
  • https://github.com/ggml-org/llama.cpp/pull/17789: 664898967 Add pwilkin to CODEOWNERS for chat files (#17789)
  • https://github.com/ggml-org/llama.cpp/pull/17746: e95d0bc8f CUDA: fix FA VKQ accumulator overflow (#17746)
  • https://github.com/ggml-org/llama.cpp/pull/17576: 668ed7657 HIP: enable WMMA-MMQ INT kernels for RDNA 3 (#17576)
  • https://github.com/ggml-org/llama.cpp/pull/17773: 03d9a77b8 ci : transform release binary root dir in tar to llama-bXXXX (#17773)
  • https://github.com/ggml-org/llama.cpp/pull/17768: 3143a755c docs : update ops.md (Metal, BLAS) (#17768)
  • https://github.com/ggml-org/llama.cpp/pull/17584: 96fe9badf Add support for CUMSUM and TRI for CUDA. (#17584)
  • https://github.com/ggml-org/llama.cpp/pull/16623: bde188d60 metal: TRI, FILL, EXPM1, SOFTPLUS (#16623)
  • https://github.com/ggml-org/llama.cpp/pull/17734: 9d0229967 server: strip content-length header on proxy (#17734)
  • https://github.com/ggml-org/llama.cpp/pull/17740: c4c10bfb8 server: move msg diffs tracking to HTTP thread (#17740)
  • https://github.com/ggml-org/llama.cpp/pull/17756: 817d743cc examples : add missing code block end marker [no ci] (#17756)
  • https://github.com/ggml-org/llama.cpp/pull/17755: bd4ef1347 common : skip model validation when --help is requested (#17755)
  • https://github.com/ggml-org/llama.cpp/pull/17728: 87a2084c4 ggml-cpu : remove asserts always evaluating to false (#17728)
  • https://github.com/ggml-org/llama.cpp/pull/17749: 3659aa28e convert: use existing local chat_template if mistral-format model has one. (#17749)
  • https://github.com/ggml-org/llama.cpp/pull/17423: 2a73f81f8 cmake : simplify build info detection using standard variables (#17423)
  • https://github.com/ggml-org/llama.cpp/pull/17753: 7dba049b0 ci : disable ggml-ci-x64-amd-* (#17753)
  • https://github.com/ggml-org/llama.cpp/pull/17738: 83c117152 common: use native MultiByteToWideChar (#17738)
  • https://github.com/ggml-org/llama.cpp/pull/17739: 0d1324856 metal : use params per pipeline instance (#17739)
  • https://github.com/ggml-org/llama.cpp/pull/17721: a67ef0f47 llama : fix sanity checks during quantization (#17721)
  • https://github.com/ggml-org/llama.cpp/pull/17736: ef75a89fd build : move _WIN32_WINNT definition to headers (#17736)
  • https://github.com/ggml-org/llama.cpp/pull/17708: d8b5cdc4f build: enable parallel builds in msbuild using MTT (#17708)
  • https://github.com/ggml-org/llama.cpp/pull/17650: dea9ba27c ggml-cpu: remove duplicate conditional check 'iid' (#17650)
  • https://github.com/ggml-org/llama.cpp/pull/17670: c6d1a00aa Add a couple of file types to the text section (#17670)
  • https://github.com/ggml-org/llama.cpp/pull/17712: 424c57945 convert : support latest mistral-common (fix conversion with --mistral-format) (#17712)
  • https://github.com/ggml-org/llama.cpp/pull/17689: e9f948346 Use OpenAI-compatible `/v1/models` endpoint by default (#17689)
  • https://github.com/ggml-org/llama.cpp/pull/17445: 41c5e02f4 webui: Fix zero pasteLongTextToFileLen to disable conversion being overridden (#17445)
  • https://github.com/ggml-org/llama.cpp/pull/17505: 2e1c9cd81 CUDA: generalized (mma) FA, add Volta support (#17505)
  • https://github.com/ggml-org/llama.cpp/pull/17729: 190c4838b chat : reserve memory in compute_diffs and improve naming (#17729)
  • https://github.com/ggml-org/llama.cpp/pull/17704: e7c2cf135 server: add router multi-model tests (#17704) (#17722)
  • https://github.com/ggml-org/llama.cpp/pull/17735: 125749104 server : fix bad fmt, size() is a size_type (#17735)
  • https://github.com/ggml-org/llama.cpp/pull/17727: 083e18b11 cmake: explicitly link against crypt32 on non-MSVC Windows builds (#17727)
  • https://github.com/ggml-org/llama.cpp/pull/17731: 3d94e967a metal : fix data race in pipeline library (#17731)
  • https://github.com/ggml-org/llama.cpp/pull/17724: 7feb0a100 ci : remove the build of openeuler-cann in release (#17724)
  • https://github.com/ggml-org/llama.cpp/pull/17136: 0a8026e76 common : introduce composable PEG parser combinators for chat parsing (#17136)
  • https://github.com/ggml-org/llama.cpp/pull/17698: 5ceed6242 server: fix duplicate HTTP headers in multiple models mode (#17698)
  • https://github.com/ggml-org/llama.cpp/pull/17184: 7ca5991d2 ggml webgpu: add support for emscripten builds (#17184)
  • https://github.com/ggml-org/llama.cpp/pull/17719: b3e3060f4 ci : move release details to the top visible by default (#17719)
  • https://github.com/ggml-org/llama.cpp/pull/17649: 37adc9c6b ggml, llama : use defaulted constructors/destructors (#17649)
  • https://github.com/ggml-org/llama.cpp/pull/17688: 16cc3c606 build: document how to compile with Vulkan using Debian/Ubuntu packages (#17688)
  • https://github.com/ggml-org/llama.cpp/pull/17697: 13628d8bd server: add --media-path for local media files (#17697)
  • https://github.com/ggml-org/llama.cpp/pull/17695: a96283adc mtmd: fix --no-warmup (#17695)
  • https://github.com/ggml-org/llama.cpp/pull/16682: 4eba8d945 ci : RVV1.0 builds with tests (#16682)
  • https://github.com/ggml-org/llama.cpp/pull/17623: 61bde8e21 vulkan: Reduce temporary memory usage for TOP_K (#17623)
  • https://github.com/ggml-org/llama.cpp/pull/17682: e251e5ebb cmake : add utf8 compilation options for msvc (#17682)
  • https://github.com/ggml-org/llama.cpp/pull/17572: c4357dcc3 Server: Change Invalid Schema from Server Error (500) to User Error (400) (#17572)
  • https://github.com/ggml-org/llama.cpp/pull/17474: e148380c7 ggml : use svcntb() for SVE vector length detection (#17474)
  • https://github.com/ggml-org/llama.cpp/pull/17563: a2b0fe8d3 CANN: Disable Ger operator of OUT_PROD on 310p device (#17563)
  • https://github.com/ggml-org/llama.cpp/pull/17612: 7f3a72a8e ggml : remove redundant n_copies check when setting input/output (#17612)
  • https://github.com/ggml-org/llama.cpp/pull/17658: b9a37717b codeowners : remove ericcurtin (#17658)
  • https://github.com/ggml-org/llama.cpp/pull/17497: f3a9674ae llama : fix signed comparison warning on FreeBSD (#17497)
  • https://github.com/ggml-org/llama.cpp/pull/17686: 2c453c6c7 convert: add error message for mistral3 quantized weight (#17686)
  • https://github.com/ggml-org/llama.cpp/pull/17668: 5d6bd842e server: remove default "gpt-3.5-turbo" model name (#17668)
  • https://github.com/ggml-org/llama.cpp/pull/17679: fd3abe849 server: fixing naming conflict res_error in server-models.cpp (#17679)
  • https://github.com/ggml-org/llama.cpp/pull/17669: 682e6658b server: explicitly set exec path when create new instance (#17669)
  • https://github.com/ggml-org/llama.cpp/pull/17465: 4574f2949 ci : skip winget update when not in ggml-org (#17465)
  • https://github.com/ggml-org/llama.cpp/pull/17683: ab6726eef ggml : add fallback definition for HWCAP2_SVE2 (#17683)
  • https://github.com/ggml-org/llama.cpp/pull/17663: cee92af55 Add context info to server error (#17663)
  • https://github.com/ggml-org/llama.cpp/pull/17639: ed3208992 ggml-cuda: reorder only relevant nodes (#17639)
  • https://github.com/ggml-org/llama.cpp/pull/17299: 7b6d74536 release: fix duplicate libs, store symbolic links (#17299)
  • https://github.com/ggml-org/llama.cpp/pull/17573: 98bd9ab1e enhance argsort for UT (#17573)
  • https://github.com/ggml-org/llama.cpp/pull/17587: 746f9ee88 Override SSM_A op for Qwen3 Next to reduce splits (#17587)
  • https://github.com/ggml-org/llama.cpp/pull/17661: 9810cb824 ops.md: update vulkan support (#17661)
  • https://github.com/ggml-org/llama.cpp/pull/17652: ecf74a841 mtmd: add mtmd_context_params::warmup option (#17652)
  • https://github.com/ggml-org/llama.cpp/pull/17665: 00c361fe5 fix: llama arch implementation (#17665)
  • https://github.com/ggml-org/llama.cpp/pull/17470: ec18edfcb server: introduce API for serving / loading / unloading multiple models (#17470)
  • https://github.com/ggml-org/llama.cpp/pull/17630: 773340973 common: improve verbosity level definitions (#17630)
  • https://github.com/ggml-org/llama.cpp/pull/17644: cd3c11890 model: support Ministral3 (#17644)
  • https://github.com/ggml-org/llama.cpp/pull/17619: 649495c9d metal : add FA head size 48 (#17619)
  • https://github.com/ggml-org/llama.cpp/pull/17617: 90c72a614 ggml : extend the GGML_SCHED_NO_REALLOC debug logic of the scheduler (#17617)
  • https://github.com/ggml-org/llama.cpp/pull/17633: 6eea66691 llama-graph: avoid expand_forward for fusion (#17633)
  • https://github.com/ggml-org/llama.cpp/pull/17625: ff90508d6 contributing: update guidelines for AI-generated code (#17625)
  • https://github.com/ggml-org/llama.cpp/pull/17552: 0a4aeb927 cmake : add option to build and link LibreSSL (#17552)
  • https://github.com/ggml-org/llama.cpp/pull/17577: 2ba719519 model: LFM2-VL fixes (#17577)

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
GiteaMirror added the pull-request label 2026-04-13 00:48:02 -05:00

Reference: github-starred/ollama#14194