[PR #13277] [CLOSED] Feat/mrope split models #24684

Closed
opened 2026-04-19 17:44:21 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/13277
Author: @iosub
Created: 11/30/2025
Status: Closed

Base: `main` ← Head: `feat/mrope-split-models`


📝 Commits (8)

  • a9c6818 Revert "vulkan: temporary cary of vulkan fixes (#12971)"
  • d2917b7 ggml update to b7087
  • 366ed3e fix argsort on metal
  • 9a4271c update to b7108
  • af56743 fix bakllava regression
  • 4fd4574 fix lint logic to only compare against merge base and ignore files that aren't touched in this PR.
  • 18a8e92 Merge remote-tracking branch 'dhiltgen/ggml_bump' into feat/mrope-split-models
  • 773d9c0 feat: support split multimodal models with M-RoPE (Qwen3-VL)
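The final commit adds M-RoPE (multimodal rotary position embedding) support for split multimodal models such as Qwen3-VL. As background, M-RoPE assigns each token a triple of position components (temporal, height, width): text tokens use one shared value for all three, while image patch tokens share a temporal position and vary by patch row and column. The sketch below illustrates that position-ID construction only; it is not the PR's actual code, and the function name and layout are illustrative assumptions.

```python
# Hypothetical sketch of M-RoPE position-ID construction for a
# Qwen-VL-style model. Not taken from this PR; names are illustrative.

def mrope_positions(n_text_before, grid_h, grid_w, n_text_after):
    """Return per-token (temporal, height, width) position triples.

    Text tokens use the same value for all three components; image
    patch tokens share one temporal position but differ spatially.
    """
    pos = []
    p = 0
    # Text before the image: all three components advance together.
    for _ in range(n_text_before):
        pos.append((p, p, p))
        p += 1
    # Image patches: one temporal step, spatial offsets per row/column.
    t = p
    for h in range(grid_h):
        for w in range(grid_w):
            pos.append((t, t + h, t + w))
    # Text after the image resumes past the largest position used.
    p = t + max(grid_h, grid_w)
    for _ in range(n_text_after):
        pos.append((p, p, p))
        p += 1
    return pos
```

For a 2-token prefix, a 2×2 patch grid, and a 1-token suffix, this yields `(0,0,0) (1,1,1)` for the prefix, `(2,2,2) (2,2,3) (2,3,2) (2,3,3)` for the patches, then `(4,4,4)` for the suffix, which is the general shape M-RoPE gives mixed text/image sequences.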

📊 Changes

287 files changed (+27548 additions, -22424 deletions)

View changed files

📝 .github/workflows/test.yaml (+1 -1)
📝 Makefile.sync (+1 -1)
📝 discover/runner.go (+1 -0)
📝 fs/ggml/ggml.go (+180 -3)
📝 fs/ggml/gguf.go (+4 -1)
📝 llama/build-info.cpp (+1 -1)
📝 llama/llama.cpp/.rsync-filter (+3 -0)
📝 llama/llama.cpp/common/common.cpp (+34 -5)
📝 llama/llama.cpp/common/common.h (+15 -1)
📝 llama/llama.cpp/common/json-schema-to-grammar.cpp (+21 -3)
📝 llama/llama.cpp/common/json-schema-to-grammar.h (+2 -0)
📝 llama/llama.cpp/common/log.cpp (+6 -0)
📝 llama/llama.cpp/common/log.h (+2 -0)
📝 llama/llama.cpp/include/llama.h (+7 -3)
📝 llama/llama.cpp/src/llama-arch.cpp (+140 -0)
📝 llama/llama.cpp/src/llama-arch.h (+13 -0)
📝 llama/llama.cpp/src/llama-batch.cpp (+63 -31)
📝 llama/llama.cpp/src/llama-batch.h (+12 -1)
📝 llama/llama.cpp/src/llama-chat.cpp (+32 -0)
📝 llama/llama.cpp/src/llama-chat.h (+1 -0)

...and 80 more files

📄 Description

No description provided


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-19 17:44:21 -05:00

Reference: github-starred/ollama#24684