[PR #10079] [CLOSED] GraniteMoE new engine #39014

Closed
opened 2026-04-22 23:39:29 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/10079
Author: @gabe-l-hart
Created: 4/1/2025
Status: Closed

Base: main ← Head: GraniteMoENewEngine


📝 Commits (10+)

  • 9dd63d7 feat: First cut at adding "granite" model support
  • 382f0c2 fix: Update granite for model API changes after rebase
  • e871df6 fix: Fix how rope factors are passed after rebase
  • cba70a2 fix: Update model forward for batch imputs after rebase
  • 8e77026 fix: (bug) Fix how embedding multiplier is applied
  • 6990799 fix: Remove special-case to extend special tokens for gemma3
  • 41ae240 feat: Add support for multi-regex pretokenizers in BytePairEncoding
  • fb81e68 feat: Support "refact" pretokenizer for Granite 3.0 and 3.1
  • b73fff1 feat: Centralize the construction of the pretokenizer expressions
  • 10b0d39 fix: Use Input() instead of Output() for outputs

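Two of the commits above add multi-regex pretokenizer support to BytePairEncoding. As a rough illustration of the idea only (in Python, with made-up patterns — the real implementation and the actual "refact" regexes are Go code in model/process_text.go), each pattern in the list is applied in turn, further splitting the pieces produced by the previous one:

```python
import re

def split_keep(piece, regex):
    """Split piece around regex matches, keeping the matched text."""
    out, last = [], 0
    for m in regex.finditer(piece):
        if m.start() > last:
            out.append(piece[last:m.start()])
        out.append(m.group())
        last = m.end()
    if last < len(piece):
        out.append(piece[last:])
    return out

def pretokenize(text, patterns):
    """Apply each regex in order; later patterns split earlier pieces further."""
    pieces = [text]
    for pattern in patterns:
        regex = re.compile(pattern)
        pieces = [part for piece in pieces for part in split_keep(piece, regex)]
    return pieces

# Illustrative patterns only, not the ones shipped for Granite
PATTERNS = [r"\d+", r"[A-Za-z]+"]

print(pretokenize("Hello world 123!", PATTERNS))
# → ['Hello', ' ', 'world', ' ', '123', '!']
```

The BPE merge step then runs inside each of these pieces, never across piece boundaries.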
📊 Changes

9 files changed (+754 additions, -10 deletions)

View changed files

📝 convert/convert.go (+2 -0)
➕ convert/convert_granite.go (+231 -0)
📝 ml/backend.go (+2 -0)
📝 ml/backend/ggml/ggml.go (+14 -0)
➕ model/models/granite/model.go (+197 -0)
➕ model/models/granitemoe/model.go (+243 -0)
📝 model/models/models.go (+2 -0)
📝 model/process_text.go (+43 -8)
📝 model/process_text_test.go (+20 -2)

📄 Description


This PR is a follow on to #9966 that extends support to the GraniteMoE architecture.
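One commit above fixes how the embedding multiplier is applied. As a hedged sketch of why that matters (illustrative Python with made-up values — the PR's actual forward pass is Go code in model/models/granite and model/models/granitemoe): Granite-family checkpoints carry scalar hyperparameters such as embedding_multiplier and residual_multiplier that a plain Llama-style decoder does not have, so each must be applied at exactly one well-defined point.

```python
# Hypothetical values; in practice these are read from the model metadata.
EMBEDDING_MULTIPLIER = 12.0
RESIDUAL_MULTIPLIER = 0.22

def embed(rows, multiplier=EMBEDDING_MULTIPLIER):
    """Token embeddings are scaled once, right after the table lookup."""
    return [[x * multiplier for x in row] for row in rows]

def residual_add(hidden, layer_out, multiplier=RESIDUAL_MULTIPLIER):
    """Each block's output is damped before being added back to the stream."""
    return [h + multiplier * o for h, o in zip(hidden, layer_out)]

print(embed([[1.0, 2.0]]))
# → [[12.0, 24.0]]
```

Applying such a multiplier twice (or skipping it) leaves the model runnable but silently degrades its output, which is why the greedy-decode comparison below is a useful check.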

Outstanding TODO

  • Merge #9966
  • Add support for GraniteMoE in the convert package
Testing

(using the same find-ollama-gguf.sh from the previous PR)

```sh
# Run the model with the llama.cpp engine
./ollama runner -model $(find-ollama-gguf.sh granite3.1-moe:1b) -port 12345 -n-gpu-layers 100

# Run the model with the ollama engine
./ollama runner --ollama-engine -model $(find-ollama-gguf.sh granite3.1-moe:1b) -port 12346 -n-gpu-layers 100
```

```py
import requests
import json

llama_url = "http://localhost:12345/completion"
ollama_url = "http://localhost:12346/completion"
request = {"prompt": "Hi there", "options": {"temperature": 0, "num_predict": 100}}

llama_resp = requests.post(llama_url, json=request)
llama_text = "".join([json.loads(line)["content"] for line in llama_resp.text.splitlines()])

ollama_resp = requests.post(ollama_url, json=request)
ollama_text = "".join([json.loads(line)["content"] for line in ollama_resp.text.splitlines()])

print(f"############ llama.cpp output:\n{llama_text}")
print("-----------")
print(f"############ ollama output:\n{ollama_text}")
```
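When the two engines' greedy decodes disagree, it helps to know where. A small helper along these lines (not part of the PR) reports the index of the first differing character between the two decoded strings:

```python
def first_divergence(a: str, b: str) -> int:
    """Index of the first differing character, or -1 if the strings match."""
    for i, (ca, cb) in enumerate(zip(a, b)):
        if ca != cb:
            return i
    # One string is a prefix of the other: they diverge at the shorter length.
    return -1 if len(a) == len(b) else min(len(a), len(b))

print(first_divergence("hello world", "hello there"))
# → 6
```

Feeding it llama_text and ollama_text from the script above narrows a mismatch down to the first divergent token.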

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-22 23:39:29 -05:00

Reference: github-starred/ollama#39014