[PR #9966] [CLOSED] Granite new engine #59806

opened 2026-04-29 14:44:30 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/9966
Author: @gabe-l-hart
Created: 3/24/2025
Status: Closed

Base: main ← Head: GraniteNewEngine


📝 Commits (10+)

  • 9dd63d7 feat: First cut at adding "granite" model support
  • 382f0c2 fix: Update granite for model API changes after rebase
  • e871df6 fix: Fix how rope factors are passed after rebase
  • cba70a2 fix: Update model forward for batch imputs after rebase
  • 8e77026 fix: (bug) Fix how embedding multiplier is applied
  • 6990799 fix: Remove special-case to extend special tokens for gemma3
  • 41ae240 feat: Add support for multi-regex pretokenizers in BytePairEncoding
  • fb81e68 feat: Support "refact" pretokenizer for Granite 3.0 and 3.1
  • b73fff1 feat: Centralize the construction of the pretokenizer expressions
  • 10b0d39 fix: Use Input() instead of Output() for outputs
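
The three pretokenizer commits above (41ae240, fb81e68, b73fff1) teach BytePairEncoding to split input with a sequence of regular expressions rather than a single one: each fragment produced by one pattern is re-split by the next. A rough Python illustration of that idea only (the PR's actual implementation is in Go, and these patterns are simplified placeholders, not Granite's real "refact" expressions):

import re

def multi_regex_pretokenize(text, patterns):
    """Split text into pretokens by applying each regex in sequence:
    every fragment produced by one pattern is re-split by the next."""
    fragments = [text]
    for pattern in patterns:
        next_fragments = []
        for frag in fragments:
            next_fragments.extend(re.findall(pattern, frag))
        fragments = next_fragments
    return fragments

# Placeholder patterns for illustration -- NOT the real Granite expressions
patterns = [r"\s+|\S+", r"[A-Za-z]+|[0-9]+|[^A-Za-z0-9\s]+|\s+"]
print(multi_regex_pretokenize("Hello, Granite 3.1!", patterns))
# -> ['Hello', ',', ' ', 'Granite', ' ', '3', '.', '1', '!']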

📊 Changes

6 files changed (+494 additions, -10 deletions)

📝 convert/convert.go (+2 -0)
➕ convert/convert_granite.go (+231 -0)
➕ model/models/granite/model.go (+197 -0)
📝 model/models/models.go (+1 -0)
📝 model/process_text.go (+43 -8)
📝 model/process_text_test.go (+20 -2)

📄 Description

This PR adds the "granite" model architecture to the Ollama engine. It mirrors the changes made in the corresponding llama.cpp PR (https://github.com/ggml-org/llama.cpp/pull/9412).

Testing

find-ollama-gguf.sh
#!/usr/bin/env bash

# Helper script to find the GGUF file in the blob dir that is associated with a
# given ollama model

ollama_models=${OLLAMA_MODELS:-$HOME/.ollama/models}

model_name=$1

# TODO: Support other registries
registry="registry.ollama.ai"
if [[ "$model_name" == *":"* ]]; then
    tag="$(echo $model_name | cut -d':' -f2)"
    model_name="$(echo $model_name | cut -d':' -f1)"
else
    tag="latest"
fi
if ! [[ "$model_name" == *"/"* ]]; then
    model_name="library/$model_name"
fi
manifest="$ollama_models/manifests/$registry/$model_name/$tag"

# Use jq to extract the biggest layer (and assume that's the GGUF!)
biggest_layer=$(jq -r '.layers | max_by(.size) | .digest' "$manifest")
blob_file="$ollama_models/blobs/$(echo "$biggest_layer" | sed 's,:,-,')"

echo "$blob_file"
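
For reference, the same manifest-to-blob resolution as a Python sketch. It mirrors the shell script above, assuming the same default model directory and registry:

import json
import os
import sys

def find_ollama_gguf(model_name, registry="registry.ollama.ai"):
    """Resolve an ollama model name to the GGUF blob backing it,
    mirroring find-ollama-gguf.sh: pick the largest layer in the manifest."""
    models = os.environ.get("OLLAMA_MODELS", os.path.expanduser("~/.ollama/models"))
    name, _, tag = model_name.partition(":")
    tag = tag or "latest"
    if "/" not in name:
        name = f"library/{name}"
    manifest_path = os.path.join(models, "manifests", registry, name, tag)
    with open(manifest_path) as f:
        manifest = json.load(f)
    # Assume the biggest layer is the GGUF, as the shell script does
    biggest = max(manifest["layers"], key=lambda layer: layer["size"])
    return os.path.join(models, "blobs", biggest["digest"].replace(":", "-"))

if __name__ == "__main__":
    print(find_ollama_gguf(sys.argv[1]))
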
# Run the model with the llama.cpp engine
./ollama runner -model $(find-ollama-gguf.sh granite3.2:2b) -port 12345 -n-gpu-layers 100

# Run the model with the ollama engine
./ollama runner --ollama-engine -model $(find-ollama-gguf.sh granite3.2:2b) -port 12346 -n-gpu-layers 100
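
The runners take a moment to load the model, so a request fired immediately can fail. A small convenience sketch (not part of the PR) that waits for the two ports above to accept connections before running the comparison below:

import socket
import time

def wait_for_port(port, host="localhost", timeout=120.0):
    """Poll until the runner is accepting TCP connections on `port`."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return
        except OSError:
            time.sleep(0.5)
    raise TimeoutError(f"runner on port {port} did not come up")

for port in (12345, 12346):  # llama.cpp runner, ollama runner
    wait_for_port(port)
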
import requests
import json

# Both runners expose the llama.cpp-style /completion endpoint
llama_url = "http://localhost:12345/completion"
ollama_url = "http://localhost:12346/completion"

# Greedy decoding (temperature 0) so the two engines can be compared directly
request = {"prompt": "Hi there", "options": {"temperature": 0, "num_predict": 100}}

# Each response line is a JSON object with a "content" chunk; stitch them together
llama_resp = requests.post(llama_url, json=request)
llama_text = "".join([json.loads(line)["content"] for line in llama_resp.text.splitlines()])

ollama_resp = requests.post(ollama_url, json=request)
ollama_text = "".join([json.loads(line)["content"] for line in ollama_resp.text.splitlines()])

print(f"############ llama.cpp output:\n{llama_text}")
print("-----------")
print(f"############ ollama output:\n{ollama_text}")

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-29 14:44:30 -05:00
Reference: github-starred/ollama#59806