[GH-ISSUE #7544] Despite being advertised as such, granite3-dense does not seem to support tools. #66855

Closed
opened 2026-05-04 08:26:42 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @chhu on GitHub (Nov 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7544

What is the issue?

granite3-dense (https://ollama.com/library/granite3-dense)
I gave it bash as a tool, but it refuses to use it; other models work fine (qwen2.5 32b outshines all others for shell use).
Tool setup and system prompt here: https://github.com/chhu/ollash/blob/main/index.js

asterope:~ >ask List file contents of current folder
Querying granite3-dense:8b-instruct-q8_0...

```bash
ls
```

asterope:~ >

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.1.40

GiteaMirror added the bug label 2026-05-04 08:26:42 -05:00
Author
Owner

@rick-github commented on GitHub (Nov 7, 2024):

You need ollama version 0.3.14 or later to run granite3-dense.

Author
Owner

@chhu commented on GitHub (Nov 8, 2024):

Sorrysorrysorry, messed that up, I am actually on 0.4.0...

Author
Owner

@kellyaa commented on GitHub (Nov 11, 2024):

Try adding this to your prompt (I did, and it worked!):

`When calling a tool, respond in the format: <function_call> {"name": function name, "arguments": dictionary of argument name and its value}. Do not use variables.`

We are investigating whether this means we need to update the prompt template.
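The workaround above amounts to appending the format hint to the system prompt before the chat request. A minimal sketch, assuming the `ollama` Python client and an illustrative shell-assistant prompt (the tool list and model tag are taken from this thread; the helper name is made up):

```python
# Format hint from kellyaa's comment: nudges granite3-dense to emit
# <function_call> blocks instead of a plain code fence.
FORMAT_HINT = (
    'When calling a tool, respond in the format: <function_call> '
    '{"name": function name, "arguments": dictionary of argument name '
    'and its value}. Do not use variables.'
)

def build_messages(system_prompt: str, user_query: str) -> list[dict]:
    """Append the tool-format hint to the system prompt (illustrative helper)."""
    return [
        {"role": "system", "content": system_prompt + "\n" + FORMAT_HINT},
        {"role": "user", "content": user_query},
    ]

messages = build_messages(
    "You are a shell assistant.",
    "List file contents of current folder",
)
# The messages would then go to the chat endpoint, e.g.:
# ollama.chat(model="granite3-dense:8b-instruct-q8_0",
#             messages=messages, tools=[...])
```

The actual network call is left commented out; only the prompt assembly is the point here.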

Author
Owner

@bjhargrave commented on GitHub (Nov 11, 2024):

See also https://www.ibm.com/granite/docs/models/granite/#tool-usefunction-calling-weather-scenario

I used the chat_template in the model.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-3.0-8b-instruct")
# get_stock_price and get_current_weather are plain Python functions with
# type hints and docstrings; apply_chat_template converts them to schemas.
tools = [get_stock_price, get_current_weather]
chat = [
    {"role": "system", "content": "You are a helpful assistant with access to the following function calls. Your task is to produce a list of function calls necessary to generate response to the user utterance. Use the following function calls as required."},
    {"role": "user", "content": query},
]

prompt = tokenizer.apply_chat_template(conversation=chat, tools=tools, tokenize=False, add_generation_prompt=True)
```

I found that the tool schema format generated by the call from the functions (e.g. transformers.utils.get_json_schema) produced better results than the example schema on the link above. The tool schema generated by langchain_ibm.chat_models.convert_to_openai_tool also worked well.
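For reference, the OpenAI-style schema shape that helpers like `transformers.utils.get_json_schema` derive from a type-hinted, docstring-annotated function looks roughly like the sketch below. The `get_current_weather` signature and field descriptions are illustrative, not taken from IBM's docs:

```python
# Illustrative source function: name, hints, and docstring drive the schema.
def get_current_weather(location: str) -> str:
    """Get the current weather for a location.

    Args:
        location: The city to look up.
    """
    raise NotImplementedError

# Hand-written equivalent of the generated OpenAI-style tool schema.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city to look up.",
                },
            },
            "required": ["location"],
        },
    },
}
```

Passing a list of such dicts (or the raw functions, with recent transformers versions) as `tools` to `apply_chat_template` is what populates the tool section of the Granite prompt.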

Author
Owner

@rick-github commented on GitHub (Jan 13, 2025):

As mentioned, granite3-dense requires extra encouragement to use tools. The template for granite3.1-dense (https://ollama.com/library/granite3.1-dense) includes this, so if you don't have a particular requirement for granite3, I suggest moving to the updated model. Closing this, but feel free to re-open if you continue to have problems.

Reference: github-starred/ollama#66855