[GH-ISSUE #7860] Tool calling: LLAMA3.2 ignores param types #5027

Closed
opened 2026-04-12 16:05:40 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @fce2 on GitHub (Nov 27, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7860

What is the issue?

If I run llama3.1 (which works fine):

Prompt: What is three plus one?
Calling function: add_two_numbers
Arguments: {'a': 3, 'b': 1}
Function output: 4

But if I run llama3.2:

Prompt: What is three plus one?
Calling function: add_two_numbers
Arguments: {'a': '3', 'b': '1'}
Function output: 31

Unfortunately "llama3.2-vision:11b-instruct-q8_0" does not work at all:
ResponseError: llama3.2-vision:11b-instruct-q8_0 does not support tools
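
The 31 result is plain string concatenation: llama3.2 emitted the arguments as JSON strings, and a tool that applies + without checking types concatenates them. A minimal Python sketch of the failure mode (the reporter's actual tool is in JS, but the behavior is the same):

```python
def add_two_numbers(a, b):
    """Naive tool implementation: assumes the model sends real numbers."""
    return a + b

print(add_two_numbers(3, 1))      # int arguments  -> 4
print(add_two_numbers('3', '1'))  # str arguments  -> '31'
```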

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.4.5

GiteaMirror added the bug label 2026-04-12 16:05:40 -05:00

@rick-github commented on GitHub (Nov 27, 2024):

llama3.2 is a smaller and less capable model than llama3.1 in some areas. The tool is also just an example: if you are writing tools for actual use, you need to include code for verifying and processing inputs. For example, in the case of add_two_numbers, you could use return int(a) + int(b) to protect against string arguments. Or you could make the function more flexible:

def add(l: list) -> float:
    """Adds 2 or more numbers and returns the result"""
    if isinstance(l, str):
        try:
            l = eval(l)
        except:
            pass
    return sum(l)

You can add tools to llama3.2-vision by using the template from llama3.2. Be aware that if a model hasn't been trained for tool use, the results may not be great.
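
A safer variant of the add helper above, sketched with ast.literal_eval in place of eval (so malformed or malicious model output can't execute arbitrary code) and per-element coercion to cover string elements as well:

```python
import ast

def add(values) -> float:
    """Add two or more numbers; tolerate a list the model serialized as a string."""
    if isinstance(values, str):
        # literal_eval parses literals like "[3, 1]" but never runs code
        values = ast.literal_eval(values)
    return sum(float(v) for v in values)

print(add([3, 1]))        # -> 4.0
print(add("[3, 1]"))      # -> 4.0
print(add("['3', '1']"))  # -> 4.0 (string elements coerced too)
```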


@fce2 commented on GitHub (Nov 27, 2024):

I'm using JS, and yes, it's already "return parseInt(a) + parseInt(b)", thanks ;-)

What do you mean by "using the template from llama3.2"?
I'm not sure how to apply (or even get) that template.


@rick-github commented on GitHub (Nov 27, 2024):

Get the llama3.2 template:

ollama show --template llama3.2 > llama3.2.template

Get the llama3.2-vision Modelfile:

ollama show --modelfile llama3.2-vision > Modelfile

Edit Modelfile, replacing the TEMPLATE entry with the contents of llama3.2.template.
Create the new model:

ollama create llama3.2-vision-tools
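
After the edit, the Modelfile would look roughly like this (an illustrative sketch; the FROM tag and the template body should come from the actual output of the two ollama show commands above):

```
FROM llama3.2-vision:11b-instruct-q8_0
TEMPLATE """
...paste the contents of llama3.2.template here...
"""
```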

@fce2 commented on GitHub (Nov 27, 2024):

thanks!!!


@fce2 commented on GitHub (Nov 28, 2024):

I know this is the wrong place... sorry!
But I can't find any forum to discuss Ollama things.
I have some questions I can't find answers to.
Any idea where to go?
Maybe an Ollama forum is a new feature request ;-)


@rick-github commented on GitHub (Nov 28, 2024):

https://discord.gg/ollama


@fce2 commented on GitHub (Nov 28, 2024):

Discord??
That's for chatting, not for serious work (I thought) ;-)
Maybe I'm too old, but I prefer normal forums... Maybe I'll make one...
Thanks.


Reference: github-starred/ollama#5027