[GH-ISSUE #1729] Function call with Ollama and LlamaIndex #984

Closed
opened 2026-04-12 10:40:44 -05:00 by GiteaMirror · 15 comments

Originally created by @sandangel on GitHub (Dec 27, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1729

Hi, I'm looking for a way to make function calling work with Ollama and LlamaIndex.

From my research, we have `format: json` in Ollama, so theoretically there are two ways we can support function calling:

  1. Enforce the LLM to output JSON following a schema, and call the function based on the JSON output (a rough sketch of this approach appears after this list).
     • Not sure how reliable this approach is; has anyone been able to get consistent output from the LLM for the exact same prompt?
     • The client side also needs to implement a retry mechanism, so we can feed the previous output and errors back to the LLM and ask it to regenerate.
     • What schemas and data structures should we use? Currently most people seem to go with the OpenAI function call schema, but it does not support validation, so we would probably need a pydantic model kept up-to-date for validating the LLM's response.
     • Some examples: https://github.com/lgrammel/modelfusion/blob/main/examples/basic/src/model-provider/ollama/ollama-chat-use-tools-or-generator-text-mistral-example.ts
  2. Add an API in Ollama itself to support function calling directly, similar to OpenAI.
     • I'm not sure how this would work, especially since OpenAI is not open source. Do you think it's possible to implement the function call feature directly in Ollama?
     • I'm not sure whether we would need a specific model that supports function calling, so that we can feed `{ role: "tool", content: "tool output" }` into the LLM,
     • or whether it's simply a feature we can add at the API level.

Please let me know what you think and what the right approach for this issue should be going forward.
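
For what it's worth, here is a minimal sketch of approach 1 against Ollama's chat endpoint, combining `format: json` with a pydantic model and the retry mechanism described above. The model name, schema, and prompts are placeholder assumptions, not a recommended design:

```python
import json

import requests
from pydantic import BaseModel, ValidationError

class FunctionCall(BaseModel):
    name: str
    arguments: dict

def call_function_via_llm(prompt: str, retries: int = 3) -> FunctionCall:
    messages = [
        {"role": "system",
         "content": ('Respond ONLY with JSON of the form '
                     '{"name": "<function name>", "arguments": {...}}.')},
        {"role": "user", "content": prompt},
    ]
    for _ in range(retries):
        resp = requests.post(
            "http://localhost:11434/api/chat",
            json={"model": "mistral", "messages": messages,
                  "format": "json", "stream": False},
        )
        content = resp.json()["message"]["content"]
        try:
            return FunctionCall(**json.loads(content))
        except (json.JSONDecodeError, TypeError, ValidationError) as err:
            # Retry mechanism: feed the bad output and the error back in
            # so the model can regenerate a corrected response.
            messages.append({"role": "assistant", "content": content})
            messages.append({"role": "user",
                             "content": f"Invalid output ({err}); try again."})
    raise RuntimeError("model never produced a valid function call")
```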


@xprnio commented on GitHub (Dec 27, 2023):

From personal experience, enforcing the schema is somewhat hit-or-miss, especially depending on the complexity of the schema. I've gotten the best results by being highly explicit in describing the schema (explaining each property in detail, specifying which properties are required), instructing the model to follow only the schema (e.g. "only include properties defined in the schema"), and giving some examples.

For my own project I'm currently using a different approach: I defined a custom "line-based protocol" for the model to use, which allows both "sending messages" and "running commands". This not only reduces the overall response size (JSON is quite verbose and thus increases the number of tokens per response quite a lot), but also enables my application to make use of streaming. The specifics of the protocol are somewhat particular to my application, but the general gist of it is this:

```
Every response line is either a message or a command.
Empty lines are skipped from processing.

A response line is processed as a command if it is prefixed with `{command}:`.
Calling the `a` (action block) command takes in an action and parameters.
Calling the `d` (data) command takes in a JSON object to be passed into the current action block.
Calling the `e` (end) command ends the current action block.

a: insert tasks
d: { "name": "Task name", "completed": false }
e:

Actions can also take in multiple parameters, for example to update a collection we can do:

a: update tasks { "name": "Task name" }
d: { "completed": true }
e:

Response lines which are not prefixed with a command are processed as regular messages.
```

My application explains the protocol, the various actions available, and the collections to the model in the system prompt, and by giving some examples for each of them it does its job quite well (at least with the Mistral and Mixtral models; I haven't tested others yet).
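
To make that concrete, here is a minimal client-side parser sketch for a line protocol like the one above. The `a:`/`d:`/`e:` commands follow the description; the handler names and dispatch are invented for illustration:

```python
import json

class ProtocolParser:
    """Parses streamed response lines into messages and action blocks."""

    def __init__(self):
        self.action = None   # current action header, e.g. "insert tasks"
        self.data = []       # JSON payloads collected for the current block

    def feed_line(self, line: str):
        line = line.strip()
        if not line:
            return  # empty lines are skipped from processing
        if line.startswith("a:"):
            self.action = line[2:].strip()           # open an action block
        elif line.startswith("d:"):
            self.data.append(json.loads(line[2:]))   # attach a JSON payload
        elif line.startswith("e:"):
            self.run_action(self.action, self.data)  # close and execute
            self.action, self.data = None, []
        else:
            self.show_message(line)  # anything else is a plain chat message

    def run_action(self, action, payloads):
        print(f"run {action!r} with {payloads}")

    def show_message(self, text):
        print(f"message: {text}")

parser = ProtocolParser()
for line in ['Sure, adding that task now.',
             'a: insert tasks',
             'd: { "name": "Task name", "completed": false }',
             'e:']:
    parser.feed_line(line)
```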


@sandangel commented on GitHub (Dec 28, 2023):

Hi @xprnio,
Thanks a lot for sharing your experience and the detailed write-up. I wonder which schema you use: does it follow the OpenAI function call schema, or is it a custom schema we define ourselves?


@xprnio commented on GitHub (Dec 28, 2023):

@sandangel
You need to define your own schema, which means that the world is your oyster in that regard. Make the schema as complex or as simple as you want, explain it however you want, etc.

For more on how I used it, have a look at this gist: https://gist.github.com/xprnio/05c23c1911070533115701998b9a26b4. It's quite big (in terms of tokens) and mainly focuses on explaining things in natural language rather than code, but it also incorporates quite a lot of examples to help the LLM understand.

I've also heard that another good way of describing JSON is to use TypeScript (I haven't tested this, but I think it might be a pretty good approach as well).
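
As a rough, untested illustration of that idea, the TypeScript-style type can simply be embedded in the system prompt. The `Task` interface below is invented for the example:

```python
# Untested sketch: describe the expected JSON with a TypeScript-style type
# embedded in the system prompt instead of a JSON Schema.
SYSTEM_PROMPT = """You must reply with a single JSON value matching this type:

interface Task {
  name: string;       // short task title
  completed: boolean; // whether the task is done
  due?: string;       // optional ISO-8601 due date
}

Only include properties defined above."""
```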


@jukofyork commented on GitHub (Jan 1, 2024):

> My application explains the protocol, the various actions available, and the collections to the model in the system prompt, and by giving some examples for each of them it does its job quite well (at least with the Mistral and Mixtral models; I haven't tested others yet).

I'll have to try the Mistral and Mixtral models. I've been adapting the Eclipse IDE plug-in called "AI Assist" to work with the Ollama API instead of the OpenAI API, but so far I've found it excruciatingly hard to get any of the coding-specific LLMs to use function calls:

  • The Deepseek models seem to have been actively fine-tuned to refuse to run any functions, even though they seem to understand what you are asking them to do!
  • The CodeLlama models and their derivatives have much more trouble understanding what you are asking them to do, but will eventually call a function if you totally spell it out and ask them "please call function X"; otherwise they just won't use them.

I'll be interested to see what you are using to help prompt them into using functions in your code. I agree showing examples of how to call the functions is important. The most success I had was just adding the functions to the system prompt in OpenAI API format (with the parameter descriptions, which parameters are optional, etc) with some examples below of how to use them.

I also found that trying to get chat/instruct fine-tuned models to call functions right at the start of their reply (because of the way AI Assist handles streaming and function calls) was near impossible. I've had so many hilarious chats along the lines of "No!!! please use the function at the start of the message!", followed by them apologising before trying to call the function again - doh.

Overall it's been a huge fail so far.


@technovangelist commented on GitHub (Jan 2, 2024):

Hi @sandangel, @xprnio, @jukofyork, thanks for contributing to this issue. For function calling, I have found the best results come from doing a few things:

First, include `format: json`. Then specify in the system prompt that the model needs to output JSON. This gets you most of the way there. What makes it perfect in most cases I have tried is a few-shot prompt. This is easiest with the chat endpoint: include your system prompt, then an example question, and then the example answer in your schema. Repeat that one or two more times. That has worked well for me.
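
Put together, a few-shot request against the chat endpoint might look something like the sketch below (the model name and schema are placeholder assumptions):

```python
# Few-shot prompting via Ollama's chat endpoint with format: json.
# The example Q/A pair teaches the model the exact schema to reproduce.
import requests

messages = [
    {"role": "system",
     "content": ('Answer ONLY with JSON of the form '
                 '{"city": "<name>", "unit": "celsius" | "fahrenheit"}.')},
    # One example round trip; repeat one or two more times for best results.
    {"role": "user", "content": "What's the weather in Paris, in celsius?"},
    {"role": "assistant", "content": '{"city": "Paris", "unit": "celsius"}'},
    # The real question comes last.
    {"role": "user", "content": "How hot is it in Austin, in fahrenheit?"},
]

resp = requests.post("http://localhost:11434/api/chat",
                     json={"model": "mistral", "messages": messages,
                           "format": "json", "stream": False})
print(resp.json()["message"]["content"])
```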


@xprnio commented on GitHub (Jan 2, 2024):

You're right @technovangelist, the way I used to do it was by putting all of the examples into the system prompt instead of "simulating" the examples through the chat interface itself with pre-made messages showing the expected path.


@sampriti026 commented on GitHub (Jan 9, 2024):

@xprnio can you please share an example of your code? I want to build a bot that asks the necessary questions and, when the requisite information has been received, calls the API (imagine a shopping bot). My first version has the LLM ask the user whether all the necessary information has been furnished, and when the user responds with yes, the LLM makes the API call.


@xprnio commented on GitHub (Jan 9, 2024):

@sampriti026 what part of the code do you mean exactly? In all honesty, the application I've been using this approach in has been put "into the drawer" for a bit and isn't really that good in terms of quality. But I do plan on open-sourcing the project as soon as I get time to clean up the code a bit, and I guess there's nothing really stopping me from just throwing it all up here and cleaning it up whenever I have the time.

But yeah, let me know what exactly you want an example of. I'll try to get the project up on Git some time this week and ping you with the appropriate part of it. For context, the project itself is written in Go, just so you know.


@johndpope commented on GitHub (Feb 5, 2024):

I read on Twitter that one user was getting good mileage from making two calls: rather than forcing ChatGPT 3.5 to return JSON on top of answering the prompt, just get the results first, then ask the API to format the result into a JSON response. It was a 100% hit rate.
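
A quick sketch of that two-call pattern using Ollama's chat endpoint (the endpoint usage mirrors the examples above; the prompts and model are illustrative):

```python
# Two-pass approach: answer free-form first, then reformat into JSON.
import requests

OLLAMA = "http://localhost:11434/api/chat"

def chat(messages, **extra):
    body = {"model": "mistral", "messages": messages, "stream": False, **extra}
    return requests.post(OLLAMA, json=body).json()["message"]["content"]

# Pass 1: let the model answer naturally, with no format constraint.
answer = chat([{"role": "user", "content": "Name three European capitals."}])

# Pass 2: ask only for reformatting of the existing answer into JSON.
structured = chat(
    [{"role": "user",
      "content": f'Reformat this as JSON {{"capitals": ["..."]}}:\n\n{answer}'}],
    format="json",
)
print(structured)
```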


@tolasing commented on GitHub (Feb 11, 2024):

For function calling you can try this model: https://huggingface.co/TheBloke/gorilla-openfunctions-v1-GGUF, and then format the prompt template as:

```
FROM ./gorilla-openfunctions-v1.Q4_K_M.gguf

TEMPLATE """
### User:
{{ .Prompt }}
### System:
{{ .System }}

### Response:
"""

SYSTEM """<<function>> functions = [
    {
        "name": "Uber Carpool",
        "api_name": "uber.ride",
        "description": "Find suitable ride for customers given the location, type of ride, and the amount of time the customer is willing to wait",
        "parameters": [
            {"name": "loc", "description": "Location of the starting place of the Uber ride"},
            {"name": "type", "enum": ["plus", "comfort", "black"], "description": "Types of Uber ride the user is ordering"},
            {"name": "time", "description": "The amount of time in minutes the customer is willing to wait"}
        ]
    }
]\n
ASSISTANT:"""

PARAMETER stop "<|system|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
PARAMETER stop "</s>"
```

```
./ollama run gorilla_test "USER: <<question>> Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes"
uber.ride(USER="plus", LOC="94704", TIME=10)
```

and append "USER: <" before the user request.


@jerryan999 commented on GitHub (Feb 13, 2024):

How about this blog: https://www.lepton.ai/blog/structural-decoding-function-calling-for-all-open-llms?


@RachelShalom commented on GitHub (Apr 14, 2024):

> How about this blog: https://www.lepton.ai/blog/structural-decoding-function-calling-for-all-open-llms?

I get a 404 for this URL.


@jerryan999 commented on GitHub (Apr 14, 2024):

@RachelShalom
Sorry about that; here is the link: https://www.lepton.ai/blog/structural-decoding-function-calling-for-all-open-llms


@icetech233 commented on GitHub (Jun 27, 2024):

> For function calling you can try this model https://huggingface.co/TheBloke/gorilla-openfunctions-v1-GGUF and then format the prompt template as: […]

I can't understand this.


@jmorganca commented on GitHub (Jul 26, 2024):

Hi there! Tools are now supported in Ollama; see https://ollama.com/blog/tool-support. After some preliminary testing, it does work with LlamaIndex's OpenAI tooling, and I know they're working on some amazing tool-calling improvements to their Ollama integration.
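
For anyone landing here later, the request shape looks roughly like this with the `ollama` Python package; the weather tool below is the stock example from the blog post, lightly adapted:

```python
# Tool calling with Ollama's native API (see the tool-support blog post).
import ollama

response = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": "What is the weather in Toronto?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The name of the city",
                    },
                },
                "required": ["city"],
            },
        },
    }],
)

# If the model chose to call the tool, the call arrives here instead of text.
print(response["message"]["tool_calls"])
```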
