[GH-ISSUE #1065] Support for openai style functions #523

Closed
opened 2026-04-12 10:13:04 -05:00 by GiteaMirror · 8 comments

Originally created by @tionis on GitHub (Nov 10, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1065

I couldn't find any information on whether this is considered out of scope or not, but some support for function definitions would be great.

@K1ngjulien commented on GitHub (Nov 11, 2023):

It's possible to use Ollama models with LangChain.

https://python.langchain.com/docs/integrations/llms/ollama

LangChain already supports Functions.

https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent

I have not tested this, but from the example code it looks like you should just be able to swap out the llm with langchain.llms.Ollama.

@pdavis68 commented on GitHub (Nov 11, 2023):

And really, functions aren't a big deal. You can add the function stuff to the prompt itself.

In the game I'm writing, I give the LLM a prompt and then I give it a list of functions it can execute to collect information about game state. It then responds with the functions it wants to execute and what parameters it's passing. None of this is using OpenAI's functions because I don't want to be tied to their implementation. My method is LLM agnostic.

Somewhat differently, when I use OpenAI's API for programmatic work to have it return data to me, I'll simply tell it to respond with a JSON structure that my app will recognize. I then have a regex I use to pull the JSON out of the response (for those times it likes to be verbose and over-answer, like "Sure, here's the JSON { ... }").
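
A minimal sketch of that extraction step (the regex, field names, and example reply are illustrative, not the actual code from the game):

```python
import json
import re


def extract_json(reply: str):
    """Pull the first JSON object out of a possibly verbose LLM reply."""
    # Greedy match from the first "{" to the last "}"; DOTALL spans newlines.
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None  # braces were present but didn't enclose valid JSON


# A chatty reply that wraps the payload in prose:
reply = 'Sure, here\'s the JSON: {"action": "inspect", "target": "door"}'
result = extract_json(reply)
```

This works for a single top-level object; nested braces are fine because the match is greedy, but multiple separate JSON objects in one reply would need a stricter parser.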

But also, as @K1ngjulien said, you can use Langchain or if you're doing .NET, you can use Semantic Kernel, to perform the back-and-forth. For now, I prefer doing it all myself.

@tionis commented on GitHub (Nov 20, 2023):

I'd rather not have to integrate with the Python ecosystem, so I'll just script on top of the prompt then.

@pdavis68 commented on GitHub (Nov 20, 2023):

@tionis You could probably just write your own library for it. I mean, under the hood, functions are just part of the prompt anyway. So you could probably wrap your prompts and functions into a kind of meta-prompt that explains how functions work.
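
One way such a meta-prompt wrapper might look (the function schema, wording, and `get_weather` example are hypothetical, purely to illustrate the idea):

```python
def build_meta_prompt(user_prompt: str, functions: list[dict]) -> str:
    """Wrap a user prompt with a plain-text description of callable functions."""
    lines = [
        "You may call the functions listed below. To call one, reply ONLY with",
        'JSON of the form {"function": "<name>", "arguments": {...}}.',
        "",
        "Available functions:",
    ]
    for fn in functions:
        params = ", ".join(f"{n}: {t}" for n, t in fn["parameters"].items())
        lines.append(f"- {fn['name']}({params}): {fn['description']}")
    lines += ["", f"User request: {user_prompt}"]
    return "\n".join(lines)


# Hypothetical function definition for a small personal assistant:
functions = [
    {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {"city": "string"},
    }
]
prompt = build_meta_prompt("What's the weather in Berlin?", functions)
```

The resulting `prompt` is what you'd send to the model; the model's JSON reply can then be parsed and dispatched to your own function table.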

@tionis commented on GitHub (Nov 20, 2023):

Yeah, I just started doing that for a small personal assistant.
I also just discovered that Zephyr can't follow instructions very well 🤔. Llama 2 worked better for this.

@technovangelist commented on GitHub (Dec 4, 2023):

I think the original question is about function calling. We recently introduced `format: json`, which allows you to get well-formed JSON output and specify the schema. You can do this in the CLI or the API. So I will go ahead and close the issue now. If you think there is anything we left out, reopen and we can address it. Thanks for being part of this great community.
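
As a sketch, asking Ollama's generate endpoint for JSON output could look like this. The model name, prompt, and local host/port are placeholders; only the stdlib is used, and the actual request runs only when executed as a script against a local Ollama server:

```python
import json
import urllib.request


def build_payload(prompt: str, model: str = "llama2") -> dict:
    """Request body for Ollama's /api/generate with JSON-constrained output."""
    return {
        "model": model,
        # Telling the model to use JSON in the prompt is recommended
        # alongside the format flag.
        "prompt": prompt + " Respond using JSON.",
        "format": "json",  # ask the server for well-formed JSON output
        "stream": False,   # return one complete response object
    }


if __name__ == "__main__":
    payload = build_payload('List three primary colors as {"colors": [...]}.')
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    print(json.loads(body["response"]))  # the model's JSON reply, parsed
```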

@tionis commented on GitHub (Dec 7, 2023):

Makes sense, perhaps an example in the docs on how to build something like that would be useful for some people.
If I implement a small wrapper, I might add a PR to add an example to the docs.

@technovangelist commented on GitHub (Dec 8, 2023):

I have a TypeScript example, but the essential parts of function calling are roughly equivalent. https://github.com/jmorganca/ollama/tree/main/examples/typescript-functioncalling

I will be posting another Python version soon.

Reference: github-starred/ollama#523