[GH-ISSUE #754] Support for Autogen #46867

Closed
opened 2026-04-28 01:20:05 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @greg-peters on GitHub (Oct 11, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/754

#305 Requesting support to use ollama with Autogen


@technovangelist commented on GitHub (Oct 11, 2023):

It looks like Autogen would need to add support for calling Ollama on their side. There is an existing issue on their repo you could add your use case to: https://github.com/microsoft/autogen/issues/46


@MattBanak commented on GitHub (Nov 12, 2023):

It's still _way_ too early to tell, but it seems more and more like the... "industry" (? can we use that term yet?) seems to be trending towards creating their LLM projects with APIs that treat OpenAI's GPT API as a spec so that smaller projects can be drop-in replacements.

Ollama _could_ decide to follow this trend, or you could use LiteLLM to fit the local models into that spec and keep going! More details in the thread linked on the "completed" status, but here's the last message describing how to do this: https://github.com/jmorganca/ollama/issues/305#issuecomment-1752150867

In my opinion, I think treating OpenAI's API as a spec is a good decision, and seems to be adopted more and more every day. I'm sure a unified spec will come out at some point, but for now that API seems to be the closest thing we have and cross-project plug-and-play compatibility is invaluable when technology is moving at the pace LLMs are.
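To make the "drop-in replacement" idea concrete, here is a minimal stdlib-only sketch of the request shape an OpenAI-compatible `/v1/chat/completions` endpoint expects. The host, port, and model name are illustrative placeholders, not anything Ollama or LiteLLM guarantees:

```python
import json

def build_chat_request(base_url, model, prompt):
    """Return (url, body) for a POST against an OpenAI-compatible
    chat completions route. Any server treating the OpenAI API as a
    spec (OpenAI itself, a LiteLLM proxy, a local model behind one)
    should accept a request of this shape."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

url, body = build_chat_request("http://localhost:8000/v1", "llama2", "Hello!")
print(url)  # http://localhost:8000/v1/chat/completions
```

Because the payload shape is the same everywhere, swapping providers is just a matter of changing `base_url` and `model` — which is exactly what makes the spec-as-API trend valuable.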


@dlaliberte commented on GitHub (Dec 19, 2023):

> It looks like Autogen would need to support calling ollama from them. Looks like there is an existing issue on their repo you could add your use case to: [microsoft/autogen#46](https://github.com/microsoft/autogen/issues/46)

Autogen supports a `base_url` config option, e.g. `"base_url": "http://localhost:8000/v1"`, so no further change is needed on Autogen's side. I'm using it now, but can't get anything useful out of Ollama.

It appears Ollama's OpenAI API support is missing what AutoGen requires, e.g. `/v1/chat/completions`. I can't tell what OpenAI API support Ollama does have; I only see `/api/version` working.
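For reference, a config along these lines is what the `base_url` option above would look like in practice. This is a sketch assuming AutoGen's `config_list` format; the port and model name are example values for whatever local OpenAI-compatible server is in use:

```python
# Sketch of an AutoGen llm_config pointing at a local
# OpenAI-compatible endpoint (port and model name are examples).
llm_config = {
    "config_list": [
        {
            "model": "llama2",                       # model the local server exposes
            "base_url": "http://localhost:8000/v1",  # local OpenAI-compatible server
            "api_key": "not-needed",                 # local servers typically ignore this
        }
    ],
}

# An AutoGen agent would then be constructed with this config, e.g.
# assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
print(llm_config["config_list"][0]["base_url"])
```

With this config in place, everything hinges on the server actually implementing `/v1/chat/completions` — which is the gap being reported here.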


Reference: github-starred/ollama#46867