[GH-ISSUE #5538] autogen: Model llama3 is not found #49969

Closed
opened 2026-04-28 13:36:28 -05:00 by GiteaMirror · 6 comments

Originally created by @jjeejj on GitHub (Jul 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5538

What is the issue?

Ref doc: https://ollama.com/blog/openai-compatibility

```python
llm_config = {
    "model": "llama3",
    "api_key": "ollama",
    "base_url": "http://localhost:11434/v1",
}
```

[autogen.oai.client: 07-08 11:33:27] {329} WARNING - Model llama3 is not found.

How can I solve this?

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.1.32

GiteaMirror added the question label 2026-04-28 13:36:28 -05:00

@rick-github commented on GitHub (Jul 9, 2024):

ollama pull llama3
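
Once the pull completes, you can confirm that the server actually exposes the model under the name autogen is configured with. A minimal sketch, assuming the default localhost:11434 endpoint and Ollama's /api/tags listing:

```python
# List the models available on the local Ollama server (default port assumed)
# and check that "llama3" is among them.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

print(models)                                        # e.g. ['llama3:latest', ...]
print(any(m.startswith("llama3") for m in models))   # True once the pull succeeded
```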


@iamkhajan commented on GitHub (Jul 12, 2024):

Same issue; I am behind a corporate proxy. `ollama pull llama3` works and I can chat in the terminal. However, when I use it as an API from a browser, AutoGen Studio, or a script, it doesn't work. I always get "Web Site does not exist (dns_unresolved_hostname)". How can I fix this? I already have http_proxy/https_proxy set. Thanks.
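
The dns_unresolved_hostname message typically comes from the corporate proxy itself trying (and failing) to resolve the host. If the Ollama server is local, one thing worth ruling out is that requests to localhost are being routed through the proxy. A hedged sketch, assuming the server is on the default port and an HTTP client that honours the no_proxy convention:

```python
# Possible workaround sketch: exclude local addresses from the proxy so that
# requests to the Ollama server go direct instead of through the corporate proxy.
import os
import urllib.request

os.environ["NO_PROXY"] = "localhost,127.0.0.1"
os.environ["no_proxy"] = "localhost,127.0.0.1"

# Quick connectivity check against the assumed local server.
with urllib.request.urlopen("http://localhost:11434/api/version") as resp:
    print(resp.read().decode())   # e.g. {"version":"0.1.32"}
```

If the check still fails after bypassing the proxy, the cause is more likely one of the setups rick-github asks about below.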


@rick-github commented on GitHub (Jul 12, 2024):

Not really enough information to diagnose the issue.

Do you set OLLAMA_HOST when you chat in the terminal? Does autogen/browser/script run on the same machine as the ollama client? Do the ollama client and the ollama server run on the same machine? Is the proxy configured to forward port 11434?

If autogen is reporting a missing model but you can use it from the ollama client, it sounds like you have two servers running.
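
One way to test the two-servers hypothesis is to ask each candidate endpoint which models it has and compare. A sketch, assuming the default local port; the second URL is a placeholder for wherever OLLAMA_HOST points:

```python
# Compare the model lists reported by two candidate Ollama endpoints.
import json
import urllib.request

endpoints = [
    "http://localhost:11434",    # the base_url autogen is configured with
    "http://remote-host:11434",  # hypothetical second server (whatever OLLAMA_HOST points at)
]

for base in endpoints:
    try:
        with urllib.request.urlopen(f"{base}/api/tags", timeout=5) as resp:
            names = [m["name"] for m in json.load(resp)["models"]]
        print(base, "->", names)
    except OSError as exc:
        print(base, "-> unreachable:", exc)

# If llama3 only shows up on one endpoint, the terminal client and autogen
# are talking to different servers.
```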


@shuaib7860 commented on GitHub (Jul 23, 2024):

Hi, I have this same problem, but I can give some more information about my setup. I have an Ollama model hosted on a remote server/machine, which I reach by setting up port forwarding. I still get the same error that the original author of the post shared, see below.

[autogen.oai.client: 07-23 16:13:02] {315} WARNING - Model llama3:70b is not found. The cost will be 0. In your config_list, add field {"price" : [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.

The weird thing is that watching the GPU on the remote machine that hosts the LLM with nvtop shows it firing off; the LLM is clearly being used even though the warning says it is not found. Very strange.

Any idea on what could be causing this issue @rick-github?


@rick-github commented on GitHub (Jul 23, 2024):

This is a warning message about the cost:

https://github.com/microsoft/autogen/blob/b7bdbe1ecc1c00abd5172f472c6051d0230249bd/autogen/oai/client.py#L330

It has no effect on operations; autogen will still use the model. It's just that, for accounting purposes, autogen treats it as having no cost.


@shuaib7860 commented on GitHub (Jul 24, 2024):

Ah, thank you @rick-github. For anyone who comes across this warning later, see this comment for an example of how to suppress it: https://github.com/microsoft/autogen/issues/2555#issuecomment-2208623783
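
As the warning text itself suggests, the suppression amounts to giving the model entry an explicit price so autogen's cost accounting has something to use. A minimal sketch, assuming the same local Ollama setup as the original post; the values are [prompt_price_per_1k, completion_token_price_per_1k], and [0, 0] simply records a locally hosted model as free:

```python
# Sketch of a config_list entry with the "price" field the warning asks for.
config_list = [
    {
        "model": "llama3",
        "api_key": "ollama",
        "base_url": "http://localhost:11434/v1",
        "price": [0, 0],  # [prompt_price_per_1k, completion_token_price_per_1k]
    }
]

llm_config = {"config_list": config_list}
```

With a price supplied, the cost warning should no longer be emitted.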
