[GH-ISSUE #9333] autogen #6094

Open
opened 2026-04-12 17:25:37 -05:00 by GiteaMirror · 1 comment

Originally created by @harrrrden on GitHub (Feb 25, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9333

When I use this code to build the agent, some local models can be used, while others cannot.

from autogen import AssistantAgent, UserProxyAgent

config_list = [
    {
        "model": "llama2:latest",
        "base_url": "http://localhost:11434/v1",
        "api_key": "ollama",
    }
]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})

user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding", "use_docker": False})
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")

For example, codellama and deepseek-r1 work, but when I try to use llama2 I get the following error:
openai.InternalServerError: Error code: 502
How can I resolve this issue?

OS: Windows

Ollama version: 0.5.7
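
One way to narrow this down is to call Ollama's OpenAI-compatible endpoint directly with the openai client, using the same base_url, api_key, and model string as above but without AutoGen in the loop. The sketch below is a minimal reproduction attempt, not part of the original report; if it also returns a 502, the problem is on the Ollama side rather than in the agent configuration.

from openai import OpenAI

# Same endpoint and placeholder key as in the AutoGen config above.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Ask the model that fails through AutoGen for a trivial completion.
response = client.chat.completions.create(
    model="llama2:latest",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)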


@lagane-78 commented on GitHub (Mar 5, 2025):

I'm seeing a similar error, but getting a 404 even though the model is there. I tried both llama3.2 and llama2. Unfortunately we are not allowed to download deepseek :-(

import autogen

# Configure agents for Autogen to explicitly use Ollama
config_list = [{
    "api_type": "ollama",
    "model": "llama3",
    "base_url": "<local host>:11434"  # Explicitly define base_url
}]
llm_config = {
    "config_list": config_list,
    "cache_seed": 42
}

# Initialize AssistantAgent with explicit Ollama config
assistant = autogen.AssistantAgent(
    name="PII_Detector",
    llm_config=llm_config,  # Use fixed llm_config structure
    system_message="You are an AI assistant specialized in detecting PII in text."
)

# Debugging AssistantAgent
print(f"Debug: AssistantAgent configured with name '{assistant.name}' and system message '{assistant.system_message}'")

# Initialize UserProxyAgent
user_proxy = autogen.UserProxyAgent(
    name="User_Proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
    is_termination_msg=lambda x: "TERMINATE" in x.get("content", ""),
)

# Debugging UserProxyAgent
print(f"Debug: UserProxyAgent configured with name '{user_proxy.name}' and human_input_mode '{user_proxy.human_input_mode}'")

# Initiate the conversation
initial_message = "Identify any PII in this text: 'John Doe's SSN is 123-45-6789 and his account number is 56789 and he was born on 01/01/1990.'"
print(f"Debug: Initiating chat with message: '{initial_message}'")
user_proxy.initiate_chat(
    assistant,
    message=initial_message
)

raise ResponseError(e.response.text, e.response.status_code) from None

ollama._types.ResponseError: model "llama3" not found, try pulling it first (status code: 404)

Also tried llama2 and llama3.2:latest.
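
One thing worth checking here, offered as a suggestion rather than something from the original comment: the 404 means the "model" value in config_list does not match any tag this Ollama instance has actually pulled, so listing the locally available tags and copying one verbatim (for example llama3:latest or llama3.2:latest) may resolve it. A minimal sketch against Ollama's /api/tags endpoint, assuming the default local port:

import requests

# List the model tags this Ollama instance has pulled locally.
tags = requests.get("http://localhost:11434/api/tags").json()
for m in tags.get("models", []):
    print(m["name"])  # the "model" value in config_list must match one of these exactly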

Reference: github-starred/ollama#6094