[GH-ISSUE #12399] Running Ollama Cloud with Pydantic AI #70295

Closed
opened 2026-05-04 20:58:18 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @knana1662 on GitHub (Sep 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12399

Originally assigned to: @BruceMacD on GitHub.

What is the issue?

I have the code below, where I am using Ollama Cloud with Pydantic AI. Please assist; I am currently getting the error shown underneath it:

CODE

from pydantic_ai.models.openai import OpenAIChatModel  # Updated class name
from pydantic_ai.providers.openai import OpenAIProvider  # Correct import
from pydantic_ai import Agent

# Ollama running with OpenAI-compatible endpoint (Ollama Cloud)
ollama_cloud_gpt_oss_120b = OpenAIChatModel(
    model_name="gpt-oss:120b",
    provider=OpenAIProvider(
        base_url="https://ollama.com/v1",   # OpenAI-compatible endpoint
        api_key="x.y.z"                       # API key if required
    )
)

# --- Usage Example ---
agent = Agent(ollama_cloud_gpt_oss_120b)

response = agent.run_sync("Why is the sky blue?")  # Use run_sync instead of run
print(response.output)  # Use response.output instead of response.data

ERROR

C:\Users\NanaKwameAsanteDanso>python -u "c:\Users\NanaKwameAsanteDanso\test_ollama.py"
Traceback (most recent call last):
  File "C:\Users\NanaKwameAsanteDanso\OneDrive - Dynamic Data Solutions Ltd\Desktop\core\Lib\site-packages\pydantic_ai\models\openai.py", line 523, in _process_response
    response = chat.ChatCompletion.model_validate(response.model_dump())
  File "C:\Users\NanaKwameAsanteDanso\OneDrive - Dynamic Data Solutions Ltd\Desktop\core\Lib\site-packages\pydantic\main.py", line 705, in model_validate
    return cls.__pydantic_validator__.validate_python(
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
        obj, strict=strict, from_attributes=from_attributes, context=context, by_alias=by_alias, by_name=by_name
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
pydantic_core._pydantic_core.ValidationError: 3 validation errors for ChatCompletion
id
  Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
choices.0.index
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/int_type
object
  Input should be 'chat.completion' [type=literal_error, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/literal_error

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\Users\NanaKwameAsanteDanso\test_ollama.py", line 17, in <module>
    response = agent.run_sync("Why is the sky blue?")  # Use run_sync instead of run
  File "C:\Users\NanaKwameAsanteDanso\OneDrive - Dynamic Data Solutions Ltd\Desktop\core\Lib\site-packages\pydantic_ai\agent\abstract.py", line 317, in run_sync
    return get_event_loop().run_until_complete(
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
        self.run(
        ^^^^^^^^^
    ...<12 lines>...
        )
        ^
    )
    ^
  File "C:\Users\NanaKwameAsanteDanso\AppData\Local\Programs\Python\Python313\Lib\asyncio\base_events.py", line 725, in run_until_complete
    return future.result()
           ~~~~~~~~~~~~~^^
  File "C:\Users\NanaKwameAsanteDanso\OneDrive - Dynamic Data Solutions Ltd\Desktop\core\Lib\site-packages\pydantic_ai\agent\abstract.py", line 218, in run
    async for node in agent_run:
    ...<4 lines>...
                await event_stream_handler(_agent_graph.build_run_context(agent_run.ctx), stream)
  File "C:\Users\NanaKwameAsanteDanso\OneDrive - Dynamic Data Solutions Ltd\Desktop\core\Lib\site-packages\pydantic_ai\run.py", line 149, in __anext__
    next_node = await self._graph_run.__anext__()
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\NanaKwameAsanteDanso\OneDrive - Dynamic Data Solutions Ltd\Desktop\core\Lib\site-packages\pydantic_graph\graph.py", line 758, in __anext__
    return await self.next(self._next_node)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\NanaKwameAsanteDanso\OneDrive - Dynamic Data Solutions Ltd\Desktop\core\Lib\site-packages\pydantic_graph\graph.py", line 731, in next
    self._next_node = await node.run(ctx)
                      ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\NanaKwameAsanteDanso\OneDrive - Dynamic Data Solutions Ltd\Desktop\core\Lib\site-packages\pydantic_ai\_agent_graph.py", line 399, in run
    return await self._make_request(ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\NanaKwameAsanteDanso\OneDrive - Dynamic Data Solutions Ltd\Desktop\core\Lib\site-packages\pydantic_ai\_agent_graph.py", line 441, in _make_request
    model_response = await ctx.deps.model.request(message_history, model_settings, model_request_parameters)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\NanaKwameAsanteDanso\OneDrive - Dynamic Data Solutions Ltd\Desktop\core\Lib\site-packages\pydantic_ai\models\openai.py", line 399, in request
    model_response = self._process_response(response)
  File "C:\Users\NanaKwameAsanteDanso\OneDrive - Dynamic Data Solutions Ltd\Desktop\core\Lib\site-packages\pydantic_ai\models\openai.py", line 525, in _process_response
    raise UnexpectedModelBehavior(f'Invalid response from OpenAI chat completions endpoint: {e}') from e
pydantic_ai.exceptions.UnexpectedModelBehavior: Invalid response from OpenAI chat completions endpoint: 3 validation errors for ChatCompletion
id
  Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
choices.0.index
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/int_type
object
  Input should be 'chat.completion' [type=literal_error, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/literal_error

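All three validation failures point to required `ChatCompletion` fields (`id`, `object`, `choices[*].index`) coming back as null from the server. The following is an illustrative sketch, not the actual pydantic-ai code, mirroring those three checks on a parsed response dict so the malformed payload can be spotted before validation blows up:

```python
def find_missing_fields(payload: dict) -> list[str]:
    """Report which of the three fields from the traceback are missing or null."""
    problems = []
    if not isinstance(payload.get("id"), str):
        problems.append("id")
    if payload.get("object") != "chat.completion":
        problems.append("object")
    for i, choice in enumerate(payload.get("choices", [])):
        if not isinstance(choice.get("index"), int):
            problems.append(f"choices.{i}.index")
    return problems

# A response shaped like the one in the traceback (all three fields null):
bad = {
    "id": None,
    "object": None,
    "choices": [{"index": None, "message": {"role": "assistant", "content": "..."}}],
}
print(find_missing_fields(bad))  # → ['id', 'object', 'choices.0.index']
```

A well-formed response (string `id`, `object` equal to `"chat.completion"`, integer `index`) would return an empty list.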

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the cloud and bug labels 2026-05-04 20:58:18 -05:00

@pdevine commented on GitHub (Sep 24, 2025):

As a workaround, can you use gpt-oss:120b-cloud as the model name and http://localhost:11434/v1 as the base_url?
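Applied to the script from the report, the suggested workaround would look roughly like this. It is a sketch assuming pydantic-ai and a running local Ollama install; the imports are deferred into the function so the module also loads where pydantic-ai is absent, and the `api_key="ollama"` placeholder is an assumption (the OpenAI client requires some key, which a local Ollama server ignores):

```python
MODEL_NAME = "gpt-oss:120b-cloud"        # cloud-proxied model tag
BASE_URL = "http://localhost:11434/v1"   # local Ollama OpenAI-compatible endpoint

def build_cloud_agent():
    # Imports kept inside the function so this module imports cleanly
    # even in environments without pydantic-ai installed.
    from pydantic_ai import Agent
    from pydantic_ai.models.openai import OpenAIChatModel
    from pydantic_ai.providers.openai import OpenAIProvider

    model = OpenAIChatModel(
        model_name=MODEL_NAME,
        provider=OpenAIProvider(base_url=BASE_URL, api_key="ollama"),
    )
    return Agent(model)

# Usage (requires a running local Ollama with the -cloud model available):
#   agent = build_cloud_agent()
#   print(agent.run_sync("Why is the sky blue?").output)
```

The key change is that requests go to the local server, which proxies the `-cloud` model to Ollama Cloud, sidestepping the malformed response from `https://ollama.com/v1`.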


@knana1662 commented on GitHub (Sep 25, 2025):

I know that, but I want to run it from the cloud without pulling the model and using it locally. @pdevine


@pdevine commented on GitHub (Sep 25, 2025):

@knana1662 the gpt-oss:120b-cloud model will reroute the traffic automatically to the cloud. You call it locally, but it proxies everything for you.


@BruceMacD commented on GitHub (Sep 25, 2025):

Thanks for the report @knana1662. I've figured out the problem: it is in our OpenAI response format. I'll be deploying a fix soon and will let you know when it lands.


@knana1662 commented on GitHub (Sep 27, 2025):

Thank you @BruceMacD. Keep me posted when the fix is released, as I don't want to run the Ollama Cloud models locally.


@ganakee commented on GitHub (Sep 28, 2025):

This might be a similar issue when using Rust with the async-openai = "0.29.3" crate and OpenAI compatibility.
When executing against the Ollama Cloud models, I get:
Error: JSONDeserialize(Error("expected value", line: 8, column: 1))
The same error occurs no matter what type of completion is attempted.

The error is thrown when executing the prompt:
let response = client.images().create(request).await?;


@BruceMacD commented on GitHub (Sep 29, 2025):

This should be fixed now. Please let me know if you have any more issues.
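One way to verify a fix like this is to call the chat-completions endpoint directly and check that `id`, `object`, and `choices[0].index` are now populated in the raw JSON. A stdlib-only sketch follows; the endpoint URL and the API-key placeholder are taken from the report above and should be treated as assumptions:

```python
import json
import urllib.request

ENDPOINT = "https://ollama.com/v1/chat/completions"  # OpenAI-compatible cloud endpoint

def build_request(api_key: str) -> urllib.request.Request:
    """Build the raw chat-completions request so the unparsed JSON can be inspected."""
    body = json.dumps({
        "model": "gpt-oss:120b",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    }).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("YOUR_OLLAMA_API_KEY")  # hypothetical key placeholder
    # Uncomment to actually send the request and inspect the previously-null fields:
    # with urllib.request.urlopen(req) as resp:
    #     raw = json.load(resp)
    #     print(raw.get("id"), raw.get("object"), raw["choices"][0].get("index"))
    print(req.full_url, req.get_method())
```

Inspecting the raw body this way separates "the server returns malformed JSON" from "the client library mis-parses it", which is the distinction that mattered in this issue.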

Reference: github-starred/ollama#70295