[GH-ISSUE #15250] Qwen3.5-35b with OpenClaw sometimes throws an error. #35515

Closed
opened 2026-04-22 20:04:58 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @chigkim on GitHub (Apr 3, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15250

Originally assigned to: @drifkin on GitHub.

What is the issue?

If I use qwen3.5:35b-a3b-q8_0 with OpenClaw, sometimes I get this error:

Relevant log output

```shell
400 input[135]: json: cannot unmarshal array into Go struct field ResponsesFunctionCallOutput.output of type string
```

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.20.0

GiteaMirror added the bug label 2026-04-22 20:04:58 -05:00

@mdrxy commented on GitHub (Apr 7, 2026):

We're seeing this from users of LangChain who point ChatOpenAI (with Responses API) at Ollama's /v1/responses endpoint. Tracking on our side at langchain-ai/langchain#34669.

Per the OpenAI Python SDK (auto-generated from the OpenAPI spec), function_call_output.output is typed as str | list[TextContent | ImageContent | FileContent] — arrays are valid. The relevant type definition from openai/types/responses/response_input_item_param.py:

```python
output: Required[Union[str, ResponseFunctionCallOutputItemListParam]]
# resolves to: str | list[ResponseInputTextContentParam | ResponseInputImageContentParam | ResponseInputFileContentParam]
```
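To make the two accepted shapes concrete, here is an illustrative pair of `function_call_output` input items (values are made up; field names follow the SDK types quoted above). Both are valid per the union type, but a server that types `output` as a bare string can only decode the first:

```python
# Illustrative payloads only -- values invented, field names per the
# OpenAI Responses input-item schema quoted above.
item_str = {
    "type": "function_call_output",
    "call_id": "call_123",
    "output": "72 degrees and sunny",          # plain-string variant
}
item_list = {
    "type": "function_call_output",
    "call_id": "call_123",
    "output": [                                 # array-of-parts variant
        {"type": "input_text", "text": "72 degrees and sunny"},
    ],
}

print(type(item_str["output"]).__name__)   # str
print(type(item_list["output"]).__name__)  # list
```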

Ollama's Go struct currently deserializes output as type string, which rejects the array variant.

Also related: https://github.com/ollama/ollama/issues/15250


@drifkin commented on GitHub (Apr 7, 2026):

Hi @chigkim, thanks for reporting, I've got a fix up at https://github.com/ollama/ollama/pull/15406 that we'll try to get released soon.

Out of curiosity, how do you have OpenClaw configured to use Ollama? Normally it goes through our native Ollama APIs, rather than the v1/responses compatibility layer that had this issue.


@chigkim commented on GitHub (Apr 8, 2026):

Thanks for the fix!
I set up OpenClaw inside a Docker container on one computer and access Ollama on another computer.
This way I can swap models quickly.
