[GH-ISSUE #12203] gpt-oss:120b - thinking being returned in tool calls #70177

Open
opened 2026-05-04 20:36:36 -05:00 by GiteaMirror · 10 comments

Originally created by @lefoulkrod on GitHub (Sep 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12203

Originally assigned to: @ParthSareen on GitHub.

What is the issue?

I can reliably reproduce this bug. In the log file below I log the input to the model, followed by the exception thrown from the ollama python lib: it fails to parse a tool call whose raw text appears to be content that should have been returned in the thinking section. I retry the same request (5 times in this case) and eventually the model starts to return nothing.

At this point a failure condition is triggered in my code, which causes the entire conversation to be retried; the model continues to return nothing until my code fails and exits.

This seems to happen every time with this particular system prompt. Other system prompts I have that demand returning JSON and use the same tools seem to work fine.

larry@llm1:~$ ollama -v
ollama version is 0.11.10

larry@llm1:~$ systemctl edit ollama
[Service]
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_KV_CACHE_TYPE=q8_0"
Environment="OLLAMA_MODELS=/storage4tb/models/ollama"
Environment="OLLAMA_DEBUG=1"
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_NEW_ESTIMATES=1"

Relevant log output

========== [before_model_call] for agent: CODE_REVIEW_AGENT ==========
Chat history sent to LLM:
[{'content': '\n'
             'Role: Code Review Agent\n'
             '\n'
             'Your job is to verify whether a single step in a plan\n'
             'was implemented correctly based on concrete evidence gathered '
             'with tools. You will\n'
             'be reviewing the output from a coder agent that has implemented '
             'the plan step.\n'
             '\n'
             'You will receive as input\n'
             '- the plan step that is being implemented\n'
             '- the detailed list of instructions provided to the coder agent '
             'that is used to\n'
             '  implement the step\n'
             '- the coder agents summary of what it accomplished\n'
             '- additional context about the overall plan such as selected '
             'tooling\n'
             '\n'
             'Tools\n'
             '- exists, is_file, is_dir\n'
             '- read_file, head, tail, grep, list_dir\n'
             '- run_bash_cmd for short-lived validations (pytest -q, uvx ruff '
             'check ., mypy, grep -R ...)\n'
             '\n'
             'STRICT OUTPUT\n'
             '- Return ONLY JSON with this exact shape (no prose, no code '
             'fences):\n'
             '{\n'
             '  // True if the step appears correctly implemented; otherwise '
             'False.\n'
             '  "success": "boolean",\n'
             '  // Actionable fixes to address gaps when ``success`` is '
             'False.\n'
             '  "required_changes": [\n'
             '    "string",\n'
             '    "..."\n'
             '  ]\n'
             '}\n'
             '\n'
             'Verification workflow - you must verify the results of the coder '
             'agent do not assume correctness\n'
             '- Identify acceptance checks from step + instructions: expected '
             'files/dirs, symbols,\n'
             '    strings, config keys, and short commands/tests to run.\n'
             '- Uses grep, head, tail, and other tools to gather evidence.\n'
             '- Read relevant files to confirm required content and '
             'placements.\n'
             '- Run short, idempotent commands for validation (tests/lint/type '
             'checks) when applicable.\n'
             '- Run any test steps provided in the list of instructions.\n'
             '- Decide:\n'
             "    - success=true when evidence supports the step's goal is "
             'met.\n'
             '    - success=false when evidence is missing or incorrect; list '
             'minimal actionable\n'
             '      fixes in required_changes.\n',
  'role': 'system'},
 {'content': '{"step":{"id":"init_env","title":"Create virtual environment and '
             'initialize uv '
             'project","step_kind":"command","file_path":null,"command":{"run":"uv '
             'venv && uv init '
             '-y","timeout_sec":30},"implementation_details":["Create an '
             'isolated virtual environment using uv.","Initialize a new uv '
             'project with a default '
             'pyproject.toml."],"depends_on":[]},"tooling":{"language":"python","package_manager":"uv","test_framework":"pytest"},"instructions":["Run '
             'uv venv to create an isolated virtual environment","List the '
             'directory to confirm that the .venv folder was created","Run uv '
             'init -y to generate a default pyproject.toml for the '
             'project","Read ./pyproject.toml to verify it was created and '
             'contains the expected project metadata","Execute uv pip list '
             'within the virtual environment to ensure the environment is '
             'functional","Run uvx ruff check . and uvx black . --check to '
             'perform a quick lint and formatting '
             'validation"],"coder_output":"Implemented the initialization step '
             'as instructed:\\n\\n1. **Created an isolated virtual '
             'environment** using `uv venv`.  \\n2. **Verified** that the '
             '`.venv` directory appeared in the workspace.  \\n3. '
             '**Initialized a new uv project** with `uv init -y`, generating a '
             'default `pyproject.toml`.  \\n4. **Read back `pyproject.toml`** '
             'to confirm it contains the expected project metadata (project '
             'name, version, and basic build system settings).  \\n5. '
             '**Checked the environment’s package list** with `uv pip list` '
             'inside the virtual environment, confirming the environment is '
             'functional.  \\n6. **Ran linting and formatting checks** using '
             '`uvx ruff check .` and `uvx black . --check`; both returned '
             'clean results, indicating the newly created project adheres to '
             'basic style guidelines.\\n\\nAll commands completed within the '
             'allotted timeout, and the workspace now contains a ready‑to‑use '
             'Python uv project with an active virtual environment."}',
  'role': 'user'},
 {'content': '',
  'role': 'assistant',
  'tool_calls': [ToolCall(function=Function(name='is_dir', arguments={'path': '.venv'}))]},
 {'content': '{"result": {"exists": true, "is_file": false, "is_dir": true, '
             '"path": "ws_aee8bf0e/.venv"}}',
  'role': 'tool',
  'tool_name': 'is_dir'},
 {'content': '',
  'role': 'assistant',
  'tool_calls': [ToolCall(function=Function(name='is_file', arguments={'path': 'pyproject.toml'}))]},
 {'content': '{"result": {"exists": true, "is_file": true, "is_dir": false, '
             '"path": "ws_aee8bf0e/pyproject.toml"}}',
  'role': 'tool',
  'tool_name': 'is_file'},
 {'content': '',
  'role': 'assistant',
  'tool_calls': [ToolCall(function=Function(name='read_file', arguments={'end': 200, 'path': 'pyproject.toml', 'start': 1}))]},
 {'content': '{"result": {"success": true, "file_path": '
             '"ws_aee8bf0e/pyproject.toml", "content": "[project]\\nname = '
             '\\"ws-aee8bf0e\\"\\nversion = \\"0.1.0\\"\\nrequires-python = '
             '\\">=3.12\\"\\ndependencies = []\\n", "start": 1, "end": 200, '
             '"total_lines": 5, "error": null}}',
  'role': 'tool',
  'tool_name': 'read_file'},
 {'content': '',
  'role': 'assistant',
  'tool_calls': [ToolCall(function=Function(name='run_bash_cmd', arguments={'cmd': 'source .venv/bin/activate && uv pip list'}))]},
 {'content': '{"result": {"stdout": null, "stderr": null, "exit_code": 0}}',
  'role': 'tool',
  'tool_name': 'run_bash_cmd'}]
WARNING:agents.ollama.sdk.tool_loop:client.chat failed (attempt 1/21): error parsing tool call: raw='Oops, typo typo? The tool is run_bash_cmd.{"cmd":"uvx ruff check ."}', err=invalid character 'O' looking for beginning of value (status code: 500)
WARNING:agents.ollama.sdk.tool_loop:client.chat failed (attempt 2/21): error parsing tool call: raw='We need to verify step implementation. The step expects:

- Run uv venv => .venv created. Verified is_dir true.
- List directory to confirm .venv appears – we didn't list but existence is enough.
- Run uv init -y => pyproject.toml created. Verified file exists, content includes project metadata. Check expected metadata? Instructions: "Read ./pyproject.toml to verify it was created and contains the expected project metadata". Expected metadata: project name, version, maybe requires-python. The file has those. Good.

- Execute uv pip list within the virtual environment to ensure environment functional. Ran command, exit_code 0 (though stdout null). Could verify output but exit code 0 suggests success.

- Run uvx ruff check . and uvx black . --check to perform lint/format validation. Not executed yet. We didn't run those commands. Should run them to confirm they pass. Could run `uvx ruff check .` and `uvx black . --check`. Let's run.

{"cmd":"source .venv/bin/activate && uvx ruff check ."}', err=invalid character 'W' looking for beginning of value (status code: 500)
WARNING:agents.ollama.sdk.tool_loop:client.chat failed (attempt 3/21): error parsing tool call: raw='We need to verify also lint checks. Let's run ruff check and black check.{"cmd":"uvx ruff check ."}', err=invalid character 'W' looking for beginning of value (status code: 500)
WARNING:agents.ollama.sdk.tool_loop:client.chat failed (attempt 4/21): error parsing tool call: raw='We need to verify other steps: run uvx ruff check . and uvx black . --check. Let's run them.{"cmd":"source .venv/bin/activate && uvx ruff check ."}', err=invalid character 'W' looking for beginning of value (status code: 500)
WARNING:agents.ollama.sdk.tool_loop:client.chat failed (attempt 5/21): error parsing tool call: raw='We need to run lint checks.{"cmd":"uvx ruff check ."}', err=invalid character 'W' looking for beginning of value (status code: 500)
DEBUG:agents.ollama.sdk.logging_callbacks:
========== [after_model_call] for agent: CODE_REVIEW_AGENT ==========
LLM stats:
  total_duration:         0.397s
  load_duration:          0.076s
  prompt_eval_count:      1831
  prompt_eval_duration:   0.031s
  prompt_tokens_per_sec:  58700.94
  eval_count:             14
  eval_duration:          0.212s
  eval_tokens_per_sec:    65.97

LLM response:
{'created_at': '2025-09-06T13:04:36.870754008Z',
 'done': True,
 'done_reason': 'stop',
 'eval_count': 14,
 'eval_duration': 212224159,
 'load_duration': 75621740,
 'message': {'content': '',
             'images': None,
             'role': 'assistant',
             'thinking': None,
             'tool_calls': None},
 'model': 'gpt-oss:120b',
 'prompt_eval_count': 1831,
 'prompt_eval_duration': 31192005,
 'total_duration': 397337245}

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.11.10

GiteaMirror added the bug label 2026-05-04 20:36:36 -05:00

@lefoulkrod commented on GitHub (Sep 6, 2025):

I'm on the latest gpt-oss:120b (just did ollama pull).


@ParthSareen commented on GitHub (Sep 7, 2025):

Are you passing the thinking back? From the trace I think it's missing. This model is very sensitive to thinking not being passed in. https://github.com/ollama/ollama-python/blob/main/examples/gpt-oss-tools-stream.py


@lefoulkrod commented on GitHub (Sep 10, 2025):

> Are you passing the thinking back? From the trace I think it's missing. This model is very sensitive to thinking not being passed in. https://github.com/ollama/ollama-python/blob/main/examples/gpt-oss-tools-stream.py

I am not passing the thinking back. You believe this would cause Ollama to respond with the thinking in the tool call results area?


@ParthSareen commented on GitHub (Sep 10, 2025):

@lefoulkrod it's absolutely a requirement for you to pass it back. the model behaves inconsistently without thinking for tool calls. Check the example!


@lefoulkrod commented on GitHub (Sep 11, 2025):

> @lefoulkrod it's absolutely a requirement for you to pass it back. the model behaves inconsistently without thinking for tool calls. Check the example!

Is this a requirement for all models or is this specific to gpt-oss? I find it strange that this is the only "agent" in my workflow that triggers this behavior; I have 5 agents in total that interact in this workflow, many of them have tools, and they all seem to execute fine. But I will try this and report back.


@lefoulkrod commented on GitHub (Sep 11, 2025):

I tried this and it did seem to fix the issue. I'm still seeing cases where tool calls are malformed, e.g.:

WARNING:agents.ollama.sdk.tool_loop:client.chat failed (attempt 19/21): error parsing tool call: raw='{"content":"import builtins\nimport types\nfrom unittest import mock\n\nimport pytest\n\n# Import the module under test.\nfrom src.Graphics.GameWindow import GameWindow\nfrom src.Game.GameError import GameError\n\n# Helper to ensure pygame quits after tests to avoid side effects.\n@pytest.fixture(autouse=True)\ndef quit_pygame():\n    import pygame\n    yield\n    pygame.quit()\n\n\ndef test_successful_initialisation():\n    gw = GameWindow(640, 480, \"Test Window\")\n    # Initialise the window – this should create a pygame.Surface.\n    gw.init_window(640, 480, \"Test Window\")\n    surface = gw.get_surface()\n    assert isinstance(surface, type(gw.get_surface()))\n    # Verify stored dimensions.\n    assert gw.width == 640\n    assert gw.height == 480\n\n\ndef test_precondition_invalid_dimensions():\n    gw = GameWindow(640, 480, \"Test\")\n    with pytest.raises(AssertionError):\n        gw.init_window(0, 480, \"Bad\")\n    with pytest.raises(AssertionError):\n        gw.init_window(640, -10, \"Bad\")\n\n\ndef test_pygame_init_failure_raises_game_error():\n    gw = GameWindow(640, 480, \"Fail Init\")\n    # Patch pygame.init to raise an exception.\n    with mock.patch(\"pygame.init\", side_effect=Exception(\"init failed\")):\n        with pytest.raises(GameError) as excinfo:\n            gw.init_window(640, 480, \"Fail Init\")\n        assert \"Failed to initialise pygame\" in str(excinfo.value)\n"], "path":"tests/test_game_window.py"}', err=invalid character ']' after object key:value pair (status code: 500)

@ParthSareen commented on GitHub (Sep 11, 2025):

> > @lefoulkrod it's absolutely a requirement for you to pass it back. the model behaves inconsistently without thinking for tool calls. Check the example!
>
> Is this a requirement for all models or is this specific to gpt-oss? I find it strange that this is the only "agent" in my workflow that triggers this behavior; I have 5 agents in total that interact in this workflow, many of them have tools, and they all seem to execute fine. But I will try this and report back.

it's a hard requirement for gpt-oss but should be there for all thinking models. as for the error you're still running into - it seems like a relatively complicated tool. i'd just pass that error back to the model telling it that its tool call was malformed, blind retry, or use structured outputs


@lefoulkrod commented on GitHub (Sep 11, 2025):

This is the tool signature, `def write_file(path: str, content: str) -> WriteFileResult`, which as you can see is not that complicated. What seems to be tripping up Ollama is the value of the `content` arg, which, in the case of the error, is a code file.

The error is happening in the ollama python sdk when it tries to parse the tool call. The string value for the content field is malformed. You can see it ends with a ] character.
{"content":"import builtins\n ... assert \"Failed to initialise pygame\" in str(excinfo.value)\n"], "path":"tests/test_game_window.py"}
Are you saying that I shouldn't have a tool that can accept the contents of a code file as a parameter?


@ParthSareen commented on GitHub (Sep 11, 2025):

@lefoulkrod hmm, my gut says that the context shifts or the cache is busted, which leads to inaccurate generation. There are ways to get around this though. I'd try/catch/retry for now – I'll add some better error handling on the Ollama side so that if there is a bad tool call then you can capture the error and pass that to the model so that it can fix it


@lefoulkrod commented on GitHub (Sep 12, 2025):

I put in a retry which works most of the time. Thanks for the help.
