Mirror of https://github.com/open-webui/open-webui.git, synced 2026-05-06 10:58:17 -05:00
[GH-ISSUE #11908] issue: Code interpreter doesn't work most of the time because its prompt doesn't yield the expected response #31930
Originally created by @allo- on GitHub (Mar 20, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/11908
Check Existing Issues
Installation Method
Pip Install
Open WebUI Version
v0.5.20
Ollama Version (if applicable)
No response
Operating System
Linux
Browser (if applicable)
No response
Confirmation
Expected Behavior
Prompt the LLM to create code in a form that can be interpreted.
Actual Behavior
The LLM is prompted in a way that returns a response that cannot be parsed by open-webui.
Steps to Reproduce
Logs & Screenshots
The spaces inside the three backticks were inserted by me so GitHub doesn't parse them.
Second:
Third:
(Just a newline between two backticks)
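For illustration only (this is not open-webui's actual parser), a hypothetical strict fence regex shows why outputs like the ones above fail to parse: a fence with spaces between the backticks, or a bare newline between two backticks, simply never matches:

```python
import re

# Hypothetical strict parser: a fence is exactly three backticks at the
# start of a line, an optional language tag, a body, and a closing fence.
FENCE_RE = re.compile(r"^```(\w*)\n(.*?)\n```$", re.DOTALL | re.MULTILINE)

def extract_code(text):
    """Return (language, body) pairs for well-formed fenced blocks."""
    return [(m.group(1), m.group(2)) for m in FENCE_RE.finditer(text)]

good = "```python\nprint(1 + 2)\n```"
spaced = "` ` `python\nprint(1 + 2)\n` ` `"  # spaces inside the fence

print(extract_code(good))    # one block found
print(extract_code(spaced))  # nothing found: the fence is malformed
```

Any extractor this strict silently drops malformed fences, which matches the symptom of the code interpreter doing nothing when the model's output is slightly off.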
Additional Information
The feature is rather new and I guess the prompt for tool usage should be improved.
Model versions:
I also tried some other models that I don't recall exactly, but the current Mistral would clearly be one of my favorites.
@rgaricano commented on GitHub (Mar 20, 2025):
It adds a space in the backticks. Maybe try reminding it how to write code? Prompt it, insisting that a code block is written with three backticks before and after?
@allo- commented on GitHub (Mar 20, 2025):
The space is added by me; otherwise GitHub interprets it as ending the code block in the issue. Remove the spaces in the three backticks to get the original output.

The problem is more that it mixes up code blocks, XML, `<details>` tags, and other things that are probably mentioned in the instructions for using the code interpreter. Sometimes it also adds XML inside a code block, but not as a comment, so execution fails with syntax errors.

I guess the tool prompt can still be improved, and errors where the output doesn't match the tool-call syntax should be caught (as well as possible), so that broken code blocks are not executed when it can be detected that the LLM didn't close all tags or nested them in the wrong order.
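A minimal sketch of the kind of pre-execution sanity check described here, assuming nothing about open-webui's internals: count the fence delimiters and verify that XML-like tags close in last-in-first-out order before handing code to the interpreter.

```python
import re

def looks_executable(text):
    """Hypothetical pre-execution check: reject LLM output whose code
    fences are unbalanced or whose XML-like tags are unclosed or
    nested in the wrong order."""
    # An odd number of ``` delimiters means a fence was never closed.
    if text.count("```") % 2 != 0:
        return False
    # Verify tags like <details>...</details> close in LIFO order.
    stack = []
    for m in re.finditer(r"</?([A-Za-z][\w-]*)\s*>", text):
        name = m.group(1)
        if m.group(0).startswith("</"):
            if not stack or stack.pop() != name:
                return False
        else:
            stack.append(name)
    return not stack  # leftover open tags also mean "don't execute"

print(looks_executable("<details>\n```python\nprint(2)\n```\n</details>"))  # True
print(looks_executable("```python\nprint(2)\n<details>\n```"))              # False
```

This deliberately errs on the side of refusing to run anything ambiguous, rather than executing a block that still contains stray markup.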
@rgaricano commented on GitHub (Mar 20, 2025):
OK, and what about indicating to it that you want the outputs as Markdown?
@allo- commented on GitHub (Mar 20, 2025):
I only asked the very basic question "Let's start by writing a Python program that adds two numbers." as a test, and hoped that the open-webui tool prompt would help the LLM answer in the right format.

Maybe you can tell me what goal you have in mind for the feature? If the goal is that it should (often) work automatically, I think it still needs fine-tuning, and I opened the issue to discuss and give feedback.

If the goal is that the user asks the LLM to follow the format themselves, it may need some documentation but would otherwise possibly be enough as it is (I would still test some prompt formats to give feedback on which ones got the feature to work well for me).
@rgaricano commented on GitHub (Mar 20, 2025):
Each model, each system, each config, and how they interact with each other... it's a learning curve that we all go through.

Yes, more docs would be great, especially with new options appearing every so often; this is a labour that is being done by everyone.
Thanks for the help too!!