[GH-ISSUE #1321] feat: code block execution #12444

Closed
opened 2026-04-19 19:22:53 -05:00 by GiteaMirror · 21 comments
Owner

Originally created by @zabirauf on GitHub (Mar 27, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/1321

Originally assigned to: @tjbck on GitHub.

**Is your feature request related to a problem? Please describe.**
Executing code allows the tool to be used for ideation and as a copilot in software development for brainstorming purposes. It also lets users, instead of asking the LLM a question and hoping for the correct answer, actually write code and run it to get the correct answer.

Eventually, having something like the code interpreter shared in #851 would be great, but I think that may require much more substantial changes to get right, e.g. some form of function-calling pattern that works across multiple models to pick the code interpreter, and some form of chaining to respond based on those results.

This can be a good stepping stone to eventually build that.

**Describe the solution you'd like**
Have a 'Run Code' button in the code block which sends the code to a new backend API, e.g. `/code/api/v1/run`, which executes that code in a reasonably secure manner and responds with the result. To start, we could support only Python code.
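As a sketch of what that endpoint's handler might do (the function name, timeout, and response shape here are assumptions, and this direct `subprocess` approach has no isolation from the host, which is exactly the concern raised in the comments):

```python
import subprocess
import sys


def run_python_code(code: str, timeout_s: float = 5.0) -> dict:
    """Run a Python snippet in a subprocess and capture its output.

    NOTE: this offers no isolation from the host; it only illustrates
    the request/response shape a /code/api/v1/run endpoint might return.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return {
            "stdout": proc.stdout,
            "stderr": proc.stderr,
            "returncode": proc.returncode,
        }
    except subprocess.TimeoutExpired:
        return {"stdout": "", "stderr": "timeout", "returncode": -1}
```

A backend route would simply pass the request body's code through this helper and return the dict as JSON.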

**Additional context**

Here is a very basic prototype I whipped up: https://github.com/zabirauf/open-webui/pull/1/files
Want to get thoughts before I work on it further.

![2024-03-26_23-42-47](https://github.com/open-webui/open-webui/assets/1104560/1f1d690f-10d1-4246-bea0-6d051db280ec)

GiteaMirror added the enhancement, core labels 2026-04-19 19:22:53 -05:00

@slash-proc commented on GitHub (Mar 27, 2024):

Nice work, this looks very promising! Here are my thoughts:

Looking at your implementation, the one thing I don't feel comfortable with is running the code locally without any kind of isolation from the host system. It's conceivable that a user could have code written for them, or just paste code into the chat themselves, which is then run in Python through `subprocess`, and the host system is compromised. What I'm about to propose isn't necessarily a simple change, so I humbly request that you add a way to disable code execution in the admin panel.

I believe it would be best to use containers for code execution. They can be tracked, isolated, and then destroyed accordingly. Considering that open-webui can be started locally, in Docker, or on Kubernetes, what I'm proposing has interesting implications for how this feature would need to be implemented:

- Local - could just use the Docker Python bindings - no fuss
- Docker - DinD + the Docker Python bindings should work
- Podman/etc. - ?
- Kubernetes - a service account + the Kubernetes Python bindings to create/destroy pods should work
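For the Docker-based options, a minimal sketch of the locked-down container settings this could use, assuming the `docker` Python SDK; the image tag and resource limits are illustrative choices, not decisions from this thread:

```python
def sandbox_run_kwargs(code: str, image: str = "python:3.12-slim") -> dict:
    """Build keyword arguments for docker.DockerClient.containers.run()
    to execute a snippet in a throwaway, restricted container."""
    return {
        "image": image,
        "command": ["python", "-c", code],
        "remove": True,            # destroy the container when it exits
        "network_disabled": True,  # no network access from user code
        "mem_limit": "128m",       # cap memory
        "nano_cpus": 500_000_000,  # cap CPU at 0.5 cores
        "user": "nobody",          # drop privileges inside the container
        "read_only": True,         # read-only root filesystem
    }


# With the docker SDK installed and a daemon available, usage would be:
#   import docker
#   client = docker.from_env()
#   output = client.containers.run(**sandbox_run_kwargs("print('hi')"))
```

The same settings map fairly directly onto a Kubernetes pod spec (resource limits, `runAsUser`, `readOnlyRootFilesystem`) for the service-account approach.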


@zabirauf commented on GitHub (Mar 27, 2024):

Agreed on the need to isolate the code from the host; I'm also not comfortable with the current direct execution :). I was already leaning towards Docker, as it helps both with isolation from the host and with future support for other languages. I'll need to dig a bit more into DinD and weigh the pros/cons.

Good idea on also having an admin toggle to disable it. Will also track that.


@tjbck commented on GitHub (Mar 27, 2024):

Love this suggestion! I'll take a look and see how this could be safely implemented soon!


@zabirauf commented on GitHub (Mar 28, 2024):

@tjbck We can leverage [Piston](https://github.com/engineer-man/piston) for running code in an isolated manner. It's already used in various projects, supports multiple languages, and has a straightforward API.

I updated the prototype to use that ([main code here](https://github.com/zabirauf/open-webui/blob/8dff4e734113aa6fcbd70a5ec35bd689910350f6/backend/apps/code/main.py)). Here is how I anticipate the complete solution could work:

  1. Alongside the Open WebUI Docker container, also run the Piston container where its API is hosted
  2. Make sure the open-webui container can reach the Piston API running in its own container
  3. Have a mapping of supported languages and their corresponding versions
  4. When the user tries to execute a code block, detect the language and make sure the correct runtime is installed
  5. Piston takes care of executing the code, and we return its results back to open-webui
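The steps above can be sketched against Piston's v2 API (`POST /api/v2/execute`); the base URL and helper names here are assumptions:

```python
import json
import urllib.request

# Assumption: the Piston container is reachable at this address.
PISTON_URL = "http://localhost:2000"


def build_execute_payload(language: str, version: str, code: str) -> dict:
    """Shape of Piston's POST /api/v2/execute request body."""
    return {
        "language": language,
        "version": version,
        "files": [{"name": "main.py", "content": code}],
    }


def run_via_piston(language: str, version: str, code: str) -> dict:
    """Send the snippet to Piston and return its parsed JSON response,
    which includes a `run` object with stdout, stderr, and exit code."""
    payload = json.dumps(build_execute_payload(language, version, code)).encode()
    req = urllib.request.Request(
        f"{PISTON_URL}/api/v2/execute",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The language/version pair in step 3 would map code-block fences (e.g. ```` ```python ````) to runtimes Piston reports via its `GET /api/v2/runtimes` endpoint.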

@shinohara-rin commented on GitHub (Apr 2, 2024):

How about supporting sandboxed executions on cloud providers like [Modal](https://modal.com/use-cases/sandboxes)?

This way we can put security concerns aside and work on the actual code execution feature first, then implement the sandbox locally later.


@Taehui commented on GitHub (Apr 19, 2024):

Next to Python, it would be nice to support SQL. I think it would be very useful to many people.


@gruckion commented on GitHub (Apr 20, 2024):

Nice work. I am compiling research notes on this discussion item and further defining the roadmap for this epic:

https://github.com/open-webui/open-webui/discussions/1629


@gruckion commented on GitHub (Apr 20, 2024):

> Next to python, it would be nice to support SQL. I think it will be very useful to many people.

Please provide more information on how you would like / expect this to function.

Also what limitations can you think of?
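One hedged sketch of how SQL support could function: run statements against a throwaway in-memory SQLite database seeded from the user's uploaded data. All names here are hypothetical, not an existing Open WebUI API; obvious limitations include SQLite dialect differences and the need to sanitize table names.

```python
import sqlite3


def run_sql(statements, seed=None):
    """Execute SQL against a throwaway in-memory SQLite database.

    `seed` maps table names to lists of row tuples, standing in for
    uploaded data; columns are auto-named c0, c1, ...
    """
    conn = sqlite3.connect(":memory:")
    try:
        if seed:
            for table, rows in seed.items():
                width = len(rows[0])
                cols = ", ".join(f"c{i}" for i in range(width))
                conn.execute(f"CREATE TABLE {table} ({cols})")
                placeholders = ", ".join("?" * width)
                conn.executemany(
                    f"INSERT INTO {table} VALUES ({placeholders})", rows
                )
        cur = conn.execute(statements)
        return cur.fetchall()
    finally:
        conn.close()
```

Because the database lives only in memory and is discarded per request, the host-isolation concerns are much smaller than for arbitrary Python.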


@YangQiuEric commented on GitHub (May 7, 2024):

really need this feature lol


@tjbck commented on GitHub (May 19, 2024):

Partially implemented with 0.1.125.


@tizkovatereza commented on GitHub (May 24, 2024):

Hello, have you tried adding [E2B](https://e2b.dev/) for code execution?

It runs code in isolated sandboxes (full VM environments) in the cloud; it's [open-source](https://github.com/e2b-dev) and also has a special [SDK for code interpreter use cases](https://github.com/e2b-dev/code-interpreter).

It also supports any LLM, and here are some open-source examples of how it is used: https://github.com/e2b-dev/e2b-cookbook.

EDIT: Disclaimer, I'm from the E2B team! 😃


@bannert1337 commented on GitHub (May 29, 2024):

> Hello, have you tried adding [E2B](https://e2b.dev/) for code execution?
>
> It runs code in isolated sandboxes (full VM environment) in the cloud, it's [open-source](https://github.com/e2b-dev) and has also special [SDK for code interpreter use cases](https://github.com/e2b-dev/code-interpreter).
>
> It also supports any LLM and here are some open-source examples of how it is used: https://github.com/e2b-dev/e2b-cookbook.

Setting up the infrastructure yourself for self-hosting is currently not possible. They use Terraform to deploy the infrastructure, and according to their documentation, "right now it is deployable on GCP only". ([1](https://github.com/e2b-dev/infra?tab=readme-ov-file#deployment))

Therefore, I tend towards the solution proposed by @zabirauf in [this comment](https://github.com/open-webui/open-webui/issues/1321#issuecomment-2024392127).

**Open** WebUI — it should be self-hostable by everyone, without external dependencies.


@cyrpaut commented on GitHub (May 31, 2024):

I just tested the latest version and it is very promising, including the ability to plot graphs! Wonderful!

I have a question, though, and it may be difficult from an architecture point of view: I'd love the ability to interact with an uploaded file.

For example, I would like to upload a CSV file and prompt Python to read it with pandas, manipulate it, and plot it. Ultimately, I would love open-webui to be able to act like GPT-4's data-science feature while keeping my data private.

Is that doable in the foreseeable future?

Thanks for the work already done.


@justinh-rahb commented on GitHub (May 31, 2024):

@cyrpaut Pipelines is the solution for that; write a "Code Interpreter" pipeline: https://github.com/open-webui/pipelines


@tizkovatereza commented on GitHub (May 31, 2024):

> I just tested the last version and it is very promising. Including the possibility to plot graphs! Wonderful!
>
> I have a question though. And it may be difficult from the point of view of architecture. But I'd love the capacity to interact with the uploaded file.
>
> Let me exemplify, I would like to upload a CSV file and prompt python to read it in panda, manipulate it and plot it. Ultimately, I would love open-webui to be able to act as the data-science feature of GPT4 while keeping my data private.
>
> Is that doable in a forseable future?
>
> Thanks for the job already done.

Hey @cyrpaut, thank you! Happy to hear that.

You can definitely interact with uploaded files.

Here is an [example](https://github.com/e2b-dev/e2b-cookbook/tree/main/examples/upload-dataset-code-interpreter) with data upload where the agent uses the E2B code interpreter to analyze the uploaded CSV file.

If you want to try something that has a web UI, E2B is integrated in LlamaIndex as a tool, so you can just check this template and follow the installation steps in the readme to try it: https://github.com/run-llama/create-llama

To see E2B integrated into a proper enterprise-level product with a UI, some tools off the top of my mind are [Athena](https://www.athenaintelligence.ai/), [Gumloop](https://www.gumloop.com/), or [tinybio](https://www.tinybio.cloud/).

Is this what you asked for? Hope I helped.

T.


@justinh-rahb commented on GitHub (May 31, 2024):

Let's see a Pipelines x E2B integration then @tizkovatereza 🤘


@EtiennePerot commented on GitHub (Sep 3, 2024):

I have implemented an Open WebUI function for Python and Bash code block execution. It uses [gVisor](https://gvisor.dev) for sandboxing.

![Code execution function](https://github.com/EtiennePerot/open-webui-code-execution/blob/master/res/code-execution-function.gif?raw=true)

You can [install it here](https://github.com/EtiennePerot/open-webui-code-execution).


@sultanjulyan commented on GitHub (Sep 5, 2024):

> I have implemented an Open WebUI function for Python and Bash code block execution. It uses [gVisor](https://gvisor.dev) for sandboxing.
>
> You can [install it here](https://github.com/EtiennePerot/open-webui-code-execution).

![image](https://github.com/user-attachments/assets/da83371a-d39e-47ae-8637-5bafbf7a3946)

So, I created code with a function that exports a file from an AI response. After running the code and getting a successful response, I can't find the file. Where can I locate the file that was supposedly saved? @EtiennePerot


@EtiennePerot commented on GitHub (Sep 6, 2024):

@sultanjulyan In general, for discussion about this tool, please open issues on [the tool's repository](https://github.com/EtiennePerot/open-webui-code-execution) rather than in this bug.

But to answer your question: the code runs in a sandbox, so all traces of its execution are gone as soon as the code finishes running, and this file no longer exists anywhere. However, it _would_ be quite cool if code that produces files let you download them straight from the chat UI. That seems like a good feature request, worth filing an issue about. I [filed it here](https://github.com/EtiennePerot/open-webui-code-execution/issues/4).

**EDIT 2024-09-16**: This is now implemented.


@murong1 commented on GitHub (Nov 5, 2024):

> I have implemented an Open WebUI function for Python and Bash code block execution. It uses [gVisor](https://gvisor.dev) for sandboxing.
>
> You can [install it here](https://github.com/EtiennePerot/open-webui-code-execution).

How can data files be transferred into the sandbox for execution?


@ParisNeo commented on GitHub (Jan 27, 2025):

How do I install new libraries when using the in-place execution?

Reference: github-starred/open-webui#12444