[GH-ISSUE #8090] Add an ollama example that enables users to chat with a code generation model and then tests the code generated by the model #51682

Open
opened 2026-04-28 20:44:54 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @jagane on GitHub (Dec 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8090

Often, the code generated by code generation models such as qwen2.5-coder:7b is not perfect; it may not even be syntactically correct. I am proposing to add an example program, possibly derived from the python-simplechat example, that extracts the code generated by the model and runs/tests it. The first iteration would only know how to run Python programs.

python-code-iterate seems like a reasonable name for this example program. The proposed workflow:
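The extraction step could be handled with a small helper that pulls the first fenced code block out of the assistant message. This is a minimal sketch, assuming the model wraps its code in Markdown fences (the function name and fallback behavior are illustrative, not part of any existing example):

```python
import re

def extract_python_code(assistant_message: str) -> str:
    """Return the first fenced Python code block from a chat response.

    Falls back to returning the whole message, since some models
    reply with bare code and no Markdown fences.
    """
    match = re.search(r"```(?:python)?\n(.*?)```", assistant_message, re.DOTALL)
    if match:
        return match.group(1)
    return assistant_message
```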

  1. The user enters a prompt, e.g. "write a python program to quantize a model stored in my local dir /home/userabc/model1".
  2. The model generates code for this task and provides it as part of its response, i.e. the 'assistant' message.
  3. Our new program python-code-iterate then asks the user whether to run the program and check its output.
  4. If the user says yes, python-code-iterate uses a subprocess to run the program and checks the return code, stdout, and stderr.
  5. If the return code is non-zero, the stderr contents are added as a 'user' message and the chat continues.
  6. Repeat steps 2 through 5 until satisfactory results are obtained.
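Steps 4 and 5 could look roughly like the sketch below: run the generated code in a subprocess and, on failure, package stderr as the follow-up 'user' message. The helper names (run_python_code, error_feedback) are hypothetical, and the surrounding chat loop (calling the ollama client, as in python-simplechat) is omitted:

```python
import os
import subprocess
import sys
import tempfile

def run_python_code(code: str, timeout: int = 30):
    """Step 4: run the extracted code in a subprocess and return
    (returncode, stdout, stderr) so failures can be fed back."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return result.returncode, result.stdout, result.stderr
    finally:
        os.unlink(path)

def error_feedback(stderr: str) -> dict:
    """Step 5: wrap the stderr output as a follow-up 'user' message,
    so the model can be asked to repair its own code."""
    return {
        "role": "user",
        "content": (
            "The program exited with a non-zero status. "
            f"Here is the stderr output; please fix the code:\n{stderr}"
        ),
    }
```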
GiteaMirror added the feature request label 2026-04-28 20:44:54 -05:00

Reference: github-starred/ollama#51682