[PR #8120] [CLOSED] Add an ollama example that enables users to chat with a code generation model and then tests the code generated by the model #8090 #59328

Closed
opened 2026-04-29 14:16:01 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/8120
Author: @jagane
Created: 12/16/2024
Status: Closed

Base: main ← Head: main


📝 Commits (9)

  • 6a104d7 use conda environment if possible; create tempfile with .py suffix
  • 5968823 change connect ip address from 0.0.0.0 to 127.0.0.1 and flush stdin for win32
  • 4046801 new readme.md
  • d78f3d8 Merge branch 'ollama:main' into main
  • 30f6774 more verbose message when prompting user for input
  • d544837 added a requirements.txt with just the reqests package
  • 3c2cf37 shorten the context send to the LLM by sending only three messages
  • 845d4d8 Merge branch 'ollama:main' into main
  • 28b05d9 print stats at the end of chat repsonse from the server

📊 Changes

3 files changed (+193 additions, -0 deletions)

View changed files

examples/python-code-iterate/codeiterate.py (+133 -0)
examples/python-code-iterate/readme.md (+59 -0)
examples/python-code-iterate/requirements.txt (+1 -0)

📄 Description

Oftentimes, code generated by code generation models such as qwen2.5-coder:7b is not perfect. It may not even be syntactically correct. I am proposing to add an example program, possibly derived from the python-simplechat example, that extracts the code generated by the model and runs/tests it. The first iteration would only know how to run Python programs.
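For illustration, extracting the generated code from the assistant message could be as simple as grabbing the first fenced code block out of the response text. This helper is only a sketch; the name extract_code and the exact fence handling are assumptions and may not match codeiterate.py:

```python
import re

def extract_code(assistant_message: str):
    """Return the contents of the first fenced code block in the reply, or None."""
    # Match an opening ``` (optionally ```python) and capture everything up to the closing ```.
    match = re.search(r"```(?:python)?\s*\n(.*?)```", assistant_message, re.DOTALL)
    return match.group(1) if match else None
```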

The name python-code-iterate sounds like a reasonable name for this example program.

1. The user enters a prompt, e.g. "write a python program to quantize a model stored in my local dir /home/userabc/model1".
2. The model generates code for this task and provides it as part of its response, i.e. the 'assistant' message.
3. Our new program, python-code-iterate, then asks the user whether to run the program and check its output.
4. If the user says yes, python-code-iterate runs the program in a subprocess and checks the return code, stdout, and stderr.
5. If the return code is non-zero, the stderr contents are added as a user message and the chat continues.
6. Steps 2 through 5 repeat until satisfactory results are obtained (a minimal sketch of this loop follows the list).
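A minimal sketch of this loop, reusing the hypothetical extract_code helper above, might look like the following. The endpoint, model name, and overall structure are assumptions chosen for illustration and are not copied from codeiterate.py:

```python
import subprocess
import sys
import tempfile

import requests

OLLAMA_URL = "http://127.0.0.1:11434/api/chat"  # assumed local Ollama server on the default port
MODEL = "qwen2.5-coder:7b"

def chat(messages):
    """Send the conversation to the Ollama chat API and return the assistant message."""
    resp = requests.post(OLLAMA_URL, json={"model": MODEL, "messages": messages, "stream": False})
    resp.raise_for_status()
    return resp.json()["message"]

def run_code(code: str):
    """Write the generated code to a temporary .py file and run it in a subprocess."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return proc.returncode, proc.stdout, proc.stderr

messages = [{"role": "user", "content": input("Prompt: ")}]
while True:
    assistant = chat(messages)
    messages.append(assistant)
    code = extract_code(assistant["content"])  # hypothetical helper sketched earlier
    if code is None:
        print(assistant["content"])
        break
    if input("Run the generated program and check its output? [y/N] ").lower() != "y":
        break
    returncode, stdout, stderr = run_code(code)
    print(stdout)
    if returncode == 0:
        print("Program exited successfully.")
        break
    # Non-zero return code: feed stderr back to the model as a user message and continue.
    messages.append({"role": "user", "content": f"The program failed with:\n{stderr}\nPlease fix the code."})
```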

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-29 14:16:01 -05:00

Reference: github-starred/ollama#59328