[GH-ISSUE #4374] how to write script so that it will remember the last conversation . #28490

Closed
opened 2026-04-22 06:42:25 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @View-my-Git-Lab-krafi on GitHub (May 12, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4374

What is the issue?

You:: my name is rafi
AI: Nice to meet you, Rafi!
You:: what was my name
AI: I apologize, but I don't think we ever established a specific name for you in our conversation!

Whether I use
localhost:11434/api/generate or
http://localhost:11434/api/chat, I get the same result.

# Define API endpoint
$apiEndpoint = "http://localhost:11434/api/generate"

# Function to send prompt to API and receive response
function GetResponse {
    param (
        [string]$prompt
    )

    $payload = @{
        "model" = "llama3"
        "prompt" = $prompt
    } | ConvertTo-Json

    $response = Invoke-RestMethod -Uri $apiEndpoint -Method Post -Body $payload -ContentType "application/json"

    # Parse each JSON response and return the generated text
    $responseObjects = $response -split "`n" | ForEach-Object { $_ | ConvertFrom-Json }
    $generatedText = foreach ($responseObject in $responseObjects) {
        $responseObject.response
    }

    return ($generatedText -join " ")
}

# Initial prompt
Write-Output "AI: Hi there! I'm AI, nice to meet you! Is there something on your mind that you'd like to chat about or ask for help with? I'm here to listen and assist if I can."

# Main loop
while ($true) {
    # Prompt user for input
    $userInput = Read-Host "You"  # Read-Host appends ":" itself, so "You:" would display as "You::"

    # Send user input to API and receive response
    $responseText = GetResponse -prompt $userInput

    # Display response
    Write-Output "AI: $responseText"
}
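The script above sends each prompt in isolation, so the model never sees earlier turns. Ollama's `/api/chat` endpoint has no memory either; the client must resend the accumulated `messages` list every request. A minimal sketch in Python (assuming a local server on the default port; `build_payload` and `chat_once` are illustrative names):

```python
import json
import urllib.request

API_URL = "http://localhost:11434/api/chat"  # assumed default Ollama endpoint


def build_payload(model, history, user_input):
    """Append the new user turn and build a non-streaming chat payload."""
    history.append({"role": "user", "content": user_input})
    return {"model": model, "messages": history, "stream": False}


def chat_once(model, history, user_input):
    """One round trip; the full history is what lets the model 'remember'."""
    payload = build_payload(model, history, user_input)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    reply = data["message"]["content"]
    # Store the assistant turn too, or the next request loses it.
    history.append({"role": "assistant", "content": reply})
    return reply
```

The key point is that both the user turn and the assistant's reply go back into `history` before the next request.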

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

ollama version is 0.1.33

GiteaMirror added the bug label 2026-04-22 06:42:25 -05:00
Author
Owner
@XDesktopSoft commented on GitHub (May 12, 2024):

https://github.com/ollama/ollama/blob/main/docs/api.md#chat-request-with-history
Author
Owner

@View-my-Git-Lab-krafi commented on GitHub (May 12, 2024):

The only reply I got was: your name is rafi!
Why did it miss the first question?

$url = "http://localhost:11434/api/chat"
$json = '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "what is the color of sky, and my name  is rafi"
    },

    {
      "role": "user",
      "content": "what was my name?"
    }
  ]
}'

Invoke-RestMethod -Method Post -Uri $url -Body $json -ContentType 'application/json'
Read-Host -Prompt "Press Enter to exit"
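A likely reason the first question goes unanswered: the payload sends two user turns back-to-back in a single request, so the model tends to answer only the latest one. In an ongoing conversation each user turn is normally followed by the assistant's reply. A sketch of an alternating message list (the assistant content here is invented for illustration), with a small checker:

```python
# Illustrative message list: note the assistant turn between the two user turns.
messages = [
    {"role": "user", "content": "what is the color of sky, and my name is rafi"},
    {"role": "assistant", "content": "The sky is blue. Nice to meet you, Rafi!"},
    {"role": "user", "content": "what was my name?"},
]


def roles_alternate(msgs):
    """Return True when no two consecutive messages share the same role."""
    roles = [m["role"] for m in msgs]
    return all(a != b for a, b in zip(roles, roles[1:]))
```

With the assistant turn included, the model has the whole exchange in context when the follow-up question arrives.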

Author
Owner

@View-my-Git-Lab-krafi commented on GitHub (May 12, 2024):

If I store it as JSON, I don't know whether this is the correct way to do it or not:

import requests
import json

# Define the conversation history
conversation = []

# Define the API endpoint
api_url = 'http://localhost:11434/api/chat'

# Define the model to use
model_name = 'llama3'

while True:
    # Get user input
    user_input = input("You: ")
    
    # Add user input to the conversation history
    conversation.append({"role": "user", "content": user_input})
    
    # Send message to the chatbot API
    response = requests.post(api_url, json={"model": model_name, "messages": conversation})
    
    # Split the response by newline character
    response_parts = response.content.split(b'\n')
    
    # Process each part of the response
    for part in response_parts:
        # Decode the JSON object
        response_json = part.decode('utf-8')
        
        # Skip empty parts
        if not response_json.strip():
            continue
        
        # Parse the JSON object
        response_data = json.loads(response_json)
        
        # Print the content of the assistant's message
        
        print("AI:", response_data['message']['content'])
        
        # Add assistant's message to the conversation history
        conversation.append({"role": "assistant", "content": response_data['message']['content']})

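One subtlety in the loop above: with streaming (Ollama's default), each newline-delimited JSON chunk carries only a fragment of the reply, so appending every chunk as its own assistant message bloats the history and prints one `AI:` line per fragment. A sketch of merging the fragments into a single message first (assuming each line has the shape of Ollama's streamed chat chunks):

```python
import json


def merge_stream(lines):
    """Join the content fragments of streamed chat chunks into one string."""
    parts = []
    for raw in lines:
        raw = raw.strip()
        if not raw:
            continue  # skip blank lines between chunks
        chunk = json.loads(raw)
        parts.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break  # final chunk carries done=true
    return "".join(parts)
```

Then the loop appends one entry: `conversation.append({"role": "assistant", "content": merge_stream(response.content.split(b'\n'))})`.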

Author
Owner

@zanderlewis commented on GitHub (May 12, 2024):

if i store it as json i dont know is it a correct way to do it or not

Instead of using requests, just use the Ollama library (pip install ollama). There are people who have made projects that use a history and want it added to the Python examples.

Author
Owner

@View-my-Git-Lab-krafi commented on GitHub (May 12, 2024):

Can you give me a doc link where I can learn this properly?

Author
Owner

@zanderlewis commented on GitHub (May 12, 2024):

can you give me doc link, where i can learn properly

I have an example in one of my repos; here is the link: https://github.com/WolfTheDeveloper/llama3-ollama/blob/master/cli.py

Author
Owner

@View-my-Git-Lab-krafi commented on GitHub (May 12, 2024):

Thanks, you may love it: WhatsApp style.

import sys
import threading
from datetime import datetime
import json
from PySide6.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget, QTextEdit, QPushButton, QHBoxLayout, QSizePolicy
from PySide6.QtGui import QColor, QTextCursor
import ollama 

MODEL = 'llama3'

class ChatWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Chat Window")
        self.setGeometry(100, 100, 600, 800) 
        
        self.central_widget = QWidget()
        self.setCentralWidget(self.central_widget)
        
        self.layout = QVBoxLayout()
        self.central_widget.setLayout(self.layout)
        
        self.chat_display = QTextEdit()
        self.chat_display.setReadOnly(True) 
        self.chat_display.setStyleSheet(
            "background-color: #e6e6e6; "
            "border: none; "
            "padding: 10px;"
            "border-radius: 10px;"
        )
        self.layout.addWidget(self.chat_display)
        
        self.input_layout = QHBoxLayout()
        self.layout.addLayout(self.input_layout)
        
        self.text_edit = QTextEdit()
        self.text_edit.setPlaceholderText("Type your message here...")
        self.text_edit.setStyleSheet(
            "background-color: #ffffff; "
            "border: none; "
            "padding: 10px;"
            "border-radius: 10px;"
        )
        self.text_edit.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Fixed)
        self.input_layout.addWidget(self.text_edit)
        
        self.send_button = QPushButton("Send")
        self.send_button.setStyleSheet(
            "background-color: #128C7E; "
            "color: #ffffff; "
            "border: none; "
            "padding: 10px;"
            "border-radius: 10px;"
        )
        self.send_button.clicked.connect(self.send_message)
        self.input_layout.addWidget(self.send_button)
        
        self.model = MODEL
        self.history = []

        self.user_color = QColor(0, 128, 0)  # Green
        self.assistant_color = QColor(0, 0, 255)  # Blue

    def append_message(self, role, message, color):
        formatted_message = f"<b>{role.capitalize()}:</b> {message}"
        cursor = self.chat_display.textCursor()
        cursor.movePosition(QTextCursor.End)
        cursor.insertHtml(f'<font color="{color.name()}">{formatted_message}</font><br>')
        self.chat_display.setTextCursor(cursor)

    def send_message(self):
        user_input = self.text_edit.toPlainText().strip()
        if not user_input: 
            return
        
        self.text_edit.clear()
        
        self.history.append({"role": "user", "content": user_input, "timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S")})
        
        threading.Thread(target=self.send_and_receive_message).start()

    def send_and_receive_message(self):
        response = ollama.chat(model=self.model, messages=self.history)
        assistant_message = response['message']['content']

        # Store the assistant turn too, so the next request keeps the full context.
        self.history.append({"role": "assistant", "content": assistant_message})

        # Note: updating Qt widgets from a worker thread is not thread-safe;
        # a signal/slot connection back to the main thread would be more robust.
        self.append_message("User", self.history[-2]['content'], self.user_color)
        self.append_message("Assistant", assistant_message, self.assistant_color)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = ChatWindow()
    window.show()
    sys.exit(app.exec())

Author
Owner

@View-my-Git-Lab-krafi commented on GitHub (May 12, 2024):

https://gitlab.com/krafi/ollamachat
https://gitlab.com/krafi/git-backed-diary

Actually I am learning this because I have this diary application: you can encrypt photos, record or add audio, add notes, then compress and encrypt them and push them to git. What if I could RAG an AI model? It might come to know everything about you: your diary assistant. Cheers, open source.

Author
Owner

@View-my-Git-Lab-krafi commented on GitHub (May 12, 2024):

can you give me doc link, where i can learn properly

I have an example in one of my repos. here is the link.

I see you use history. Are you sure this is the correct way to do this? Is there no built-in feature for that in the ollama Python library?
It's like storing the chat in a list (self.history = []); is there no better way?

Author
Owner

@zanderlewis commented on GitHub (May 12, 2024):

can you give me doc link, where i can learn properly

I have an example in one of my repos. here is the link.

i see you use , history , are you sure this is correct way to do this , there is no built in feature to that ollama python library ?

its like storeing a chat in a list self.history = [] , there is not better way ?

This is the correct way, for there is no feature for this built into Ollama or the Ollama Python library.
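For completeness, the pattern the thread converges on can be sketched as a plain list that is resent each turn, with a simple cap so the context does not grow without bound (the cap size is an arbitrary choice here, and the `ollama.chat` call is a sketch assuming the library is installed):

```python
def trimmed(history, max_turns=20):
    """Keep only the most recent turns so the context stays bounded."""
    return history[-max_turns:]


def chat_turn(history, user_input, model="llama3"):
    """One round trip using the ollama Python library (pip install ollama)."""
    import ollama  # assumed available; not exercised in this sketch

    history.append({"role": "user", "content": user_input})
    response = ollama.chat(model=model, messages=trimmed(history))
    reply = response["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
```

Trimming loses old turns, so the model "forgets" anything beyond the window; for genuinely long-term memory a retrieval approach (as mentioned later in the thread) is the usual next step.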

Author
Owner

@pdevine commented on GitHub (May 14, 2024):

I'm going to go ahead and close the issue, but you guys should feel free to keep commenting.

Reference: github-starred/ollama#28490