[GH-ISSUE #1203] Generating context from aborted request #47126

Closed
opened 2026-04-28 03:20:07 -05:00 by GiteaMirror · 6 comments

Originally created by @FairyTail2000 on GitHub (Nov 20, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1203

Originally assigned to: @BruceMacD on GitHub.

For my own frontend, I noticed it would be useful to have an endpoint where I can generate context from (optionally) previous context, the user's typed prompt, and the model's answer up to the point it was interrupted.

This could create an experience similar to OpenAI's ChatGPT.
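
For illustration, a request to such an endpoint might look something like the sketch below. The endpoint name and request fields are hypothetical, not part of the Ollama API:

```python
import requests

# Hypothetical endpoint and fields, sketched only to illustrate the request;
# /api/generate_context does not exist in the Ollama API.
resp = requests.post("http://localhost:11434/api/generate_context", json={
    "model": "mistral",
    "context": [1, 2, 3],             # optional: context from an earlier response
    "prompt": "Tell me about Paris",  # the prompt the user typed
    "partial_response": "Paris is",   # the model's answer before it was interrupted
})
new_context = resp.json().get("context")  # context to pass to the next request
```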

GiteaMirror added the feature request label 2026-04-28 03:20:07 -05:00

@BruceMacD commented on GitHub (Nov 20, 2023):

Thanks for the feature request. This functionality will be possible once #991 gets in; #991 returns incremental message content, so you won't have to generate a new context.
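
As a rough, untested sketch (in the same spirit as the examples later in this thread): with streaming enabled, `/api/chat` returns the assistant's message incrementally, so an aborted request still leaves the client with usable partial content that can simply be appended to the history:

```python
import json
import requests

def stream_chat(messages, model="mistral"):
    """Stream a chat response; keep whatever partial content arrived
    even if the request is aborted (e.g. by Ctrl-C)."""
    partial = ""
    try:
        with requests.post(
            "http://localhost:11434/api/chat",
            json={"model": model, "messages": messages, "stream": True},
            stream=True,
        ) as resp:
            # Each line is a JSON object carrying an incremental
            # piece of the assistant's message.
            for line in resp.iter_lines():
                if not line:
                    continue
                chunk = json.loads(line)
                partial += chunk.get("message", {}).get("content", "")
                if chunk.get("done"):
                    break
    except KeyboardInterrupt:
        pass  # aborted mid-generation; keep the partial answer
    # The (possibly partial) answer goes straight back into the history,
    # so no separate "generate context" step is needed.
    messages.append({"role": "assistant", "content": partial})
    return messages
```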


@FairyTail2000 commented on GitHub (Dec 5, 2023):

I would like to keep it open since the original pull request was reverted in commits 00d06619a1 and 1d5fa5d944, @BruceMacD.

Just to keep track of it. It's not required, just personal preference.


@BruceMacD commented on GitHub (Dec 5, 2023):

Thanks for the heads up @FairyTail2000; there was a flip-flop, but the change is back in now, so it's still on track for the next release.


@mdhuzaifapatel commented on GitHub (Nov 20, 2024):

Can you please tell me (if possible, with a code example) how to use the context (the list of numbers) that we get from the current response in the next response?

Every time I hit the Ollama API it generates generic answers, unrelated to the conversation history; I think it's not using the context from previous responses.
Please help me out.
Thanks


@BruceMacD commented on GitHub (Nov 21, 2024):

Hi @mdhuzaifapatel, the context returned from the `/generate` endpoint is deprecated and may not be supported in the future; you should use the `/chat` endpoint for conversation history.

Note that I didn't test these examples; they are for reference only.

Chat example:

```python
import requests
import json

def chat_with_ollama(messages, model="mistral"):
    """
    Send a chat request to Ollama's chat endpoint.
    
    Args:
        messages (list): List of message dictionaries with 'role' and 'content'
        model (str): Name of the Ollama model to use
    
    Returns:
        str: The assistant's response
    """
    url = "http://localhost:11434/api/chat"
    
    payload = {
        "model": model,
        "messages": messages,
        "stream": False
    }
    
    response = requests.post(url, json=payload)
    return response.json()

def main():
    # Initialize conversation history
    conversation = []
    
    # First message
    user_message = "What is the capital of France?"
    conversation.append({"role": "user", "content": user_message})
    
    print("User:", user_message)
    response = chat_with_ollama(conversation)
    assistant_message = response['message']['content']
    print("Assistant:", assistant_message)
    conversation.append({"role": "assistant", "content": assistant_message})
    print("---")
    
    # Second message
    user_message = "What other important cities are in that country?"
    conversation.append({"role": "user", "content": user_message})
    
    print("User:", user_message)
    response = chat_with_ollama(conversation)
    assistant_message = response['message']['content']
    print("Assistant:", assistant_message)
    conversation.append({"role": "assistant", "content": assistant_message})
    print("---")
    
    # Third message
    user_message = "Which of those cities has the largest population?"
    conversation.append({"role": "user", "content": user_message})
    
    print("User:", user_message)
    response = chat_with_ollama(conversation)
    assistant_message = response['message']['content']
    print("Assistant:", assistant_message)
    
# Example of using the API with a more structured conversation handler;
# defined at module level so it can be called after main() below
def have_conversation():
    messages = []
    while True:
        user_input = input("\nYou (or 'quit' to end): ")
        if user_input.lower() == 'quit':
            break

        # Add user message to history
        messages.append({"role": "user", "content": user_input})

        # Get response from Ollama
        response = chat_with_ollama(messages)
        assistant_message = response['message']['content']

        # Add assistant response to history
        messages.append({"role": "assistant", "content": assistant_message})

        print("Assistant:", assistant_message)

if __name__ == "__main__":
    print("Running example conversation...")
    main()

    print("\nStarting interactive chat (type 'quit' to end)...")
    have_conversation()
```

However, since you asked, here is what using the context with the `/generate` endpoint looks like (the `context` field returned by one response is passed back with the next request to preserve the conversation state):

```python
import requests
import json

def generate_with_context(prompt, context=None):
    """
    Generate a response from Ollama with optional context from previous exchanges.
    
    Args:
        prompt (str): The current prompt to send
        context (list): Previous context from the conversation
    
    Returns:
        tuple: (response_text, new_context)
    """
    url = "http://localhost:11434/api/generate"
    
    # Prepare the request payload
    payload = {
        "model": "mistral",  # or any other model you have pulled
        "prompt": prompt,
        "stream": False
    }
    
    # Add context if provided
    if context is not None:
        payload["context"] = context
    
    # Make the API request
    response = requests.post(url, json=payload)
    response_data = response.json()
    
    # Extract the generated response and context
    response_text = response_data.get("response", "")
    new_context = response_data.get("context", None)
    
    return response_text, new_context

# Example usage showing a multi-turn conversation
def main():
    # First message - no context yet
    prompt1 = "What is the capital of France?"
    response1, context = generate_with_context(prompt1)
    print("User:", prompt1)
    print("Assistant:", response1)
    print("---")
    
    # Second message - using context from first exchange
    prompt2 = "What other important cities are in that country?"
    response2, context = generate_with_context(prompt2, context)
    print("User:", prompt2)
    print("Assistant:", response2)
    print("---")
    
    # Third message - using updated context
    prompt3 = "Which of those cities has the largest population?"
    response3, context = generate_with_context(prompt3, context)
    print("User:", prompt3)
    print("Assistant:", response3)

if __name__ == "__main__":
    main()
```

@mdhuzaifapatel commented on GitHub (Nov 22, 2024):

Thank you Bruce.
