[GH-ISSUE #6777] Unable to specify context when using generate from the ollama API #30008

Closed
opened 2026-04-25 04:20:34 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @ma3oun on GitHub (Nov 7, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/6777

Bug Report

Installation Method

Docker

Environment

  • Open WebUI Version: v0.3.35

  • Ollama (if applicable): v0.4.0

  • Operating System: Ubuntu 24.04

  • Browser (if applicable): Brave Browser

Confirmation:

  • [x] I have read and followed all the instructions provided in the README.md.
  • [x] I am on the latest version of both Open WebUI and Ollama.
  • [ ] I have included the browser console logs.
  • [x] I have included the Docker container logs.
  • [x] I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

Calling the /generate POST endpoint with a context should generate an answer that takes that context into account.

Actual Behavior:

Calling the /generate POST endpoint fails with a "Bad Request" error.

Description

Bug Summary:
Calling the /generate POST endpoint fails when the user includes "context" in the POST body. The context is the output of a previous call to /generate.

Reproduction Details

Steps to Reproduce:
Here's sample Python code to reproduce the bug.

```python
import requests
import json


def post_and_get_openwebui_responses(url_base, model, prompt, api_key, context=None):
    """
    Posts a JSON payload to 'http://<url_base>/ollama/api/generate' and
    returns the context from the response.

    Args:
        url_base (str): Host and port of the Open WebUI instance.
        model (str): The model to use for generation.
        prompt (str): The prompt to send to the model.
        api_key (str): Your Open WebUI API key. Replace with your own value.
        context: The context returned by a previous call, if any.

    Returns:
        The "context" field of the response, or the input context if the
        request fails.
    """
    url = f"http://{url_base}/ollama/api/generate"

    # Define the JSON payload
    if not context:
        payload = {"model": model, "prompt": prompt, "stream": False}
    else:
        payload = {
            "model": model,
            "prompt": prompt,
            "context": str(context),
            "stream": False,
        }

    # Convert the payload to JSON format and encode in UTF-8
    payload_json = json.dumps(payload).encode("utf-8")

    try:
        # Post request with the JSON payload and API key
        response_post = requests.post(
            url, headers={"Authorization": f"Bearer {api_key}"}, data=payload_json
        )

        if response_post.status_code == 200:
            print("POST successful.")
            content = json.loads(response_post.content.decode("utf-8"))
            response = content["response"]
            context = content["context"]
            print(response)
        else:
            print("Error during POST operation.")
    except requests.exceptions.RequestException as e:
        print(f"An error occurred: {e}")

    return context


# Execute the function to post and get responses
if __name__ == "__main__":
    url_base = "localhost:8080"
    model = "llama3.1:8b"
    api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    context = None
    try:
        while True:
            prompt = input("Ask the model (type 'exit' to quit): ")
            if prompt.lower() == "exit":
                break
            context = post_and_get_openwebui_responses(
                url_base, model, prompt, api_key, context
            )
    except KeyboardInterrupt:
        print("\nInterrupted. Exiting...")
```
Logs and Screenshots

The server's response after the second prompt is:

'{"detail":"Ollama: 400, message='Bad Request', url='http://host.docker.internal:11434/api/generate'"}'
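The generic "Error during POST operation." branch in the repro script hides the server detail shown above. When reproducing, printing the response body surfaces it directly. A small illustrative helper (`describe_failure` is a hypothetical name, not part of Open WebUI):

```python
import json


def describe_failure(status_code, body_bytes):
    """Return a readable error line from a failed Open WebUI response body."""
    try:
        detail = json.loads(body_bytes.decode("utf-8")).get("detail", "<no detail>")
    except (ValueError, UnicodeDecodeError):
        # Fall back to a truncated raw body if it isn't valid JSON.
        detail = body_bytes[:200].decode("utf-8", errors="replace")
    return f"POST failed ({status_code}): {detail}"


# Simulated body matching the logged response above.
body = b'{"detail":"Ollama: 400, message=\'Bad Request\', url=\'http://host.docker.internal:11434/api/generate\'"}'
print(describe_failure(400, body))
```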

Additional information

If context is not provided as a string, the error is a schema validation error instead.
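For context: the native Ollama /api/generate endpoint documents "context" as an array of integer token IDs returned by a previous response, whereas the workaround above stringifies it to get past Open WebUI's schema validation, which is presumably what Ollama then rejects with a 400. A minimal sketch of the payload shape Ollama itself accepts (`build_generate_payload` is a hypothetical helper; the token IDs are illustrative, and no server is contacted):

```python
import json


def build_generate_payload(model, prompt, context=None, stream=False):
    """Build a payload matching Ollama's native /api/generate schema.

    Per the Ollama API docs, "context" is a list of integer token IDs
    returned by a previous /api/generate response -- not a string.
    """
    payload = {"model": model, "prompt": prompt, "stream": stream}
    if context is not None:
        payload["context"] = context  # keep it as a list of ints
    return payload


# Simulated context from a previous response (token IDs are illustrative).
previous_context = [128006, 882, 128007, 271]
payload = build_generate_payload("llama3.1:8b", "Follow-up question", previous_context)

# The JSON round-trip keeps "context" typed as a list of integers.
decoded = json.loads(json.dumps(payload))
print(type(decoded["context"]).__name__)  # → list
```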

Reference: github-starred/open-webui#30008