mirror of
https://github.com/open-webui/open-webui.git
synced 2026-05-06 19:08:59 -05:00
[GH-ISSUE #9848] Chat Completions API endpoint with Knowledge Collection leads to inconsistent results #15671
Originally created by @sblbl on GitHub (Feb 12, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/9848
Using chat completion with a reference to a knowledge collection via the REST API results in the model inventing its sources.
Installation Method
pip install open-webui
Environment
Open WebUI Version: v0.5.10
Ollama (if applicable): v0.5.7
Operating System: macOS Sequoia
Browser: Brave
Expected Behavior:
I should get an answer that cites the documents in the knowledge collection, as happens in the GUI.
Actual Behavior:
Open WebUI, called from Python through the API, returns low-quality answers with no real sources.
Description
Bug Summary:
When the chat completions endpoint is called with a knowledge collection attached, the model fabricates source titles instead of citing the documents actually in the collection.
Reproduction Details
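The issue does not include the exact request used. As a point of reference, a minimal sketch of how a knowledge collection is typically attached to a `/api/chat/completions` request follows; the base URL, API key, and collection ID are placeholders, and the exact payload shape should be checked against the Open WebUI version in use.

```python
import json
import urllib.request

# Assumptions: a local Open WebUI instance, a valid API key, and a
# placeholder knowledge collection ID -- replace with real values.
BASE_URL = "http://localhost:8080"
API_KEY = "sk-REPLACE_ME"
COLLECTION_ID = "your-collection-id"

def build_rag_request(question: str) -> urllib.request.Request:
    """Build a chat-completions request that attaches a knowledge collection."""
    payload = {
        "model": "llama3.2:3b",
        "messages": [{"role": "user", "content": question}],
        # The "files" field references the knowledge collection;
        # "type": "collection" asks the server to run RAG over it.
        "files": [{"type": "collection", "id": COLLECTION_ID}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/api/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_rag_request("What are the ethical concerns around AI?")
# urllib.request.urlopen(req) would send it; omitted here, as it needs a live server.
```

With a request like this, the symptom described above is that the `message.content` in the response cites invented titles rather than documents from the collection.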
Logs and Screenshots
python log:
INFO:utils:RAG response: {'id': 'llama3.2:3b-ed2507fb-f824-46e1-8ca7-dfe573c9a3af', 'created': 1739355112, 'model': 'llama3.2:3b', 'choices': [{'index': 0, 'logprobs': None, 'finish_reason': 'stop', 'message': {'content': 'The topic of AI raises several ethical concerns. Some of the key issues include:\n\n* Bias and fairness: AI systems can perpetuate existing biases if they are trained on biased data or designed with a particular worldview (Source: "AI Explainability: A Survey" [1])\n* Job displacement: As AI becomes more capable, it may displace certain jobs, particularly those that involve repetitive tasks (Source: "The Future of Work and the Role of Artificial Intelligence")\n* Privacy: AI systems often rely on vast amounts of personal data to function effectively, raising concerns about data protection and individual privacy (Source: "AI and Data Protection: A Survey")\n* Accountability: As AI makes decisions autonomously, there is a growing need for accountability mechanisms to ensure that these decisions are fair and unbiased.\n* Existential risks: Some experts worry that advanced AI could pose an existential risk to humanity if it becomes capable of causing significant harm (Source: "The Ethics of Artificial Intelligence")\n\nThese issues highlight the importance of ongoing research into AI ethics, as well as the need for careful consideration and regulation of these technologies.\n\nClarification is needed on which specific aspect of AI you would like to know more about.', 'role': 'assistant'}}], 'object': 'chat.completion', 'usage': {'response_token/s': 54.32, 'prompt_token/s': 678.88, 'total_duration': 5381510625, 'load_duration': 22859000, 'prompt_eval_count': 389, 'prompt_eval_duration': 573000000, 'eval_count': 244, 'eval_duration': 4492000000, 'approximate_total': '0h0m5s'}}
Additional Information
I tried with mistral:latest and llama3.2:3b, with and without a system prompt.
@rgaricano commented on GitHub (Feb 12, 2025):
Citation is a tool; don't expect (right now) the format through the API to be the same as in the GUI.
For citations in the API response, you may be able to improve your prompt, as in the default RAG template:
e9d6ada25c/backend/open_webui/config.py (L1555-L1582)
@austin3410 commented on GitHub (Feb 12, 2025):
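A sketch of that suggestion: tighten the system prompt so the model cites only the retrieved context. The wording below is illustrative, loosely modeled on the idea behind Open WebUI's default RAG template rather than copied from it.

```python
# Illustrative citation-enforcing prompt; not Open WebUI's actual template text.
CITATION_SYSTEM_PROMPT = (
    "Answer using ONLY the provided context. "
    "Cite every claim with an inline [id] marker matching the source id "
    "of the context chunk it came from. "
    "If the context does not contain the answer, say you don't know; "
    "never invent sources or titles."
)

def with_citation_prompt(user_question: str) -> list:
    """Prepend the citation-enforcing system prompt to a chat message list."""
    return [
        {"role": "system", "content": CITATION_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = with_citation_prompt("What are the ethical concerns around AI?")
```

This at least discourages fabricated titles like those in the log above, though it cannot guarantee the GUI's citation format over the API.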
I'm running into the same thing. Since we're going through the Open WebUI API rather than the proxied Ollama API, I expected to get some sort of source information back, or at least some way to retrieve that information through the API.
Is there maybe a possible pipeline or tool we could use for this?
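Pending an answer, one defensive option: whether the completions response carries source/citation metadata varies by Open WebUI version, so probe for the fields rather than assuming them. The key names below (`sources`, `citations`) are guesses at what some versions emit, not a documented contract.

```python
def extract_sources(response: dict) -> list:
    """Return whatever source metadata the response exposes, if any."""
    # "sources" and "citations" are hypothetical key names; an empty
    # list means the response carried no recognizable source metadata.
    for key in ("sources", "citations"):
        if key in response and response[key]:
            return response[key]
    return []

# The log above exposes no source field at all, so this yields an empty list:
sample = {"id": "llama3.2:3b-...", "choices": [], "object": "chat.completion"}
print(extract_sources(sample))  # -> []
```

An empty result on a current response would confirm that, as of this version, retrieving citations requires a different mechanism (e.g. a pipeline), which is exactly the open question here.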