[GH-ISSUE #7333] bug: RAG citations not visible 0.4.4 #30235
Originally created by @Pekkari on GitHub (Nov 25, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/7333
Installation Method
Podman/Docker
Environment
Open WebUI Version: v0.4.4
Ollama (if applicable): v0.4.2
Operating System: Fedora 41
Browser (if applicable): Firefox 132.0.1
Confirmation:
Expected Behavior:
When the model completes its answer, the spinning banner that says
Searching Knowledge for "<your comment here>" would be replaced with the documents used in the answer, as in previous versions.
Actual Behavior:
The spinning banner persists in the answer indefinitely
Description
Bug Summary:
The spinning banner that states
Searching Knowledge for "<your comment here>" never gets replaced with the relevant documents after the answer is provided by the LLM.
Reproduction Details
Steps to Reproduce:
Logs and Screenshots
Browser Console Logs:
[Include relevant browser console logs, if applicable]
Docker Container Logs:
[Include relevant Docker container logs, if applicable]
Screenshots/Screen Recordings (if applicable):

@tjbck commented on GitHub (Nov 25, 2024):
Could you confirm this issue persists on the dev branch? I'm unable to reproduce this issue.
@tjbck commented on GitHub (Nov 25, 2024):
@ahyaha Based on your issue description, I highly suspect your issues aren't the same issues described by the OP here.
That being said, you might want to disable the RAG query generation in the admin settings, which, based on the provided description, would most likely resolve the issue for you.
@ahyaha commented on GitHub (Nov 26, 2024):
I disabled it and I got the inline reference in the response.
However, it takes more time to respond, and I still see "Searching knowledge for ..." running even after I get the response. Please solve this issue.
Secondly, what are the benefits of enabling the RAG query generation? I mean, what will I lose if I disable this setting?
@ahyaha commented on GitHub (Nov 26, 2024):
How do I see the inline citations? Please help.
@redpenguin commented on GitHub (Nov 26, 2024):
When I set the model's Stream Chat Response to "off," the issue can be reproduced. Setting it to the default or turning it on works normally.

@Pekkari commented on GitHub (Nov 26, 2024):
@redpenguin I'm afraid I have Stream Chat Response set to default in the model shown before, and I reproduce it daily:

@tjbck I'm afraid it also happens in dev, I just pulled the container and gave it a try, this is the pic:

@tjbck commented on GitHub (Nov 26, 2024):
Could anyone confirm if the issue has been resolved with 0.4.5? I'm still unable to reproduce the issue unfortunately 😢
@tjbck commented on GitHub (Nov 26, 2024):
@Pekkari


Here's my configuration
@hymnsea commented on GitHub (Nov 27, 2024):
I encountered the same problem as well, on 0.4.4.

@Pekkari commented on GitHub (Nov 27, 2024):
Hi @tjbck, yes, I updated my container to the new version and it still happens. I also upgraded the ollama:rocm container, so I have Ollama 0.4.4 right now; however, this seems to be a problem in the UI, so the Ollama version may have little to say here.
@Pekkari commented on GitHub (Nov 27, 2024):
I synced the folder in the knowledge base to recreate the vector database, but that doesn't seem to solve the problem, so I may try destroying the knowledge base and recreating it, which will most likely require recreating the model as well. I'll keep you posted on my progress.
@Pekkari commented on GitHub (Nov 27, 2024):
and yes, I'm currently on 0.4.6, just in case :)
@tjbck commented on GitHub (Nov 27, 2024):
@Pekkari I'd love to take a look at your instance personally. If you're comfortable sharing your instance details, feel free to PM me on Discord!
@Pekkari commented on GitHub (Nov 28, 2024):
I'd love to share it, but I'm afraid Discord is a very unwelcoming platform for such a thing, even if it doesn't require a login for your community. Is there any chance we can do it by some other means?
@moblangeois commented on GitHub (Nov 30, 2024):
I've had the same issue, though my Ollama instance is located on another computer that wasn't turned on when I uploaded files. I had no error messages either, AFAIK. It got fixed when I re-uploaded the documents with the computer turned on. Maybe there should be some indication of whether embeddings exist for the files?
@Pekkari commented on GitHub (Nov 30, 2024):
For me, re-uploading the files, or starting a new conversation after that, doesn't make the citations appear anymore. No idea what went wrong, but it seems to be related to some changes from 0.4.4 on, so picking a container of that version, making a knowledge base and a model with it, and then upgrading to newer versions may be the way to test it.
@tjbck commented on GitHub (Nov 30, 2024):
Discord is the best way for me to actually address the issue, unfortunately.
My educated guess is that the embedding process is somehow failing here, or, if you have custom prompt templates, those could also be the culprit. I would really appreciate it if you could send screenshots of all the document/interface sections from the admin settings.
Below are my configurations:


@Pekkari commented on GitHub (Dec 1, 2024):
On a quick read, the only difference seems to be that I'm using the mixedbread.ai model instead of nomic, but here are my pictures for your perusal:


@Pekkari commented on GitHub (Dec 1, 2024):
Since the RAG template doesn't seem to be fully visible in the pictures, you can have it here; I didn't change it though, it is the default value:
@mbond99 commented on GitHub (Dec 2, 2024):
Hi, I was also experiencing this issue but was finally able to resolve it today!
Under Admin panel -> Settings -> Documents, above the embedding batch size, verify that the URL is correct for your embedding model. We are running Ollama on a separate server, not in a Docker container, but after the update that URL was set to http://host.docker.internal:11434. I updated this value to http://'our server ip':11434 and it fixed all of our issues. I believe if you are just running Ollama on your local machine without a Docker container, you could just use http://localhost:11434. (I also have another instance of Open WebUI running on a different computer where Docker and Ollama are on the same machine, and there the http://host.docker.internal:11434 URL doesn't cause any issues.)
Devs, please correct me if I'm wrong, but I believe the reason this was happening is that the URL value set in the document settings maps to the environment variable RAG_OLLAMA_BASE_URL, which is later used in an HTTP call to access the embedding model. This causes the following error to be thrown: 404 Client Error: Not Found for url: http://host.docker.internal:11434/api/embed, resulting in the spinning wheel of death when searching the knowledge base.
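If you want to verify this yourself, you can call the same endpoint directly from wherever Open WebUI runs. A minimal sketch in Python with the requests library, assuming you have an embedding model pulled in Ollama (the base URL and model name below are placeholders for your own values, not Open WebUI defaults):

```python
import requests

# Placeholders: point these at your own Ollama instance and embedding model.
BASE_URL = "http://localhost:11434"
MODEL = "nomic-embed-text"

# Same endpoint as in the 404 above; a 404 or connection error here
# reproduces the failure Open WebUI hits while searching the knowledge base.
resp = requests.post(
    f"{BASE_URL}/api/embed",
    json={"model": MODEL, "input": "connectivity test"},
    timeout=10,
)
resp.raise_for_status()
print(f"OK: got a {len(resp.json()['embeddings'][0])}-dimensional embedding")
```

If this prints an embedding size, the URL in the document settings should work; if it raises the same 404, the base URL is the problem.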
@Pekkari commented on GitHub (Dec 3, 2024):
In my case host.docker.internal should have been OK, but it turns out that its value (169.254.1.2) is probably breaking it; I don't know why Podman, or pasta, assigned that, and it may not route outside the container itself. The picture is that I have a Podman pod with one container from the ollama:rocm image and another container for Open WebUI, so the open-webui container shows an /etc/hosts like:
Where ollama-rocm is the ollama:rocm container, rag is open-webui, and ollama is the pod. So, in theory, localhost, ollama, and ollama-rocm should all be possible values; however, if I set them up, the citations just disappear. Is there any need to recreate the knowledge base after changing the documents URL, @mbond99? Thanks!
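For anyone debugging the same setup, a quick way to see what each candidate hostname actually resolves to from inside the open-webui container is a small resolution check; a sketch, assuming Python is available inside the container (the hostnames below are the ones from the /etc/hosts above):

```python
import socket

# Candidate hostnames from the pod's /etc/hosts described above.
for name in ("host.docker.internal", "localhost", "ollama", "ollama-rocm"):
    try:
        print(f"{name} -> {socket.gethostbyname(name)}")
    except socket.gaierror:
        print(f"{name} -> does not resolve")
```

A name that resolves to a link-local address like 169.254.1.2 is unlikely to reach the Ollama container, which would explain the behaviour above.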
@Pekkari commented on GitHub (Dec 3, 2024):
Yep, after setting another value for the host (in my case, ollama, the pod name, was the best option) and recreating the knowledge base, I can see the citations again. However, the citations that went wrong remain wrong:

@mbond99 commented on GitHub (Dec 3, 2024):
@Pekkari Hmm, it is strange that your host.docker.internal is a different IP if you're running it locally. I'm not super familiar with Podman though, so it's probably just something weird with that. I'm glad swapping it to ollama (the pod name) worked for you! As for the knowledge base, ours worked fine after changing the URL, and we are able to upload and get correct citations again too. It sounds like you're still having some issues with your citations, but I'm not sure I understand what you mean by the wrong citations remaining wrong. Are you using an old chat from when things were broken? If so, I wonder if maybe it uses a cache.
@Pekkari commented on GitHub (Dec 4, 2024):
Not for all tests, but yes, I have some old chats that I don't want to destroy that were generated while the issue was present. After fixing it, the new chat entries get the citations correctly, but the former entries keep spinning, so I don't expect those to be fixable anyway, but I still want the content.
@mbond99 commented on GitHub (Dec 4, 2024):
Ah, I see now. You don't happen to have a seed set on the model you used, do you? If you do, you should be able to ask the exact same question you originally asked and get the same results (hopefully without the spinning circle). If I understand seeds correctly, I don't think you can set the seed after the original prompt and get the same results, though, unfortunately. But yeah, I agree that the broken chats you want to save probably aren't fixable.
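For reference, this is roughly how a fixed seed behaves against Ollama's API: two calls with the same prompt, the same seed, and zero temperature return the same text. A sketch, assuming Python with requests and a locally pulled model (the model name is illustrative):

```python
import requests

def generate(prompt: str) -> str:
    # A fixed options.seed plus temperature 0 makes sampling deterministic.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",  # illustrative model name
            "prompt": prompt,
            "options": {"seed": 42, "temperature": 0},
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# The same prompt twice should produce identical output with a fixed seed.
print(generate("Why is the sky blue?") == generate("Why is the sky blue?"))
```

Without a seed set at the time of the original prompt, there is no way to recover that exact sampling path afterwards, which is why the old answers can't be regenerated verbatim.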
@Pekkari commented on GitHub (Dec 4, 2024):
No, I don't; it is set to default, so maybe there is some chance of it being recoverable, but that's not something I can do in the short term, since I would need to repeat a big set of questions in order, and so on. Thanks for the hint anyway :)
@flexiondotorg commented on GitHub (Dec 12, 2024):
Installation Method
Open WebUI and Ollama are installed as part of my NixOS configuration in a three-host deployment:
- https://revan.domain.tld
- http://127.0.0.1:8000 (not the Chroma bundled with Open WebUI)
- http://127.0.0.1:9998
- http://phasma:11434
- http://vader:11434
Submitting a prompt from Open WebUI works, with the response being returned from either of the Ollama instances.
Environment
Confirmation:
Expected Behavior:
After creating a Knowledge base of three manuals, submitting a prompt to any LLM that tags the Knowledge base or any individual document for use as additional context should include citations/references to the relevant context in the response.
After creating a Custom model with a Knowledge base attached, submitting a prompt to that custom model should search the attached Knowledge base for additional context and include citations/references in the response.
Actual Behavior:
Submitting a prompt to any LLM that tags a Knowledge base or individual document to provide additional context does not include citations/references to the relevant context in the response.
After submitting a prompt to a custom model with a Knowledge base attached, the "Searching for..." sub-title spins indefinitely, and the response doesn't include any citations/references to the additional context used.
Description
Bug Summary:
Reproduction Details
Steps to Reproduce:
Add Open WebUI and Ollama to a NixOS configuration. When deployed, create a Knowledge base and custom model as described above and submit prompts that use them as additional context.
Logs and Screenshots
Screenshots:
Document embedding settings

Additional Information
While I have outlined a three-host deployment, the behaviour I describe is reproducible when I deploy on a single host with Open WebUI connected to a single Ollama instance on the same host. The Open WebUI document embedding settings use the defaults, and no Tika server is involved.
In comment https://github.com/open-webui/open-webui/issues/7333#issuecomment-2512287381 @mbond99 commented:
I am not using Docker; all services are on the host. However, I did try changing the host references in RAG_OLLAMA_BASE_URL and in the configuration settings UI to localhost, 127.0.0.1, and remote hostnames (that resolve correctly via DNS). Sadly, this didn't change anything for me.