mirror of
https://github.com/open-webui/open-webui.git
synced 2026-05-07 03:18:23 -05:00
bug : GGUF model file upload not working #197
Originally created by @hemangjoshi37a on GitHub (Jan 17, 2024).
GGUF model file upload not working
@ihor-sokoliuk commented on GitHub (Jan 17, 2024):
I will add more info there as I am facing it as well.
When I select a GGUF file for upload and click "upload", the UI resets like I did not choose anything, and that's it. There is no error message on the UI.
In the network requests, there is one POST request:
https://ollama/api/v1/utils/upload
With response:
413 Request Entity Too Large
The file I tried to upload is over 40 GB, taken from here:
https://huggingface.co/senseable/MoMo-70B-lora-1.8.6-DPO-gguf/tree/main
There is probably a config option that adjusts the maximum upload file size.
Also, it would be great to have the option to download GGUFs by URL directly into Ollama.
I hope it helps!
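A 413 like the one above is often produced by a reverse proxy sitting in front of Open WebUI rather than by Open WebUI itself. If nginx is the proxy (an assumption; your setup may differ), raising its request-body limit is the usual fix. A minimal sketch with placeholder hostname and port:

```nginx
# Hypothetical nginx site config for an Open WebUI reverse proxy.
# client_max_body_size defaults to 1m, which rejects large GGUF uploads
# with "413 Request Entity Too Large" before they ever reach the backend.
server {
    listen 80;
    server_name openwebui.example.com;     # placeholder hostname

    location / {
        proxy_pass http://127.0.0.1:8080;  # assumed Open WebUI port
        client_max_body_size 50G;          # allow multi-gigabyte model uploads
        proxy_read_timeout 3600s;          # large uploads can take a while
    }
}
```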
@justinh-rahb commented on GitHub (Jan 17, 2024):
@ihor-sokoliuk, GGUF download by URL is already possible, just click the file mode to toggle it to URL mode:
@ihor-sokoliuk commented on GitHub (Jan 17, 2024):
It saved me today! Thank you @justinh-rahb
Then, only the upload feature requires attention.
@tjbck commented on GitHub (Jan 18, 2024):
I'm aware that there is somewhat of an issue with indicating the upload progress, but besides the progress bar, everything else should work as intended (tested with uploading a 2 GB model)!
@hemangjoshi37a commented on GitHub (Jan 19, 2024):
Should I upload only the zip file or all the files in a GGUF model repo?
Here is the Chrome dev console log:
@tjbck commented on GitHub (Jan 19, 2024):
Please provide us with the steps to reproduce, thanks.
@hemangjoshi37a commented on GitHub (Jan 19, 2024):
1 : download the file mpt-7b-instruct-Q4_0.gguf from https://huggingface.co/filipealmeida/mpt-7b-instruct-GGUF/blob/main/mpt-7b-instruct-Q4_0.gguf
2 : try to upload that file using the GGUF file upload
@hemangjoshi37a commented on GitHub (Jan 19, 2024):
@bmabir17 commented on GitHub (Feb 4, 2024):
I am also facing this issue. After selecting the file and pressing the upload button, the UI shows an upload loader at 0%. Then the console shows
My network does not have access to the OpenAI API, and I do not intend to use it. What can be the solution for this?
@mhussaincov94 commented on GitHub (Feb 8, 2024):
I am also facing this issue.
No upload progress is displayed.
I have models locally; is there any other way to import them until a fix is added?
I would be grateful for any help.
Majid
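Until the upload path is fixed, models that are already on disk can be registered with Ollama directly from the command line, bypassing the web upload entirely. A minimal sketch, assuming the `ollama` CLI is available; the model name and GGUF path are placeholders:

```shell
# Write a minimal Modelfile pointing at a local GGUF (path is a placeholder).
cat > Modelfile <<'EOF'
FROM ./mpt-7b-instruct-Q4_0.gguf
EOF

# Register it with Ollama under a name of your choosing.
# Guarded so the sketch degrades gracefully where ollama isn't installed.
if command -v ollama >/dev/null 2>&1; then
  ollama create my-local-model -f Modelfile
else
  echo "ollama CLI not found; run this where Ollama is installed"
fi
```

In a Docker setup this would be run inside the Ollama container (e.g. via `docker exec`), with the GGUF file copied or mounted into it first.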
@KevinKrueger commented on GitHub (Mar 12, 2024):
Where are the uploaded models located?
Can I put them in the directory for the first time?
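For reference, Ollama stores models as content-addressed blobs plus manifests, not under their original file names. The directories below are the usual defaults (assumptions; they vary by install method), and this sketch just probes for whichever one exists:

```shell
# Probe the common Ollama model directories. Which one exists depends on
# whether Ollama runs as the macOS app, a Linux service, or in Docker.
for dir in "$HOME/.ollama/models" \
           /usr/share/ollama/.ollama/models \
           /root/.ollama/models; do
  if [ -d "$dir" ]; then
    echo "found model store: $dir"
    ls "$dir/blobs" 2>/dev/null | head -n 5   # blobs are named sha256-<digest>
  fi
done
echo "probe complete"
```

Note that dropping a raw .gguf file into these directories is not enough on its own, since Ollama also needs a manifest; `ollama create` with a Modelfile is the supported way to register a local file.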
@zer0ish commented on GitHub (Mar 21, 2024):
I'm using Unraid.

Using the downloaded GGUF file doesn't seem to work.
It's stuck at 0%. Looking at my Ollama share, there isn't any new file being uploaded anywhere.
Edit: I let it run all night; when I looked at it, the process seemed to have finished, but there was nothing new in my usable models.
But using the link from huggingface somewhat works?
Link used: https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/resolve/main/llava-v1.6-mistral-7b.Q8_0.gguf?download=true
The progress bar moves up to 100%, I see the sha256 file in my docker share for ollama, it has the proper size for the model I'm trying to get, but it just stays on a rotating circle indicating it's doing something. But it's been doing something for an hour.
You can see the circle progress like it's trying to do something below the "URL Mode" text.

All the regular models from ollama.com work fine for me, including the Llava model, which was broken until the Ollama 0.1.29 update.
I know it's experimental, so I'm not too concerned about it yet.
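One way to narrow down a "sha256 file is there but the model never appears" situation is to check whether the blob Ollama wrote actually matches the file it was asked to pull. A diagnostic sketch; the file name and blob directory are assumptions for a Docker-style setup:

```shell
MODEL_FILE="llava-v1.6-mistral-7b.Q8_0.gguf"          # placeholder local copy
BLOB_DIR="${BLOB_DIR:-/root/.ollama/models/blobs}"    # assumed Docker default

if [ -f "$MODEL_FILE" ]; then
  # Compare the file's digest against the blob names Ollama uses.
  DIGEST=$(sha256sum "$MODEL_FILE" | awk '{print $1}')
  if ls "$BLOB_DIR"/sha256*"$DIGEST" >/dev/null 2>&1; then
    echo "blob matches: $DIGEST"
  else
    echo "no matching blob for $DIGEST in $BLOB_DIR"
  fi
else
  echo "no local copy of $MODEL_FILE to compare; skipping"
fi
```

If the digest matches but the model never becomes usable, the transfer itself succeeded and the failure is in the later manifest/registration step.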
@imadreamerboy commented on GitHub (Mar 24, 2024):
I have the same error; I get this output:

Ollama log:
[GIN] 2024/03/25 - 00:18:24 | 404 | 0s | 127.0.0.1 | POST "/blobs/sha256:0068f25d1fc37cb25aa6be85064432eeeb1a0754d97139c0d2eb3529fc8fc32b"
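The 404 above is on a blob-creation request, and Ollama's blob endpoint can be probed directly to separate Open WebUI from the Ollama server. A sketch, assuming a default local install on port 11434; the digest is the one from the log:

```shell
DIGEST="sha256:0068f25d1fc37cb25aa6be85064432eeeb1a0754d97139c0d2eb3529fc8fc32b"
OLLAMA="http://localhost:11434"   # assumed default host/port

# HEAD /api/blobs/:digest returns 200 if the blob exists, 404 if not.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' -I "$OLLAMA/api/blobs/$DIGEST" || true)
echo "HEAD /api/blobs -> $STATUS"  # 000 means Ollama itself was unreachable
```

A 404 from this HEAD simply means the blob does not exist yet; a 404 from the POST, as in the log, may suggest the server did not recognize the blob route being used.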
@syberphunk commented on GitHub (May 3, 2024):
I'm experiencing this problem also, except it says the connection was aborted when attempting to upload. The GGUF file appears in the upload folder but doesn't get any further.
@abhishek-ch commented on GitHub (May 10, 2024):
When I try https://huggingface.co/TheBloke/medicine-LLM-GGUF/blob/main/medicine-llm.Q8_0.gguf?download=true , it keeps downloading forever without any progress beyond 0.
@hemangjoshi37a commented on GitHub (May 12, 2024):
Now not even my Ollama is connecting with Open WebUI. I did so much work, but it did not work.
@Andreaux commented on GitHub (May 23, 2024):
I'm having the exact same issue. I am absolutely unable to add any GGUF model file through either upload or URL download. Nothing works :(
@mysterium-coniunctionis commented on GitHub (May 28, 2024):
Same errors as others here - unable to complete the GGUF upload. I am running two instances of Open WebUI + Ollama:
When attempting to "Upload a GGUF model" via my M1 MacBook Pro Ollama (official macOS app) + Docker Desktop installation of Open WebUI: GGUF files upload to 100% and then just hang forever. It used to be that there was a slight delay at this point, but then the modelfile would be updated and the upload would complete. Now it's just stuck.
I also have Ollama (docker container with a dedicated NVIDIA GPU) + Open WebUI (Cloudron app) installed on my QNAP NAS and that one just stays stuck at 0% when it used to work typically much faster than the instance on my laptop.
@sivertheisholt commented on GitHub (May 30, 2024):
Same error here; tried a few versions without any luck.
Edit: Removing all files + a clean install fixed the problem for me. Not sure why, because it's the exact same setup/settings.
@syberphunk commented on GitHub (Jun 3, 2024):
Unfortunately I still have this problem

@SchneiderSam commented on GitHub (Jun 4, 2024):
@justinh-rahb I got this error:

@tjbck commented on GitHub (Jun 4, 2024):
GGUF file upload for Ollama will remain experimental (it will not work in certain cases); file sizes exceeding 4 GB are known to have issues with the upload process. I'll be moving this to a discussion. However, if anyone's interested in fixing the issue, feel free to make a PR!