Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-07 03:18:23 -05:00)
[GH-ISSUE #2909] Local-AI: Empty model selections (when they shouldn't be) #28590
Originally created by @senpro-ingwersenk on GitHub (Jun 7, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/2909
Bug Report
Description
Bug Summary:
I configured my LocalAI instance as the OpenAI API endpoint; when I use curl to verify, I see the models just fine:
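The curl output itself was not captured in this mirror. As a rough sketch of what the check verifies (the model names below are placeholders, not the reporter's actual models), LocalAI's OpenAI-compatible /v1/models endpoint returns a list in the standard OpenAI shape, and a client lists the "id" of each entry:

```python
import json

# Hypothetical /v1/models response in the OpenAI-compatible shape that
# LocalAI serves; the model ids here are illustrative placeholders.
sample_response = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "gpt-4", "object": "model"},
    {"id": "stablediffusion", "object": "model"}
  ]
}
""")

# An OpenAI-compatible client reads the "data" array and shows each "id";
# a non-empty list here is what the curl check confirms.
model_ids = [m["id"] for m in sample_response["data"]]
print(model_ids)
```

If this list is non-empty over curl but the UI shows no models, the endpoint itself is serving correctly and the problem lies between Open WebUI and the proxy.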
However, the same URL (minus /models, of course) doesn't result in any models being loaded at all, yet the connection test still reports success.
Steps to Reproduce:
You may wish to borrow my Caddyfile for proper reproduction:
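The original Caddyfile did not survive the mirror. A minimal sketch of the kind of reverse proxy described (the hostname and upstream port are assumptions, not taken from the report):

```
localai.example.internal {
	# Forward everything to the LocalAI container; port 8080 is
	# LocalAI's default and is assumed here.
	reverse_proxy localai:8080
}
```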
Launch the Docker containers accordingly. Here is a snippet from my docker-compose.yml:
Adjust this and the Caddyfile accordingly, then run.
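The compose snippet itself was not preserved in this mirror. A minimal sketch of the setup described (service names, ports, volumes, and the proxy URL are assumptions, not the reporter's actual file; only the image tag comes from the report):

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:ollama   # image named in the report
    environment:
      # Point the OpenAI-compatible connection at LocalAI via the Caddy
      # proxy; this URL is an assumed placeholder.
      - OPENAI_API_BASE_URL=https://localai.example.internal/v1
    ports:
      - "3000:8080"

  localai:
    image: localai/localai:latest   # assumed tag
    ports:
      - "8080:8080"

  caddy:
    image: caddy:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
```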

When you open the UI, there will be no available models to choose from:
Expected Behavior:
I expected to be able to find and see models to chat with or run RAG.
Actual Behavior:
An empty list while the API clearly reports them all.
Environment
Open WebUI Version: ghcr.io/open-webui/open-webui:ollama (chosen so I can potentially utilize both LocalAI and Ollama models situationally)
Ollama (if applicable): As embedded in the container.
Operating System: Debian Bookworm, amd64
Browser (if applicable): Firefox
Reproduction Details
Confirmation:
Logs and Screenshots
Browser Console Logs:

Docker Container Logs:
This happens during the connection test:
Screenshots (if applicable):
See above.
Installation Method
Docker Compose; the service configurations are above.
Additional Information
I am running this as a temporary setup on an old IBM server with 2x Intel Xeon CPUs, 160 GB RAM, and an RTX 3060. Because of the way the network is built and restricted, I would like to use Caddy to simplify access to the service. Given the abundance of RAM, it'd be neat to take advantage of it at some point, which is why I use both LocalAI and Ollama: let one manage the GPU and the other the CPU. That is the plan, at least.
Note
If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!
@senpro-ingwersenk commented on GitHub (Jun 7, 2024):
I also tried with the non-ollama embedded model; same result, no models show up in the list.