LM Studio link #311
Originally created by @fred-gb on GitHub (Feb 17, 2024).
Hello, 👋🏻
Description
Bug Summary:
It's not a bug; it's a misunderstanding about configuration.
I don't understand how to make open-webui work with the API base URL, as described in README.md.
Steps to Reproduce:
Run the HTTP server from LM Studio
And in the terminal:
I tried many URLs; nothing works.
In LM Studio HTTP Server console:
Expected Behavior:
Use LM Studio as the API base, because LM Studio can use GPU acceleration on my MacBook M1. If you have a solution with Ollama, I want to know!
Actual Behavior:
When I go to my localhost page, I cannot find any models, or I get a standard error:
Ollama Version: Not Detected
Environment
Reproduction Details
Confirmation:
Logs and Screenshots
Browser Console Logs:
LM Studio
Docker Container Logs:
Installation Method
Docker
Thanks!
@justinh-rahb commented on GitHub (Feb 17, 2024):
If LM Studio is running on a different system, that'll be why it isn't working: it only listens on 127.0.0.1 (localhost).
@fred-gb commented on GitHub (Feb 17, 2024):
Thanks,
LM Studio on same system.
127.0.0.1 or localhost doesn't work because of Docker network isolation. That's why I used my LAN IP.
@justinh-rahb commented on GitHub (Feb 17, 2024):
On mobile, I didn't see your full issue report; there are a couple of problems:
You're trying to use an OpenAI-compatible API where an Ollama API is expected. This won't work. I'm not sure why you're involving LM Studio at all; Ollama can do GPU acceleration as well.
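The mismatch is easy to see by probing the two servers directly. Ollama's native API answers under /api paths (e.g. /api/tags), while LM Studio exposes an OpenAI-compatible API under /v1 paths (e.g. /v1/models). A minimal sketch, assuming both run locally on their default ports (11434 for Ollama, 1234 for LM Studio):

```shell
# Ollama's native API: list locally installed models
curl http://localhost:11434/api/tags

# LM Studio's OpenAI-compatible API: list loaded models
curl http://localhost:1234/v1/models

# The Ollama-style path does not exist on the OpenAI-compatible server,
# so WebUI's Ollama API calls against LM Studio fail:
curl http://localhost:1234/api/tags
```

Pointing OLLAMA_API_BASE_URL at an OpenAI-style endpoint fails for exactly this reason: WebUI requests paths the server doesn't serve.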
@fred-gb commented on GitHub (Feb 17, 2024):
Thanks @justinh-rahb
For GPU acceleration, I read this: https://github.com/ollama/ollama/issues/1986
I found on LM Studio:

It's

Metal! It's impressive on my simple MacBook M1 16GB! I hope Ollama and open-webui can integrate Metal!
Thanks
@justinh-rahb commented on GitHub (Feb 17, 2024):
Yes, but Ollama does integrate Metal already. I get identical performance on my own 2020 M1 MacBook Pro (Touch Bar), and even better on my 2021 14" M1 Pro MacBook Pro.
2020 M1 MBP 16GB:

2021 M1 Pro MBP 16GB:

and for funsies, 2023 M2 Max Mac Studio 96GB:

These performance numbers would not be possible without Metal GPU support in Ollama. You can ditch LM Studio if you're running WebUI; just follow the README.md instructions for setup. You'll want to install Ollama with the macOS app from their website, and set up WebUI with a docker run command that has your OLLAMA_API_BASE_URL=http://host.docker.internal:11434/api environment variable set.
@tjbck commented on GitHub (Feb 17, 2024):
Our WebUI still has an Ollama dependency at the moment, so your installation command should've been
Let us know if this resolves your issue!
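The Ollama-backed setup described above can be sketched as a single docker run invocation. This is a sketch based on the README of that era; the image tag, ports, and volume name here are assumptions, so check the current README before copying:

```shell
# Run Open WebUI against an Ollama instance on the Docker host.
# --add-host maps host.docker.internal to the host gateway so the
# container can reach the host's Ollama listening on port 11434.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_API_BASE_URL=http://host.docker.internal:11434/api \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```

The key detail for the macOS + Docker case in this thread is host.docker.internal: from inside the container, localhost refers to the container itself, not the Mac, which is why pointing at 127.0.0.1 never worked.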