[GH-ISSUE #14253] issue: Custom models are not selectable when serving with VLLM and using pip to install open webui. #55858

Closed
opened 2026-05-05 18:10:37 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @nickeisenberg on GitHub (May 23, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/14253

![Image](https://github.com/user-attachments/assets/eb67912d-e1dd-4a88-b94b-310c890dc6fb)
![Image](https://github.com/user-attachments/assets/207df2e0-1a62-4d4c-99d8-892677e14f16)
![Image](https://github.com/user-attachments/assets/18c595ab-1955-4374-b9cb-48d25321cc44)

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Pip Install

Open WebUI Version

v0.6.10

Ollama Version (if applicable)

N/A

Operating System

Ubuntu 24.04 (WSL)

Browser (if applicable)

136.0.7103.114

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have listed steps to reproduce the bug in detail.

Expected Behavior

I am serving CodeLlama-13b-Instruct-hf with vLLM and using Open WebUI to connect. This works fine and I am able to interact with the model. However, if I add knowledge and create a custom model with CodeLlama-13b-Instruct-hf as the base, attaching the knowledge to it, I am unable to select this custom model. If I open the webui.db SQLite file, I can see that the custom model is in there, but I am still not able to select and use it from within the UI.
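
The database row can be inspected directly; this is a minimal sketch, assuming the default SQLite backend and a `model` table (the table and column names here are assumptions and may differ between Open WebUI versions):

```
# Hypothetical check: list custom model entries in Open WebUI's database.
# Adjust the path to match your pip install's site-packages directory.
sqlite3 /path/to/site-packages/open_webui/data/webui.db \
  "SELECT id, name, base_model_id FROM model;"
```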

Actual Behavior

The custom model does not appear in the model selector, so I am unable to use it.

Steps to Reproduce

  1. Download Code Llama with the Hugging Face CLI:

```
huggingface-cli download codellama/CodeLlama-13b-Instruct-hf \
    --local-dir /path/to/save/the/model/to/CodeLlama-13b-Instruct-hf \
    --local-dir-use-symlinks False
```

  2. Serve the model with vLLM:

```
MODEL="/path/to/save/the/model/to/CodeLlama-13b-Instruct-hf"

PYTHON_EXEC="/path/to/python/bin/with/vllm/installed/python3"

$PYTHON_EXEC -m vllm.entrypoints.openai.api_server \
  --model "${MODEL}" \
  --served-model-name "CodeLlama-13b-Instruct-hf" \
  --port 8000 \
  --host 0.0.0.0 \
  --gpu-memory-utilization 0.85
```

  3. Launch Open WebUI:

```
open-webui serve --host 0.0.0.0 --port 8501
```

  4. Go to user Settings > Connections in Open WebUI and enter the following (a quick sanity check for this connection is sketched after this list):

```
API Base URL: http://localhost:8000/v1
API Key: <place any non-empty string here>
```

  5. At this point you can use Code Llama just fine.

  6. Go to Workspace > Knowledge and add the Markdown files.

  7. Go to Workspace > Models, create a new model with Code Llama as the base model, and attach the knowledge.

  8. Save the model.

  9. The model now appears in the .../lib/python3.11/site-packages/open_webui/data/webui.db file but cannot be selected from the UI.
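
As a sanity check for steps 2 and 4, the vLLM OpenAI-compatible endpoint can be queried directly; a minimal sketch, assuming the server from step 2 is reachable on port 8000 (when vLLM is started without `--api-key`, the Authorization header is effectively ignored, so any placeholder token works):

```
# Confirm the backend is up and exposes the served model name.
curl -s http://localhost:8000/v1/models \
  -H "Authorization: Bearer dummy-key" | python3 -m json.tool

# The response should list "CodeLlama-13b-Instruct-hf" under "data".
```

If the base model shows up here but the custom model still cannot be selected, the problem is on the Open WebUI side rather than the vLLM side.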

Logs & Screenshots

N/A

Additional Information

No response

GiteaMirror added the bug label 2026-05-05 18:10:37 -05:00

@tjbck commented on GitHub (May 23, 2025):

Use system level connections instead of direct connections.


@nickeisenberg commented on GitHub (May 23, 2025):

Thanks for the reply. What do you mean by system-level connections? I turned off direct connections in the admin panel, but I do not see a system connections option.


@nickeisenberg commented on GitHub (May 23, 2025):

I got it working. In the admin panel, under the OpenAI API connections, I had to add http://localhost:8000/v1
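
For anyone hitting the same wall: the same admin-level OpenAI connection can also be provided at launch time via environment variables; a minimal sketch, assuming the `OPENAI_API_BASE_URL` and `OPENAI_API_KEY` variables supported by Open WebUI:

```
# Register the vLLM endpoint as a system-level OpenAI connection,
# rather than a per-user direct connection, before starting the server.
export OPENAI_API_BASE_URL="http://localhost:8000/v1"
export OPENAI_API_KEY="dummy-key"   # any non-empty string works for vLLM here
open-webui serve --host 0.0.0.0 --port 8501
```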


Reference: github-starred/open-webui#55858