[GH-ISSUE #16358] issue: models from external server not showing in model list #17872

Closed
opened 2026-04-19 23:46:03 -05:00 by GiteaMirror · 4 comments

Originally created by @craigers521 on GitHub (Aug 7, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/16358

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Pip Install

Open WebUI Version

v0.6.18

Ollama Version (if applicable)

No response

Operating System

macOS

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
      • Start with the initial platform/version/OS and dependencies used,
      • Specify exact install/launch/configure commands,
      • List URLs visited, user input (incl. example values/emails/passwords if needed),
      • Describe all options and toggles enabled or changed,
      • Include any files or environmental changes,
      • Identify the expected and actual result at each stage,
      • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

Adding a verified external OpenAI-compatible server that returns models should allow me to select those models from the dropdown menu.

Actual Behavior

The model is not available to select from the model list.

Steps to Reproduce

Added an external OpenAI-compatible server and was able to verify it. The verification response JSON is included in the Logs & Screenshots section below.

Since the connection verified correctly, I would expect the llama3.3 model to appear in the model selection menu, but it is not there.
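
A quick way to rule out the server side is to query the connection's OpenAI-compatible model list directly, since that is the endpoint Open WebUI polls to populate the dropdown. A minimal sketch, assuming hypothetical values (`BASE_URL` and `API_KEY` are placeholders, not values from this report):

```python
# Query the external server's OpenAI-compatible model list directly.
# BASE_URL and API_KEY are placeholders for the connection under test.
import json
import urllib.request

BASE_URL = "http://my-external-server:8080/v1"  # placeholder
API_KEY = "sk-placeholder"                      # placeholder

req = urllib.request.Request(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# An OpenAI-compatible server returns {"object": "list", "data": [...]};
# each entry's "id" is what shows up in Open WebUI's model selector.
for model in body.get("data", []):
    print(model.get("id"))
```

If this prints the expected model IDs, the listing problem is more likely on the Open WebUI side (caching, filtering, access controls) than on the external server.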

Logs & Screenshots

{
    "items": [
        {
            "source": "huggingface",
            "huggingface_repo_id": "bartowski/Llama-3.3-70B-Instruct-GGUF",
            "huggingface_filename": "*-Q4_K_M*.gguf",
            "ollama_library_model_name": null,
            "model_scope_model_id": null,
            "model_scope_file_path": null,
            "local_path": null,
            "name": "llama3.3",
            "description": null,
            "meta": {
                "ctx_shift": true,
                "dry_allowed_length": 2,
                "dry_base": 1.75,
                "dry_multiplier": 0.0,
                "dry_penalty_last_n": -1,
                "dry_sequence_breakers": [
                    "\n",
                    ":",
                    "\"",
                    "*"
                ],
                "dynatemp_exponent": 1.0,
                "dynatemp_range": 0.0,
                "frequency_penalty": 0.0,
                "min_p": 0.05000000074505806,
                "mirostat": 0,
                "mirostat_eta": 0.10000000149011612,
                "mirostat_tau": 5.0,
                "n_ctx": 8192,
                "n_ctx_train": 131072,
                "n_embd": 8192,
                "n_params": 70553706560,
                "n_slot": 1,
                "n_slot_ctx": 8192,
                "n_vocab": 128256,
                "presence_penalty": 0.0,
                "prompt_cache": true,
                "repeat_last_n": 64,
                "repeat_penalty": 1.0,
                "seed": -1,
                "size": 42512531712,
                "support_audio": false,
                "support_reasoning": false,
                "support_speculative": false,
                "support_tool_calls": true,
                "support_vision": false,
                "temperature": 0.800000011920929,
                "top_k": 40,
                "top_n_sigma": -1.0,
                "top_p": 0.949999988079071,
                "typical_p": 1.0,
                "vocab_type": 2,
                "xtc_probability": 0.0,
                "xtc_threshold": 0.10000000149011612
            },
            "replicas": 1,
            "ready_replicas": 1,
            "categories": [
                "llm"
            ],
            "embedding_only": false,
            "image_only": false,
            "reranker": false,
            "speech_to_text": false,
            "text_to_speech": false,
            "placement_strategy": "spread",
            "cpu_offloading": true,
            "distributed_inference_across_workers": true,
            "worker_selector": {},
            "gpu_selector": null,
            "backend": "llama-box",
            "backend_version": null,
            "backend_parameters": null,
            "env": null,
            "restart_on_error": true,
            "distributable": true,
            "id": 1,
            "created_at": "2025-08-06T19:48:39Z",
            "updated_at": "2025-08-07T17:55:15Z"
        }
    ],
    "pagination": {
        "page": 1,
        "perPage": 100,
        "total": 1,
        "totalPage": 1
    }
}
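
For comparison: the response above is a paginated `items`/`pagination` listing, which looks like a model-management API payload rather than the OpenAI `GET /v1/models` format. Open WebUI's OpenAI-compatible connections expect a `data` array of model objects, sketched below as a Python literal with illustrative values (not taken from this report):

```python
# Shape Open WebUI expects from an OpenAI-compatible GET {BASE_URL}/models
# call (illustrative values only, not taken from this report).
expected_model_list = {
    "object": "list",
    "data": [
        {
            "id": "llama3.3",       # the ID shown in the model dropdown
            "object": "model",
            "created": 1722988119,  # illustrative timestamp
            "owned_by": "llama-box",
        }
    ],
}
```

If the endpoint Open WebUI queries returns the `items`-style payload instead of this shape, the dropdown could have nothing to display even though the connection itself verifies.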

Additional Information

No response

GiteaMirror added the bug label 2026-04-19 23:46:03 -05:00

@SecureBot commented on GitHub (Aug 7, 2025):

I'm having a very similar issue. Models locally hosted with vLLM are suddenly not showing up. A model will be listed under the Models tab, but won't show in the model config, so it keeps dropping for users.

@craigers521 commented on GitHub (Aug 7, 2025):

Manually specifying the model ID in the connection setup ended up working for me. Rather than having Open WebUI discover all models through the list-models call, I add the specific model tags I want, and now the API sends chat completions and I'm able to select the remote model from the list.
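
In other words, the workaround pins the model ID on the connection instead of relying on auto-discovery; once the ID is set, chat completions go through the standard OpenAI-compatible route. A minimal sanity check, assuming hypothetical placeholder values for the base URL, key, and model ID:

```python
# Send a chat completion to the manually specified model ID through the
# server's OpenAI-compatible endpoint. All values are placeholders.
import json
import urllib.request

BASE_URL = "http://my-external-server:8080/v1"  # placeholder
API_KEY = "sk-placeholder"                      # placeholder

payload = json.dumps({
    "model": "llama3.3",  # the ID added manually in the connection setup
    "messages": [{"role": "user", "content": "ping"}],
}).encode("utf-8")

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=payload,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```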

@Eisaichen commented on GitHub (Aug 8, 2025):

Try to disable Cache Base Model List under Admin Panel > Settings > Connections

@tjbck commented on GitHub (Aug 8, 2025):

We're unable to reproduce, keep us updated!

Reference: github-starred/open-webui#17872