[PR #14139] fix: resolve model visibility issue by adding discovery module #40405

Open
opened 2026-04-23 01:18:21 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/14139
Author: @BurakBebek1
Created: 2/7/2026
Status: 🔄 Open

Base: main ← Head: fix/model-discovery-visibility


📝 Commits (1)

  • 3ff617c fix: resolve model visibility issue by adding discovery module

📊 Changes

1 file changed (+91 additions, -15 deletions)


📝 server/routes.go (+91 -15)

📄 Description

This PR introduces a Model Discovery mechanism to address the visibility issue of newly released or trending models in the library. Currently, users must manually pull or run models via CLI (e.g., qwen3-coder-next) before they appear in GUI-based clients. This implementation automates that process by fetching trending models directly from the Ollama library.

Key Features & Improvements:

  • Automated Discovery: Fetches the top trending models from ollama.com/library and merges them into the model list response.

  • Performance-First Caching: A thread-safe, in-memory caching layer (sync.RWMutex) with a 1-hour expiration avoids redundant network requests.

  • Visual Indicators: Remote/trending models are tagged with a ☁️📥 icon and clearly distinguished by RemoteModel: true and RemoteHost metadata.

  • Smart Matching: Normalizes the :latest tag to prevent duplicate entries for models already present on the local disk.

  • Robust Networking: A 5-second HTTP timeout and structured error logging (slog) keep the server stable when the library is unreachable.
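The caching, timeout, and tag-normalization behaviors above can be sketched as follows. This is a minimal illustration, not the PR's actual code: all identifiers (trendingCache, fetchTrending, normalizeTag) and the library URL query are assumptions, and response parsing is elided.

```go
package main

import (
	"log/slog"
	"net/http"
	"strings"
	"sync"
	"time"
)

const cacheTTL = time.Hour // 1-hour expiration, per the description

// trendingCache is a hypothetical thread-safe, in-memory cache
// guarded by sync.RWMutex, as described above.
type trendingCache struct {
	mu        sync.RWMutex
	models    []string
	fetchedAt time.Time
}

// get returns cached model names while they are fresh, refetching otherwise.
func (c *trendingCache) get() []string {
	c.mu.RLock()
	if c.models != nil && time.Since(c.fetchedAt) < cacheTTL {
		defer c.mu.RUnlock()
		return c.models
	}
	c.mu.RUnlock()

	c.mu.Lock()
	defer c.mu.Unlock()
	// Re-check after acquiring the write lock: another goroutine
	// may have refreshed the cache in the meantime.
	if c.models != nil && time.Since(c.fetchedAt) < cacheTTL {
		return c.models
	}
	models, err := fetchTrending()
	if err != nil {
		slog.Error("trending model fetch failed", "error", err)
		return c.models // serve stale data rather than fail the list call
	}
	c.models, c.fetchedAt = models, time.Now()
	return c.models
}

// fetchTrending fetches the library page with a 5-second HTTP timeout.
// Parsing the response into model names is elided in this sketch.
func fetchTrending() ([]string, error) {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("https://ollama.com/library")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return nil, nil // response parsing elided
}

// normalizeTag makes "model" and "model:latest" compare equal, so a
// remote entry is skipped when the model already exists on local disk.
func normalizeTag(name string) string {
	if !strings.Contains(name, ":") {
		return name + ":latest"
	}
	return name
}

func main() {
	slog.Info("normalized", "name", normalizeTag("qwen3-coder-next"))
}
```

The double-checked locking in get() (RLock for the fast path, re-check under the write lock) is one common way to keep concurrent list requests from all hitting the network when the cache expires.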

Why is this needed?
Users often miss out on the latest model releases because GUI clients only reflect what is already downloaded. This change enhances user experience by making the Ollama library "browsable" directly within the interface, bridging the gap between the CLI and GUI workflows.

Fixes: #14129
Resolves the issue where the latest cloud/trending models (such as glm-4.7 and qwen3-coder-next) are hidden from the user by default.

I noticed @drifkin is assigned to the related issue, so I wanted to share this implementation as a potential solution for review.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-23 01:18:21 -05:00

Reference: github-starred/ollama#40405