[PR #15571] launch: fetch recommended models from server endpoint #46461

Open
opened 2026-04-25 01:53:06 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/15571
Author: @BruceMacD
Created: 4/14/2026
Status: 🔄 Open

Base: main ← Head: brucemacd/launch-fetch-reccomended


📝 Commits (4)

  • bf7d15b launch: fetch recommended models from server endpoint
  • 4c92f25 launch: fetch recommended models from ollama.com
  • cc178cd remove test variable
  • cfee09b fix test

📊 Changes

5 files changed (+308 additions, -49 deletions)

View changed files

📝 cmd/launch/integrations_test.go (+40 -25)
📝 cmd/launch/launch.go (+6 -1)
➕ cmd/launch/launch_models_test.go (+144 -0)
📝 cmd/launch/models.go (+117 -22)
📝 server/routes.go (+1 -1)

📄 Description

Summary

  • Fetch launch recommendations directly from the ollama.com experimental launch models endpoint during the launch flow, instead of adding a local Ollama server endpoint/cache.
  • Use a short timeout and silently fall back to built-in recommendations when the remote endpoint is slow, unavailable, or returns an invalid response.
  • Keep the built-in fallback list as the full cloud + local recommendation set, then filter cloud models when cloud is disabled.
  • Decode remote recommendation metadata in cmd/launch and update cloud model context/output limits using stripped cloud model names.
  • Remove the local /api/x/launch-models route, background refresh, cache state, and public API client/types that were no longer needed.
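
The fetch-with-timeout-and-silent-fallback behavior described above can be sketched roughly as follows. The endpoint URL, type names, field names, and the 2-second timeout are all illustrative assumptions, not the actual implementation in `cmd/launch/models.go`:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// launchModel mirrors the minimal metadata assumed to come back from the
// remote endpoint; the real wire format may differ.
type launchModel struct {
	Name string `json:"name"`
}

// Built-in fallback covering both cloud and local recommendations
// (names here are placeholders).
var fallbackModels = []launchModel{{Name: "llama3.2"}, {Name: "qwen3-coder-cloud"}}

// fetchRecommended tries the remote endpoint with a short timeout and
// silently falls back to the built-in list on any error, non-200 status,
// or invalid/empty response body.
func fetchRecommended(url string) []launchModel {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return fallbackModels
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return fallbackModels
	}
	defer resp.Body.Close()

	var models []launchModel
	if resp.StatusCode != http.StatusOK ||
		json.NewDecoder(resp.Body).Decode(&models) != nil ||
		len(models) == 0 {
		return fallbackModels
	}
	return models
}

func main() {
	// With an unreachable endpoint the built-in list comes back unchanged.
	models := fetchRecommended("http://127.0.0.1:1/unreachable")
	fmt.Println(models[0].Name)
}
```

The key property is that every failure path returns the same fallback slice, so the launch flow never surfaces a network error to the user.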

Companion PR: https://github.com/ollama/ollama.com/pull/3114

```mermaid
flowchart TD
    A["User opens ollama launch"] --> B["Check local cloud status"]
    B --> C{"Cloud disabled?"}

    C -->|yes| D["Use built-in fallback recommendations"]
    D --> E["Filter out cloud models"]

    C -->|no / unknown| F["Fetch recommendations from ollama.com"]
    F --> G{"Fetch succeeds quickly?"}

    G -->|yes| H["Use remote ordered recommendations"]
    G -->|no| I["Use built-in fallback recommendations"]

    H --> J["Merge with installed models"]
    I --> J
    E --> J

    J --> K["Render launch picker"]
```
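
The "merge with installed models" step in the flowchart can be sketched as below. The `pickerEntry` type and function names are hypothetical; the real merge in `cmd/launch/models.go` also handles sorting and cloud-model placement:

```go
package main

import "fmt"

// pickerEntry is a hypothetical row in the launch picker.
type pickerEntry struct {
	Name      string
	Installed bool
}

// mergeModels keeps the recommendation order, flags entries the user has
// already pulled, and appends installed models that were not recommended.
func mergeModels(recommended, installed []string) []pickerEntry {
	have := make(map[string]bool, len(installed))
	for _, m := range installed {
		have[m] = true
	}
	entries := make([]pickerEntry, 0, len(recommended)+len(installed))
	recommendedSet := make(map[string]bool, len(recommended))
	for _, r := range recommended {
		entries = append(entries, pickerEntry{Name: r, Installed: have[r]})
		recommendedSet[r] = true
	}
	for _, m := range installed {
		if !recommendedSet[m] {
			entries = append(entries, pickerEntry{Name: m, Installed: true})
		}
	}
	return entries
}

func main() {
	for _, e := range mergeModels(
		[]string{"qwen3", "llama3.2"},
		[]string{"llama3.2", "mistral"},
	) {
		fmt.Println(e.Name, e.Installed)
	}
}
```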

Behavior

Cloud enabled or unknown

  • ollama launch tries to fetch recommendations from ollama.com.
  • If the request succeeds quickly, launch uses the remote ordered list.
  • If the request fails, times out, or returns invalid data, launch silently falls back to the built-in list.

Cloud disabled

  • ollama launch skips the remote recommendation fetch and uses the built-in fallback list, then filters out cloud models.
    The picker only shows local recommendations.
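
A minimal sketch of the cloud-disabled filtering step. Identifying cloud entries by a `-cloud` name suffix is an assumption made for this sketch; the real check may rely on explicit model metadata instead:

```go
package main

import (
	"fmt"
	"strings"
)

// filterCloud drops cloud entries from the fallback recommendation list,
// leaving only models that can run locally.
func filterCloud(models []string) []string {
	local := make([]string, 0, len(models))
	for _, m := range models {
		if !strings.HasSuffix(m, "-cloud") {
			local = append(local, m)
		}
	}
	return local
}

func main() {
	all := []string{"qwen3-coder-cloud", "llama3.2", "gpt-oss-cloud", "gemma3"}
	fmt.Println(filterCloud(all))
}
```

Because the built-in fallback list is kept as the full cloud + local set, this single filter is the only cloud-disabled special case the launch flow needs.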

Test plan

  • GOCACHE=/tmp/go-build-ollama go test ./cmd/launch -run 'Test(FetchRecommendedModels|LoadSelectableModelsCloudDisabledSkipsRemoteRecommendations)'
  • GOCACHE=/tmp/go-build-ollama go test ./cmd/launch -run 'Test(FetchRecommendedModels|BuildModelList_NoExistingModels|BuildModelList_OnlyLocalModels_CloudRecsAtBottom|BuildModelList_BothCloudAndLocal_RegularSort)'
  • GOCACHE=/tmp/go-build-ollama go test ./api -run '^$'
  • GOCACHE=/tmp/go-build-ollama go test ./server -run '^$'
  • Manual: point launch recommendations at a local ollama.com checkout and verify ollama launch shows the local endpoint's ordered recommendations.
  • Manual: stop local ollama.com and verify ollama launch silently falls back to built-in recommendations.
  • Manual: verify cloud-disabled launch shows only local recommendations.

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-25 01:53:06 -05:00

Reference: github-starred/ollama#46461