[PR #15571] launch: fetch recommended models from server endpoint #20435

Open
opened 2026-04-16 07:37:47 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/15571
Author: @BruceMacD
Created: 4/14/2026
Status: 🔄 Open

Base: main ← Head: brucemacd/launch-fetch-reccomended


📝 Commits (1)

  • 8ddbd9b launch: fetch recommended models from server endpoint

📊 Changes

7 files changed (+221 additions, -44 deletions)

View changed files

📝 api/client.go (+9 -0)
📝 api/types.go (+14 -0)
📝 cmd/launch/integrations_test.go (+42 -20)
📝 cmd/launch/launch.go (+3 -1)
📝 cmd/launch/launch_test.go (+6 -0)
📝 cmd/launch/models.go (+57 -23)
📝 server/routes.go (+90 -0)

📄 Description

Summary

  • Add /api/x/launch-models server endpoint that fetches recommended models from ollama.com with a 24h cache, merged with built-in local defaults (gemma4, qwen3.5)
  • Client now calls this endpoint instead of using a hardcoded recommended models list, with a client-side fallback for older servers
  • Add LaunchModel/LaunchModelsResponse API types and VRAM field on ModelItem to replace the separate recommendedVRAM map

Test plan

  • [x] All TestBuildModelList_* tests pass (18 tests)
  • [ ] Verify endpoint returns expected models when registry is reachable
  • [ ] Verify fallback to local defaults when registry is unreachable
  • [ ] Verify launch model picker displays correctly with fetched recommendations
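The client-side fallback for older servers mentioned in the summary can be sketched as follows. This is a hedged illustration only: `launchModels`, `fetchFromServer`, and `errNotFound` are hypothetical names, not the PR's real API; the fallback list mirrors the built-in defaults named in the summary.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the error an older server returns when it
// does not implement the /api/x/launch-models endpoint.
var errNotFound = errors.New("404: unknown endpoint")

// launchModels prefers the server's recommendation list but falls back to
// a built-in client-side list when the endpoint is unavailable, so newer
// clients keep working against older servers.
func launchModels(fetchFromServer func() ([]string, error)) []string {
	fallback := []string{"gemma4", "qwen3.5"} // built-in defaults
	models, err := fetchFromServer()
	if err != nil {
		return fallback
	}
	return models
}

func main() {
	// Older server: endpoint missing, client uses its local list.
	fmt.Println(launchModels(func() ([]string, error) { return nil, errNotFound }))
	// Newer server: recommendations come from the endpoint.
	fmt.Println(launchModels(func() ([]string, error) { return []string{"remote"}, nil }))
}
```

Treating any fetch error as "use the fallback" keeps the picker functional whether the server is old, offline, or misconfigured, at the cost of silently hiding transient failures.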

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-16 07:37:47 -05:00

Reference: github-starred/ollama#20435