[PR #15805] Add ollama launch qwen support for Qwen Code CLI #62010

Open
opened 2026-04-29 16:58:11 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/15805
Author: @B-A-M-N
Created: 4/25/2026
Status: 🔄 Open

Base: main ← Head: launch-qwen


📝 Commits (7)

  • d2e6b07 Fix Qwen launch: correct repo URL, use model id for provider id, remove OLLAMA_THINK
  • 9546281 Add launch flags and LaunchConfigurator interface for Qwen integration
  • 72fd455 Add tests for LaunchConfigurator interface and launch flags
  • ced38b9 refactor(qwen): remove interface changes, make qwen.go self-contained
  • 768196f test(qwen): add integration, findPath, models, and paths tests
  • 9769ca7 fix(qwen): make Ollama launch configuration deterministic
  • c56d4c4 fix: remove (experimental) from Qwen help text

📊 Changes

4 files changed (+652 additions, -4 deletions)


📝 cmd/launch/launch.go (+2 -1)
➕ cmd/launch/qwen.go (+273 -0)
➕ cmd/launch/qwen_test.go (+362 -0)
📝 cmd/launch/registry.go (+15 -3)

📄 Description

Summary

Adds ollama launch qwen support by configuring Qwen Code CLI for Ollama's OpenAI-compatible endpoint.

Changes

This integration:

  • Uses OPENAI_MODEL and OPENAI_BASE_URL to route Qwen Code CLI through Ollama
  • Applies additive, idempotent config updates (no destructive mutation)
  • Preserves existing auth configuration
  • Supports env-only mode (no config writes)
  • Migrates legacy single-provider configs to array form

Testing

  • TestQwenEdit — config write with correct provider shape (id, envKey, baseUrl)
  • TestQwenEditPreservesAuth — existing auth config is not overwritten
  • TestQwenEditEnvMode — env-mode skips config writes entirely
  • TestQwenLegacyMigration — legacy single-object openai format migrated to array
  • TestLaunchCmdFlagsParsing — --think, --config-scope, --provider-mode, --experimental flags registered with correct defaults
  • TestLaunchConfigurator — LaunchConfigurator interface correctly passes IntegrationLaunchRequest

All launch package tests pass.

Dependencies

Relies on upstream Qwen Code CLI model/env precedence behavior (QwenLM/qwen-code#3567, merged 2026-04-24).

Scope

Narrow, additive change:

  • No engine modifications
  • No breaking defaults
  • Purely additive integration

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-29 16:58:11 -05:00
Reference: github-starred/ollama#62010