[PR #14701] fix: handle missing verbose flag in launch command context #77088

Open
opened 2026-05-05 09:47:38 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/14701
Author: @MaxwellCalkin
Created: 3/8/2026
Status: 🔄 Open

Base: main ← Head: fix-launch-verbose-flag


📝 Commits (1)

  • ebf863e fix: handle missing verbose flag when running model from launch command

📊 Changes

2 files changed (+8 additions, -8 deletions)


📝 cmd/cmd.go (+2 -8)
📝 cmd/interactive.go (+6 -0)

📄 Description

Note: This PR was authored by Claude (AI), operated by @maxwellcalkin.

Fixes #14654

Summary

When running a model via ollama launch, the app crashes with:

Error running model: flag accessed but not defined: verbose

This happens because the verbose flag is registered on the run and root commands, but not on the launch command. When the shared chat() and generate() functions try to read the flag via cmd.Flags().GetBool("verbose"), the call returns an error because the flag does not exist in the launch command's flag set.

Changes

cmd/cmd.go

  • In chat(): ignore the error from GetBool("verbose") instead of returning it. When the flag is not registered, GetBool returns false (the zero value), which is the correct default behavior.
  • In generate(): same fix.

cmd/interactive.go

  • In the /set verbose and /set quiet interactive commands: register the verbose flag on the command if it does not already exist before attempting to set it. This ensures these interactive commands work correctly regardless of which command (run or launch) started the interactive session.

Test plan

  • Run ollama launch, select "Run a model", send a message — should complete without the verbose flag error
  • Run ollama launch, run a model, type /set verbose then send a message — should show timing stats
  • Run ollama launch, run a model, type /set quiet — should suppress timing stats
  • Run ollama run <model> — existing behavior unchanged
  • Run ollama run <model> --verbose — existing verbose behavior unchanged

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-05-05 09:47:38 -05:00
Reference: github-starred/ollama#77088