[GH-ISSUE #15449] Ollama CLI should ask which model to load in case user has not chosen yet. #9874

Open
opened 2026-04-12 22:44:13 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @lipstick-turtleback on GitHub (Apr 9, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15449

```
➜  ~ ollama list
NAME                      ID              SIZE      MODIFIED
x/flux2-klein:4b          50a0c0ab15ac    5.7 GB    4 hours ago
gemma4:31b-cloud          c382fbfbc73b    -         14 hours ago
gemma4:latest             c6eb396dbd59    9.6 GB    44 hours ago
gemma4:e2b                7fbdbf8f5e45    7.2 GB    6 days ago
gemma3:27b-cloud          9e1580299085    -         5 weeks ago
glm-5:cloud               c313cd065935    -         6 weeks ago
lfm2.5-thinking:latest    95bd9d45385f    731 MB    2 months ago
➜  ~ ollama run
Error: requires at least 1 arg(s), only received 0
```

Why can't ollama just show the list of available models and ask which one to load?
This would save a lot of time for CLI users :)
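Until the CLI supports this, here is a rough workaround sketch (not part of Ollama; `fzf` is an assumed external dependency and the helper names are made up):

```shell
#!/usr/bin/env bash
# Hypothetical interactive picker built on top of `ollama list` and fzf.

# Print just the NAME column from `ollama list` output on stdin (skip header).
list_model_names() {
  awk 'NR > 1 && $1 != "" { print $1 }'
}

# Pick a model interactively with fzf, then hand it to `ollama run`.
pick_and_run() {
  local model
  model="$(ollama list | list_model_names | fzf --prompt='model> ')" || return 1
  exec ollama run "$model"
}
```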

GiteaMirror added the feature request label 2026-04-12 22:44:13 -05:00

@rick-github commented on GitHub (Apr 9, 2026):

#5380


@somera commented on GitHub (Apr 9, 2026):

#5380

Here is an extended version of the script; all embedding models are excluded.

```
#!/usr/bin/env bash
set -euo pipefail

OLLAMA_HOST="${OLLAMA_HOST:-http://localhost:11434}"

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    printf 'Error: required command not found: %s\n' "$1" >&2
    exit 127
  }
}

require_cmd curl
require_cmd jq
require_cmd ollama
require_cmd sort

api_get() {
  curl -fsS "${OLLAMA_HOST%/}$1"
}

api_post() {
  local endpoint="$1"
  local data="$2"
  curl -fsS \
    -H 'Content-Type: application/json' \
    -d "$data" \
    "${OLLAMA_HOST%/}$endpoint"
}

is_selectable_model() {
  local model="$1"
  local show_json

  if ! show_json="$(api_post '/api/show' "$(jq -nc --arg model "$model" '{model: $model}')" 2>/dev/null)"; then
    return 1
  fi

  jq -e '
    (.capabilities // []) | index("completion")
  ' >/dev/null <<<"$show_json"
}

# Enable colors only for terminals and only if NO_COLOR is not set.
if [[ -t 1 && -z "${NO_COLOR:-}" ]]; then
  CLOUD_COLOR=$'\033[1;36m'   # bold cyan
  TAG_COLOR=$'\033[2;36m'     # dim cyan
  RESET_COLOR=$'\033[0m'
else
  CLOUD_COLOR=''
  TAG_COLOR=''
  RESET_COLOR=''
fi

tmp_json="$(mktemp)"
trap 'rm -f "$tmp_json"' EXIT

if ! api_get '/api/tags' >"$tmp_json"; then
  printf 'Error: unable to query the Ollama API at %s.\n' "$OLLAMA_HOST" >&2
  exit 1
fi

declare -a all_models=()
declare -a models=()
declare -A model_is_cloud=()

while IFS=$'\t' read -r model_name is_cloud; do
  [[ -n "$model_name" ]] || continue
  all_models+=("$model_name")
  model_is_cloud["$model_name"]="$is_cloud"
done < <(
  jq -r '
    .models[]
    | [
        .name,
        (if ((.remote_host // "") != "" or (.remote_model // "") != "") then "1" else "0" end)
      ]
    | @tsv
  ' "$tmp_json"
)

if ((${#all_models[@]} == 0)); then
  printf 'Error: no models found.\n' >&2
  exit 1
fi

for model in "${all_models[@]}"; do
  if is_selectable_model "$model"; then
    models+=("$model")
  fi
done

if ((${#models[@]} == 0)); then
  printf 'Error: no selectable chat/completion models found.\n' >&2
  exit 1
fi

mapfile -t models < <(
  printf '%s\n' "${models[@]}" | sort -f
)

printf 'Available models:\n'
for i in "${!models[@]}"; do
  model="${models[$i]}"
  if [[ "${model_is_cloud[$model]:-0}" == "1" ]]; then
    printf '[%2d] %s%s%s %s[cloud]%s\n' \
      "$i" \
      "$CLOUD_COLOR" "$model" "$RESET_COLOR" \
      "$TAG_COLOR" "$RESET_COLOR"
  else
    printf '[%2d] %s\n' "$i" "$model"
  fi
done

while true; do
  read -r -p 'Select model [q=quit]: ' selection || {
    printf '\n'
    exit 0
  }

  case "$selection" in
    q|Q|quit|exit)
      exit 0
      ;;
  esac

  if [[ "$selection" =~ ^[0-9]+$ ]] && (( selection < ${#models[@]} )); then
    break
  fi

  printf 'Invalid selection. Please enter a number between 0 and %d or q to quit.\n' "$(( ${#models[@]} - 1 ))" >&2
done

exec ollama run "${models[$selection]}"
```
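The selectability check boils down to a small jq filter over the `/api/show` response. Pulled out on its own (the helper name is mine), it can be tried against canned JSON without a running server:

```shell
#!/usr/bin/env bash
# Standalone version of the script's capability test: succeed only when the
# /api/show response on stdin lists "completion" among its capabilities.
# jq -e exits non-zero when index() yields null (capability absent).
has_completion() {
  jq -e '(.capabilities // []) | index("completion")' >/dev/null
}
```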

@rick-github commented on GitHub (Apr 9, 2026):

Embedding models can be run from the CLI and can be useful for generating one-off embeddings.


@somera commented on GitHub (Apr 9, 2026):

I know, but in that case you need to pass the text:

```
ollama run mxbai-embed-large:latest "test"
```

If you run it without text, you get an error:

```
$ ollama run mxbai-embed-large:latest
Error: embedding models require input text. Usage: ollama run mxbai-embed-large:latest "your text here"
```
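For a one-off embedding you can also skip `ollama run` and POST to the API directly. A sketch (the model name is just an example; the request shape follows Ollama's documented `/api/embed` endpoint):

```shell
#!/usr/bin/env bash
# Build the JSON payload for Ollama's /api/embed endpoint with jq.
payload="$(jq -nc --arg model 'mxbai-embed-large:latest' --arg input 'test' \
  '{model: $model, input: $input}')"
printf '%s\n' "$payload"

# With a local server running, send it like this:
# curl -s http://localhost:11434/api/embed -d "$payload"
```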

@rick-github commented on GitHub (Apr 9, 2026):

You are correct; I thought I had used it interactively before, but I must be thinking of a different tool.


@somera commented on GitHub (Apr 9, 2026):

I updated my version above. The output is now:

[screenshot of the updated script's output]

@somera commented on GitHub (Apr 9, 2026):

@rick-github thx for the idea!


@drifkin commented on GitHub (Apr 9, 2026):

Good suggestion, we'll think about improving that. In the meantime, if you run `ollama` with no arguments, you can choose `Chat with a model` and press the right arrow to see a model list.

Reference: github-starred/ollama#9874