[GH-ISSUE #10097] Add an easy way to list all models and their capabilities #68676

Open
opened 2026-05-04 14:48:49 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @BruceMacD on GitHub (Apr 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10097

Originally assigned to: @BruceMacD on GitHub.

Currently, retrieving model capabilities requires multiple API calls to the `show` endpoint or parsing documentation. An endpoint that provides a comprehensive list of all available models along with their respective capabilities would significantly improve the developer experience.

This could be implemented by enhancing the existing `/api/tags` endpoint to include capability information.

This would enable developers to:

  • Programmatically filter models based on required capabilities
  • Display comprehensive model selection interfaces
  • Make informed decisions about which models to use for specific tasks
  • More easily keep application logic in sync with available model functionality
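The status quo the issue describes can be sketched as code: a client must hit `/api/tags` once and then `/api/show` once per model, an N+1 round-trip pattern. This is a minimal illustrative sketch, not an official client; the field names follow the Ollama API docs, and the `capabilities` list in the show response only exists in recent Ollama versions, so it is treated as optional.

```python
# Sketch of the current workaround: N+1 HTTP round-trips to learn
# capabilities for every installed model.
import json
import urllib.request

OLLAMA = "http://localhost:11434"  # default local Ollama address

def fetch_json(url, payload=None):
    """POST `payload` as JSON if given, else GET; return the parsed body."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def capabilities_by_model(list_tags=None, show_model=None):
    """Map model name -> capabilities list, one show call per model.

    `list_tags` and `show_model` are injectable for testing; by default
    they talk to a local Ollama server.
    """
    list_tags = list_tags or (lambda: fetch_json(f"{OLLAMA}/api/tags"))
    show_model = show_model or (
        lambda name: fetch_json(f"{OLLAMA}/api/show", {"model": name}))
    result = {}
    for model in list_tags().get("models", []):
        name = model["name"]
        # `capabilities` is absent on older servers; default to empty.
        result[name] = show_model(name).get("capabilities", [])
    return result
```

With the enhancement this issue requests, the per-model `show` calls would disappear and the same map would come from a single `/api/tags` response.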
GiteaMirror added the feature request label 2026-05-04 14:48:49 -05:00
Author
Owner

@JasonHonKL commented on GitHub (Apr 6, 2025):

@BruceMacD I also want this feature. I noticed you have self-assigned this; may I ask if there is anything I could help with?


@werty1st commented on GitHub (Jul 16, 2025):

in the meantime:

```bash
# copyright grok.com
alias ollist='f() {
  list_output=$(ollama list)

  # Print header
  echo "NAME                                           ID              SIZE      MODIFIED       CAPABILITIES"
  echo "------------------------------------------------------------------------------------------------"

  # Process each model from ollama list
  while IFS= read -r line; do
    # Skip the header line and any blank lines
    if [[ -z "$line" || "$line" =~ ^NAME.* ]]; then
      continue
    fi

    # Extract fields from ollama list (assuming fixed-width columns)
    name=$(echo "$line" | awk "{print \$1}")
    id=$(echo "$line" | awk "{print \$2}")
    size=$(echo "$line" | awk "{print \$3}")
    modified=$(echo "$line" | awk "{print \$4, \$5, \$6}")

    # Get capabilities from ollama show, stopping at blank line or new section
    capabilities=$(ollama show "$name" | awk "/Capabilities/{flag=1; next} flag && /^[[:space:]]*[a-zA-Z]/ && !/^[[:space:]]*$/{gsub(/^[[:space:]]+|[[:space:]]+$/, \"\"); print} flag && (/^[[:space:]]*$/ || /^[[:space:]]*[A-Z][a-zA-Z]*$/){flag=0}" | paste -sd, -)

    # Print combined output
    printf "%-46s %-15s %-9s %-15s %-s\n" "$name" "$id" "$size" "$modified" "$capabilities"
  done <<< "$list_output"
}; f'
```


@SuperPat45 commented on GitHub (Aug 22, 2025):

The Mistral API platform returns a `capabilities` object for each model in its `/models` API response:

```json
"capabilities": {
    "completion_chat": true,
    "completion_fim": false,
    "function_calling": true,
    "fine_tuning": false,
    "vision": false,
    "classification": false
},
```

I would love to have something similar in the Ollama `/api/tags` API:

```json
"capabilities": {
    "completion_chat": true,
    "function_calling": true,
    "vision": false,
    "thinking": true, // eventually with supported reasoning-effort values, or whether the model supports disabling thinking
    "structured_outputs": true
}
```

I could then ensure that my application adapts more precisely to the capabilities of the selected model.
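To make the "adapting to capabilities" point concrete, here is a minimal sketch of how a client could gate UI features on such an object. The key names follow the schema proposed in the comment above, which is hypothetical — Ollama does not return this shape today.

```python
# Gate UI features on a (hypothetical) capabilities object of the shape
# proposed in this comment. Unknown or false keys simply disable features.
def enabled_features(capabilities):
    """Return the set of UI features to expose for a model."""
    feature_flags = {
        "completion_chat": "chat",
        "function_calling": "tools",
        "vision": "image_upload",
        "thinking": "reasoning_toggle",
        "structured_outputs": "json_mode",
    }
    return {ui for key, ui in feature_flags.items() if capabilities.get(key)}
```

Because the lookup degrades gracefully on missing keys, the same client code would keep working as new capability flags are added server-side.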


@gregglind commented on GitHub (Nov 11, 2025):

Are there any updates on this?

Or help needed or blockers?


@lgautier commented on GitHub (Dec 2, 2025):

> Are there any updates on this?
>
> Or help needed or blockers?

My understanding is that changes were briefly merged (https://github.com/ollama/ollama/pull/10174) before being reverted out of performance concerns: an astonishing 100x slowdown was reported. My guess from skimming through https://github.com/ollama/ollama/pull/10822 is that fetching the information required a lot of I/O (reading a lot of model data even though it is not used to report capabilities).

What I am missing is whether #10822 solves the underlying slowdown issue. If it does, maybe #10174 should be "un-reverted"?
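For readers wondering why an I/O-bound capabilities lookup need not slow down every `/api/tags` request, one illustrative mitigation is to cache capabilities per model digest so the expensive metadata read happens once per model. This is purely a sketch of the idea, assuming a digest-keyed cache; it is not how Ollama (or the PRs linked above) actually implements it.

```python
# Illustrative sketch: memoize an expensive per-model metadata read by
# content digest, so repeated listing requests hit the cache.
class CapabilityCache:
    def __init__(self, read_capabilities):
        self._read = read_capabilities  # expensive: parses model metadata
        self._cache = {}                # digest -> capabilities list

    def get(self, digest):
        # A model's digest changes when its content changes, so a hit
        # is always valid and no invalidation logic is needed here.
        if digest not in self._cache:
            self._cache[digest] = self._read(digest)
        return self._cache[digest]
```

Keying on the digest rather than the model name means a re-pulled or modified model naturally misses the cache and gets re-read.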


@gregglind commented on GitHub (Dec 2, 2025):

Thanks for the links!

That is a surprising outcome, but also understandable for a non-optimized metadata API call.


@OnyxynO commented on GitHub (Mar 29, 2026):

We're running into both issues in a local NLP/RAG app that uses Ollama as its LLM backend.

**1. Embedding vs LLM filtering** — We currently filter models by checking if the name contains `"embed"`, which is fragile and breaks with any non-standard naming. A `type` or `capabilities` field in `/api/tags` would fix this cleanly.

**2. Thinking model detection** — We send `think: false` unconditionally to all models as a safe fallback (ignored by non-thinking models), but ideally we'd surface a toggle in the UI only for models that actually support it. There's no reliable way to know that today without calling `/api/show` and parsing the modelfile.

Both needs point to the same solution: expose capabilities (e.g. `embedding`, `thinking`, `vision`) directly in `/api/tags` so clients don't need extra round-trips or string-matching heuristics.
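The filtering described above can be sketched as a capability-first check with the brittle name heuristic only as a fallback. The `capabilities` key on the model entry is the requested, not-yet-existing `/api/tags` field, so this is a sketch of what client code could look like once it ships.

```python
# Prefer an explicit capabilities list when the server provides one;
# fall back to the fragile substring heuristic only when it is absent.
def is_embedding_model(model):
    """`model` is one entry from a (hypothetical) enriched /api/tags list."""
    caps = model.get("capabilities")
    if caps is not None:
        return "embedding" in caps
    # Fallback: brittle substring match on the model name.
    return "embed" in model.get("name", "").lower()
```

Note that an explicit capabilities list wins even when the name looks like an embedding model, which is exactly the breakage the name heuristic causes today.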


Reference: github-starred/ollama#68676