[GH-ISSUE #15183] Add Clarvia AEO score badge — agent compatibility score for Ollama #56233

Closed
opened 2026-04-29 10:27:35 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @digitamaz on GitHub (Mar 31, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15183

Hey Ollama team 👋

I built [Clarvia](https://clarvia.art) — a platform that scores AI tools on how well they work with autonomous AI agents (AEO: Agent Enablement Optimization). We've indexed 27,894+ tools, including Ollama.

Ollama has an excellent OpenAI-compatible API and is widely used in agent pipelines (LangChain, LlamaIndex, AutoGen, etc.). **[View Ollama's Clarvia profile](https://clarvia.art)**
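To illustrate that compatibility, the same chat-completions payload an agent framework sends to OpenAI can be pointed at a local Ollama server. This is a sketch, not Clarvia's scoring code; the default base URL `http://localhost:11434/v1` and the model name `llama3` are assumptions about a typical local setup:

```python
import json

# Ollama's OpenAI-compatible endpoint (assumed default local address).
BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, user_message: str, stream: bool = False) -> dict:
    """Build the JSON body for POST {BASE_URL}/chat/completions.

    The payload shape is identical to what OpenAI-targeting agent
    frameworks emit; only the base URL changes when pointing at Ollama.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": stream,  # streaming quality is one of the scored criteria
    }

payload = json.dumps(build_chat_request("llama3", "Hello!"))
```

Because the request shape is unchanged, frameworks like LangChain or AutoGen can swap in Ollama by overriding their OpenAI base URL.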

### What the badge looks like

```markdown
[![AEO Score](https://clarvia.art/api/badge/ollama)](https://clarvia.art/profile/ollama)
```

The AEO score (0–100) evaluates: API schema completeness, OpenAI compatibility depth, streaming support quality, tool/function calling support, structured output, and error handling patterns.
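As a concrete example of one criterion, tool/function calling in the OpenAI-compatible format looks like the request below. This is an illustrative sketch only; the `get_weather` tool and model name are made-up assumptions, not part of Clarvia's evaluation suite:

```python
import json

# OpenAI-style request carrying a "tools" declaration, the format
# agent frameworks use for function calling against a compatible API.
request = {
    "model": "llama3",  # assumed locally pulled model
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical example tool
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}
encoded = json.dumps(request)
```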

### Why it matters for Ollama users

Developers choosing a local LLM runtime for their agent stack look for signals that it is production-ready for agent use. Given Ollama's growing use in agentic workflows, an AEO badge in the README directly helps your community.

Thanks for making local LLM inference so accessible — Ollama is foundational infrastructure for self-hosted agent stacks! 🙏


Reference: github-starred/ollama#56233