[PR #2786] added llm-async #2099

Open
opened 2025-11-06 13:29:26 -06:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/vinta/awesome-python/pull/2786
Author: @sonic182
Created: 11/3/2025
Status: 🔄 Open

Base: master ← Head: feature/add_llm_async


📝 Commits (1)

17fd16e — added llm-async

📊 Changes

1 file changed (+1 additions, -0 deletions)


📝 README.md (+1 -0)

📄 Description

What is this Python project?

llm_async is an async-first, lightweight Python library for interacting with modern Large Language Model (LLM) providers such as OpenAI, Google Gemini, Anthropic Claude, and OpenRouter.

It provides a unified async API for chat completions, streaming responses, tool/agent execution, and JSON-schema-validated structured outputs — all built with asyncio and designed for production integration.
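The "unified async API" idea can be sketched with stubbed providers. This is an illustrative pattern, not llm_async's actual API surface; all class and method names below are assumptions for demonstration.

```python
# Sketch of a unified provider interface: every provider exposes the same
# async `chat` call, so callers don't care which backend they talk to.
# StubOpenAI/StubClaude are fakes standing in for real HTTP clients.
import asyncio
from abc import ABC, abstractmethod


class BaseProvider(ABC):
    @abstractmethod
    async def chat(self, messages: list) -> str: ...


class StubOpenAI(BaseProvider):
    async def chat(self, messages):
        await asyncio.sleep(0)  # stand-in for a non-blocking HTTP request
        return f"openai-reply:{messages[-1]['content']}"


class StubClaude(BaseProvider):
    async def chat(self, messages):
        await asyncio.sleep(0)
        return f"claude-reply:{messages[-1]['content']}"


async def main():
    msgs = [{"role": "user", "content": "hello"}]
    # Same call shape regardless of the backing provider:
    for provider in (StubOpenAI(), StubClaude()):
        print(await provider.chat(msgs))


asyncio.run(main())
```

Swapping providers then means swapping one constructor, which is the kind of extensibility the `BaseProvider` bullet below describes.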

Key Features

  • Async-first design — built entirely around asyncio, no blocking I/O
  • Unified provider interface — same API for OpenAI, Gemini, Claude, and OpenRouter
  • Automatic tool execution — define tools once and use them across providers
  • Pub/Sub events — real-time event emission for tool execution (start/complete/error)
  • Structured outputs — enforce JSON Schema validation across supported models
  • Extensible architecture — easily add new providers by inheriting from BaseProvider
  • Streaming support — async iterator interface for live model responses
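The streaming bullet describes an async-iterator pattern, which looks roughly like the following sketch (the token source is faked here; a real client would receive chunks over SSE or chunked HTTP):

```python
# Streaming as an async iterator: the caller consumes chunks with
# `async for` as they arrive, instead of waiting for the full response.
import asyncio
from typing import AsyncIterator


async def stream_completion(prompt: str) -> AsyncIterator[str]:
    # Fake token stream; each chunk would normally arrive over the network.
    for token in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0)  # yield control to the event loop, no blocking
        yield token


async def main() -> str:
    parts = []
    async for chunk in stream_completion("greet me"):
        parts.append(chunk)  # a real UI could render each chunk immediately
    return "".join(parts)


print(asyncio.run(main()))  # Hello, world!
```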

GitHub: https://github.com/sonic182/llm_async


What's the difference between this Python project and similar ones?

  • Unlike synchronous SDKs (e.g. openai, anthropic), llm_async is async-first, not a wrapper.
  • Unlike high-level frameworks (e.g. LangChain, LlamaIndex), it’s minimal and provider-agnostic — focused on clean async primitives rather than orchestration layers.
  • Supports tool calling + structured outputs + streaming under one unified API surface.
  • Designed for developers who want low-level control and high throughput without extra dependencies.
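The "define tools once" claim can be illustrated with a minimal tool registry that dispatches provider-style tool calls; the decorator, registry, and call format here are assumptions for the sketch, not llm_async's real implementation.

```python
# Minimal tool-execution sketch: tools register themselves by name, and a
# provider-style JSON tool call is dispatched to the matching coroutine.
import asyncio
import json

TOOLS = {}


def tool(fn):
    """Register an async callable under its function name."""
    TOOLS[fn.__name__] = fn
    return fn


@tool
async def add(a: int, b: int) -> int:
    return a + b


async def execute_tool_call(call_json: str):
    # Expects a payload like {"name": ..., "arguments": {...}}, the shape
    # most LLM providers use for tool/function calls.
    call = json.loads(call_json)
    return await TOOLS[call["name"]](**call["arguments"])


result = asyncio.run(execute_tool_call('{"name": "add", "arguments": {"a": 2, "b": 3}}'))
print(result)  # 5
```

Because the registry is provider-agnostic, the same tool definitions can back any provider that emits this call shape.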

Anyone who agrees with this pull request can submit an Approve review.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2025-11-06 13:29:26 -06:00

Reference: github-starred/awesome-python#2099