refactor: Update local RAG agent to use AgentOS and enhance knowledge management

- Replaced PDFUrlKnowledgeBase with Knowledge class for improved document management.
- Updated to use AgentOS for the application interface instead of Playground.
- Adjusted README to reflect changes in the interface and knowledge base loading process.
- Specified minimum version for the 'agno' package in requirements.txt.
Author: Shubhamsaboo
Date: 2025-11-09 14:41:25 -08:00
parent 7c532cd8d6
commit 9bc6394fae
3 changed files with 24 additions and 20 deletions

README

@@ -1,13 +1,14 @@
 ## 🦙 Local RAG Agent with Llama 3.2
-This application implements a Retrieval-Augmented Generation (RAG) system using Llama 3.2 via Ollama, with Qdrant as the vector database.
+This application implements a Retrieval-Augmented Generation (RAG) system using Llama 3.2 via Ollama, with Qdrant as the vector database. Built with Agno v2.0.
 ### Features
 - Fully local RAG implementation
 - Powered by Llama 3.2 through Ollama
 - Vector search using Qdrant
-- Interactive playground interface
+- Interactive AgentOS interface
 - No external API dependencies
+- Uses Agno v2.0 Knowledge class for document management
 ### How to get Started?
@@ -36,11 +37,15 @@ ollama pull llama3.2
 ollama pull openhermes
 ```
-4. Run the AI RAG Agent
+5. Run the AI RAG Agent
 ```bash
 python local_rag_agent.py
 ```
-5. Open your web browser and navigate to the URL provided in the console output to interact with the RAG agent through the playground interface.
+6. Open your web browser and navigate to the URL provided in the console output (typically `http://localhost:7777`) to interact with the RAG agent through the AgentOS interface.
+### Note
+- The knowledge base loads a Thai Recipes PDF on the first run. You can comment out the `knowledge_base.add_content()` line after the first run to avoid reloading.
+- The AgentOS interface provides a web-based UI for interacting with your agent.
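The "comment out after the first run" note above can also be automated instead of hand-editing the script. A minimal sketch of one way to do it, using a marker file; the `.kb_loaded` filename and both helper names are illustrative assumptions, not part of Agno:

```python
from pathlib import Path

# Hypothetical guard so the Thai Recipes PDF is only ingested once:
# a marker file records that the knowledge base was already loaded.
MARKER = Path(".kb_loaded")

def should_load_knowledge(marker: Path = MARKER) -> bool:
    """True only on the first run, before the marker file exists."""
    return not marker.exists()

def mark_knowledge_loaded(marker: Path = MARKER) -> None:
    """Record that ingestion finished so later runs skip it."""
    marker.touch()
```

With a guard like this, the ingestion call could be wrapped in `if should_load_knowledge(): ...` followed by `mark_knowledge_loaded()`, so repeat runs skip reloading without any code edits.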

local_rag_agent.py

@@ -1,10 +1,10 @@
 # Import necessary libraries
 from agno.agent import Agent
 from agno.models.ollama import Ollama
-from agno.knowledge.pdf_url import PDFUrlKnowledgeBase
+from agno.knowledge.knowledge import Knowledge
 from agno.vectordb.qdrant import Qdrant
-from agno.embedder.ollama import OllamaEmbedder
-from agno.playground import Playground, serve_playground_app
+from agno.knowledge.embedder.ollama import OllamaEmbedder
+from agno.os import AgentOS
 # Define the collection name for the vector database
 collection_name = "thai-recipe-index"
@@ -16,14 +16,15 @@ vector_db = Qdrant(
     embedder=OllamaEmbedder()
 )
-# Define the knowledge base with the specified PDF URL
-knowledge_base = PDFUrlKnowledgeBase(
-    urls=["https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
+# Define the knowledge base
+knowledge_base = Knowledge(
     vector_db=vector_db,
 )
-# Load the knowledge base, comment out after the first run to avoid reloading
-knowledge_base.load(recreate=True, upsert=True)
+# Add content to the knowledge base, comment out after the first run to avoid reloading
+knowledge_base.add_content(
+    url="https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"
+)
 # Create the Agent using Ollama's llama3.2 model and the knowledge base
 agent = Agent(
@@ -33,8 +34,9 @@ agent = Agent(
 )
 # UI for RAG agent
-app = Playground(agents=[agent]).get_app()
+agent_os = AgentOS(agents=[agent])
+app = agent_os.get_app()
-# Run the Playground app
+# Run the AgentOS app
 if __name__ == "__main__":
-    serve_playground_app("local_rag_agent:app", reload=True)
+    agent_os.serve(app="local_rag_agent:app", reload=True)

requirements.txt

@@ -1,7 +1,4 @@
-agno
+agno>=2.2.10
 qdrant-client
 ollama
+pypdf
-openai
-fastapi
-uvicorn
-pypdf
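The trimmed dependency list above can be installed in one step with the usual pip workflow (assuming Python and pip are already set up; quoting the version specifier keeps the shell from interpreting `>=`):

```shell
pip install "agno>=2.2.10" qdrant-client ollama pypdf
```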