
🦙 Local RAG Agent with Llama 3.2

This application implements a Retrieval-Augmented Generation (RAG) system using Llama 3.2 via Ollama, with Qdrant as the vector database.

Features

  • Fully local RAG implementation
  • Powered by Llama 3.2 through Ollama
  • Vector search using Qdrant
  • Interactive playground interface
  • No external API dependencies
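The core RAG loop behind these features — embed the query, retrieve the most similar chunks from the vector store, and prompt the model with them — can be sketched with a toy bag-of-words "embedding" standing in for the real Qdrant and Ollama calls. All names below are illustrative, not the app's actual API:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; the real app uses a neural embedding model.
    return Counter(w.strip(".?!,") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Stand-in for a Qdrant similarity search: rank docs by cosine score.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # Retrieved chunks are prepended so the LLM answers from them.
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Qdrant is a vector database for similarity search.",
    "Llama 3.2 is a family of open-weight language models.",
    "Ollama runs language models locally.",
]
context = retrieve("What is a vector database?", docs)
print(build_prompt("What is a vector database?", context))
```

In the actual app the `retrieve` step is a Qdrant query and `build_prompt`'s output is sent to Llama 3.2 through Ollama, but the data flow is the same.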

How to Get Started?

  1. Clone the GitHub repository:
git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
  2. Install the required dependencies:
cd awesome-llm-apps/rag_tutorials/local_rag_agent
pip install -r requirements.txt
  3. Install and start the Qdrant vector database locally:
docker pull qdrant/qdrant
docker run -p 6333:6333 qdrant/qdrant
  4. Install Ollama (from https://ollama.com) and pull Llama 3.2:
ollama pull llama3.2

  5. Run the RAG agent:
python local_rag_agent.py
  6. Open your web browser and navigate to the URL shown in the console output to interact with the RAG agent through the playground interface.
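Note that the `docker run` command in step 3 keeps Qdrant's data inside the container, so the indexed vectors are lost when the container is removed. If you want the collection to persist across restarts, a common variant (from Qdrant's own quick-start docs, not required by this tutorial) mounts a local directory as the storage volume:

```shell
# Persist Qdrant data in ./qdrant_storage on the host
docker run -p 6333:6333 \
    -v "$(pwd)/qdrant_storage:/qdrant/storage" \
    qdrant/qdrant
```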