💻 Local Llama-3.1 with RAG
Streamlit app that lets you chat with any webpage using a local Llama-3.1 model and Retrieval-Augmented Generation (RAG). Inference runs entirely on your computer, making it 100% free and private; an internet connection is only needed to fetch the webpage.
Features
- Input a webpage URL
- Ask questions about the content of the webpage
- Get accurate answers using RAG and the Llama-3.1 model running locally on your computer
How to Get Started?
- Clone the GitHub repository:

      git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
      cd awesome-llm-apps/rag_tutorials/llama3.1_local_rag

- Install the required dependencies:

      pip install -r requirements.txt

- Run the Streamlit app:

      streamlit run llama3.1_local_rag.py
How It Works?
- The app loads the webpage data using WebBaseLoader and splits it into chunks using RecursiveCharacterTextSplitter (see the ingestion sketch after this list).
- It creates Ollama embeddings with the OllamaEmbeddings class from the langchain-ollama package (the langchain_community version was deprecated in LangChain 0.3.1) and indexes them in a Chroma vector store.
- The app sets up a RAG (Retrieval-Augmented Generation) chain, which retrieves the document chunks most relevant to the user's question (see the chain sketch after this list).
- The Llama-3.1 model, served locally by Ollama, generates an answer from the retrieved context.
- The app displays the answer to the user's question.
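Below is a minimal sketch of the ingestion step described above. It is not the code from llama3.1_local_rag.py itself; it assumes LangChain's community loader and the langchain-ollama package (which replaces the deprecated OllamaEmbeddings), and the URL, chunk settings, and model tag are illustrative placeholders.

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_ollama import OllamaEmbeddings
from langchain_community.vectorstores import Chroma

# Fetch the page and parse it into LangChain Document objects.
docs = WebBaseLoader("https://example.com/article").load()

# Split into overlapping chunks; sizes here are illustrative,
# not necessarily the app's exact settings.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(docs)

# Embed each chunk with a local Ollama model and index it in Chroma.
# Requires a running Ollama server with the model pulled (`ollama pull llama3.1`).
embeddings = OllamaEmbeddings(model="llama3.1")
vectorstore = Chroma.from_documents(documents=chunks, embedding=embeddings)
```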
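And a sketch of the retrieval-and-generation step, reusing the `vectorstore` from the sketch above. The prompt wording, model tag, and retriever defaults are assumptions, not the app's exact choices.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_ollama import ChatOllama

# Turn the vector store into a retriever for the most relevant chunks.
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

# Local Llama-3.1 served by Ollama.
llm = ChatOllama(model="llama3.1")

def format_docs(docs):
    # Concatenate retrieved chunks into a single context string.
    return "\n\n".join(doc.page_content for doc in docs)

# LCEL pipeline: retrieve -> format -> prompt -> generate -> plain string.
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(rag_chain.invoke("What is this page about?"))
```

The dict at the head of the chain fans the question out to the retriever and the prompt in parallel, which is the standard LCEL pattern for a simple RAG chain.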