# Contextual AI RAG Agent

A Streamlit app that integrates Contextual AI's managed RAG platform. Create a datastore, ingest documents, spin up an agent, and chat with answers grounded in your data.
## Features
- Document ingestion to Contextual AI datastores
- Agent creation bound to one or more datastores
- Response generation via Contextual’s Grounded Language Model (GLM) for faithful, retrieval-grounded answers
- Reranking of retrieved documents by query relevance and custom instructions (multilingual)
- Retrieval visualization (shows attribution page images and metadata)
- LMUnit evaluation of answers using a custom rubric
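Under the hood, the chat feature amounts to sending the conversation to a Contextual AI agent's query endpoint. A minimal sketch of that flow, where the endpoint path, payload shape, environment-variable name, and agent ID placeholder are assumptions for illustration rather than a definitive API reference:

```python
# Hypothetical sketch of querying a Contextual AI agent over REST.
# Endpoint path and payload shape are illustrative assumptions.
import os

BASE_URL = "https://api.contextual.ai/v1"

def build_query_payload(question: str, history: list[dict]) -> dict:
    """Assemble the messages payload: prior turns plus the new user question."""
    return {"messages": history + [{"role": "user", "content": question}]}

payload = build_query_payload("What does the contract say about renewals?", [])
assert payload["messages"][-1]["role"] == "user"

api_key = os.environ.get("CONTEXTUAL_API_KEY")  # assumed variable name
if api_key:  # only reach out when a real key is configured
    import requests

    resp = requests.post(
        f"{BASE_URL}/agents/YOUR_AGENT_ID/query",  # agent id is a placeholder
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
        timeout=60,
    )
    print(resp.json())
```

The guard on `CONTEXTUAL_API_KEY` keeps the sketch runnable without credentials; with a key set, the same payload is posted to the agent.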
## Prerequisites

- A Contextual AI account and API key (Dashboard → API Keys)

### Generate an API key

1. Log in to your tenant at app.contextual.ai.
2. Click "API Keys".
3. Click "Create API Key".
4. Copy the key and paste it into the app sidebar when prompted.
## How to Run

1. Clone the repository and navigate to the app folder:

   ```bash
   git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
   cd awesome-llm-apps/rag_tutorials/contextualai_rag_agent
   ```

2. Create and activate a virtual environment.

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Launch the app:

   ```bash
   streamlit run contextualai_rag_agent.py
   ```
## Usage

1. In the sidebar, paste your Contextual AI API key. Optionally provide an existing Agent ID and/or Datastore ID if you already have them.
2. If needed, create a new datastore and upload PDFs or text files to ingest. The app waits until documents finish processing.
3. Create a new agent (or use an existing one) linked to the datastore.
4. Ask questions in the chat input. Responses are generated by your Contextual AI agent.
5. Optional advanced features:
   - Agent Settings: update the agent's system prompt via the UI.
   - Debug & Evaluation: toggle retrieval info to view attributions; run LMUnit evaluation on the last answer with a custom rubric.
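The LMUnit step scores an answer against a natural-language rubric. A minimal sketch of what that call might look like; the client import name, parameter names, and score scale are assumptions based on the app's description, so check them against Contextual's SDK docs before relying on them:

```python
# Sketch of an LMUnit evaluation of the last answer against a rubric.
import os

def build_lmunit_request(query: str, response: str, unit_test: str) -> dict:
    """Bundle the three inputs the app passes to LMUnit: the user question,
    the agent's answer, and the rubric to score it against."""
    return {"query": query, "response": response, "unit_test": unit_test}

rubric = "The answer cites the source document and avoids speculation."
req = build_lmunit_request(
    "What is the notice period for termination?",
    "Per section 4.2, the notice period is 30 days.",
    rubric,
)

if os.environ.get("CONTEXTUAL_API_KEY"):  # only call out when a key is set
    from contextual import ContextualAI  # assumed package/client name

    client = ContextualAI(api_key=os.environ["CONTEXTUAL_API_KEY"])
    result = client.lmunit.create(**req)
    print(result.score)  # assumed to be a numeric rubric-adherence score
```

Keeping the rubric short and testable (one criterion per run) tends to give more interpretable scores than a multi-clause rubric.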
## Configuration Notes

- If you're on a non-US cloud instance, set the Base URL in the sidebar (e.g., `https://api.contextual.ai/v1`). The app uses this base URL for all API calls, including readiness polling.
- Retrieval visualization uses `agents.query.retrieval_info` to fetch base64 page images and displays them directly.
- LMUnit evaluation uses `lmunit.create` to score the last answer against your rubric.
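Since `retrieval_info` returns page images as base64 strings, displaying them is just a decode step. A minimal sketch with a stand-in payload instead of a real API response (the field containing the base64 string is an assumption):

```python
import base64

def decode_page_image(b64_data: str) -> bytes:
    """Decode a base64-encoded page image into raw bytes, e.g. for st.image()."""
    return base64.b64decode(b64_data)

# Round-trip demo with fake bytes standing in for a real PNG from the API.
fake_png = b"\x89PNG\r\n\x1a\n...truncated..."
encoded = base64.b64encode(fake_png).decode("ascii")
assert decode_page_image(encoded) == fake_png
```

Streamlit's `st.image` accepts raw bytes, so the decoded value can be passed to it directly without writing a temporary file.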