feat: Add Contextual AI RAG agent

Jinash Rouniyar
2025-09-05 05:05:07 -04:00
parent 4045e92d8b
commit 84fe9fbe1a
4 changed files with 395 additions and 0 deletions

# Contextual AI RAG Agent
A Streamlit app that integrates Contextual AI's managed RAG platform. Create a datastore, ingest documents, spin up an agent, and chat with answers grounded in your data.
## Features
- Document ingestion to Contextual AI datastores
- Agent creation bound to one or more datastores
- Response generation via Contextual AI's Grounded Language Model (GLM) for faithful, retrieval-grounded answers
- Reranking of retrieved documents by query relevance and custom instructions (multilingual)
- Retrieval visualization (show attribution page image and metadata)
- LMUnit evaluation of answers using a custom rubric
## Prerequisites
- Contextual AI account and API key (Dashboard → API Keys)
### Generate an API key
1. Log in to your tenant at `app.contextual.ai`.
2. Click on "API Keys".
3. Click on "Create API Key".
4. Copy the key and paste it into the app sidebar when prompted.
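With a key in hand, the app can build its API client. A minimal sketch, assuming the `contextual-client` Python SDK (installed via `pip install contextual-client`); the `client_kwargs` helper is hypothetical glue for optionally passing a non-US base URL:

```python
from typing import Optional

def client_kwargs(api_key: str, base_url: Optional[str] = None) -> dict:
    """Arguments for the ContextualAI client; base_url is only
    needed on non-US cloud instances (see Configuration Notes)."""
    kwargs = {"api_key": api_key}
    if base_url:
        kwargs["base_url"] = base_url
    return kwargs

# Then, inside the app (requires the SDK):
#   from contextual import ContextualAI
#   client = ContextualAI(**client_kwargs(api_key, base_url))
```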
## How to Run
1. Clone the repository and navigate to the app folder:
```bash
git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
cd awesome-llm-apps/rag_tutorials/contextualai_rag_agent
```
2. Create and activate a virtual environment.
3. Install dependencies:
```bash
pip install -r requirements.txt
```
4. Launch the app:
```bash
streamlit run contextualai_rag_agent.py
```
## Usage
1. In the sidebar, paste your Contextual AI API key. Optionally provide an existing Agent ID and/or Datastore ID if you already have them.
2. If needed, create a new datastore. Upload PDFs or text files to ingest. The app waits until documents finish processing.
3. Create a new agent (or use an existing one) linked to the datastore.
4. Ask questions in the chat input. Responses are generated by your Contextual AI agent.
5. Optional advanced features:
- Agent Settings: Update the agent system prompt via the UI.
- Debug & Evaluation: Toggle retrieval info to view attributions; run LMUnit evaluation on the last answer with a custom rubric.
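The datastore → ingest → agent → chat flow above can be sketched as follows. This is a hedged sketch rather than the app's exact code: `ingestion_complete`, `build_rag_agent`, and `ask` are hypothetical helpers, and the SDK method names (`datastores.create`, `datastores.documents.ingest`, `agents.create`, `agents.query.create`) should be verified against the contextual-client documentation:

```python
def ingestion_complete(doc_statuses) -> bool:
    """True once every ingested document has finished processing
    (the app polls until this holds before chatting)."""
    return bool(doc_statuses) and all(s == "completed" for s in doc_statuses)

def build_rag_agent(client, name: str, file_path: str) -> str:
    """Create a datastore, ingest one file, and bind a new agent to it.
    `client` is a ContextualAI instance (duck-typed here)."""
    datastore = client.datastores.create(name=f"{name}-datastore")
    with open(file_path, "rb") as f:
        client.datastores.documents.ingest(datastore.id, file=f)
    agent = client.agents.create(name=name, datastore_ids=[datastore.id])
    return agent.id

def ask(client, agent_id: str, question: str) -> str:
    """Send one chat turn to the agent and return the grounded answer."""
    result = client.agents.query.create(
        agent_id=agent_id,
        messages=[{"role": "user", "content": question}],
    )
    return result.message.content
```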
## Configuration Notes
- If you're on a non-US cloud instance, set the Base URL in the sidebar (e.g., `https://api.contextual.ai/v1`). The app will use this base URL for all API calls, including readiness polling.
- Retrieval visualization uses `agents.query.retrieval_info` to fetch base64-encoded page images, which the app displays directly.
- LMUnit evaluation uses `lmunit.create` to score the last answer against your rubric.
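Both advanced features reduce to one SDK call each. A hedged sketch under the same assumptions as above; the helper names, and response field names such as `content_metadatas`, `page_img`, and `score`, are assumptions to check against the SDK docs:

```python
import base64

def decode_page_image(page_img_b64: str) -> bytes:
    """Decode one base64-encoded page image returned by retrieval_info."""
    return base64.b64decode(page_img_b64)

def attribution_images(client, agent_id: str, message_id: str, content_ids):
    """Fetch the page images behind one answer's attributions."""
    info = client.agents.query.retrieval_info(
        message_id, agent_id=agent_id, content_ids=content_ids
    )
    return [decode_page_image(m.page_img) for m in info.content_metadatas]

def score_answer(client, question: str, answer: str, rubric: str) -> float:
    """Score the last answer against a natural-language rubric via LMUnit."""
    result = client.lmunit.create(query=question, response=answer, unit_test=rubric)
    return result.score
```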