mirror of
https://github.com/Shubhamsaboo/awesome-llm-apps.git
synced 2026-04-30 23:31:31 -05:00
# Contextual AI RAG Agent

A Streamlit app that integrates with Contextual AI's managed RAG platform. Create a datastore, ingest documents, spin up an agent, and chat with answers grounded in your data.

## Features

- Document ingestion to Contextual AI datastores
- Agent creation bound to one or more datastores
- Response generation via Contextual's Grounded Language Model (GLM) for faithful, retrieval-grounded answers
- Reranking of retrieved documents by query relevance and custom instructions (multilingual)
- Retrieval visualization (shows the attribution page image and metadata)
- LMUnit evaluation of answers using a custom rubric

## Prerequisites

- Contextual AI account and API key (Dashboard → API Keys)

### Generate an API key

1. Log in to your tenant at `app.contextual.ai`.
2. Click "API Keys".
3. Click "Create API Key".
4. Copy the key and paste it into the app sidebar when prompted.

## How to Run

1. Clone the repository and navigate to the app folder:

   ```bash
   git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
   cd awesome-llm-apps/rag_tutorials/contextualai_rag_agent
   ```

2. Create and activate a virtual environment.

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Launch the app:

   ```bash
   streamlit run contextualai_rag_agent.py
   ```

## Usage

1) In the sidebar, paste your Contextual AI API key. Optionally provide an existing Agent ID and/or Datastore ID if you already have them.

2) If needed, create a new datastore. Upload PDFs or text files to ingest. The app waits until documents finish processing.

3) Create a new agent (or use an existing one) linked to the datastore.

4) Ask questions in the chat input. Responses are generated by your Contextual AI agent.

5) Optional advanced features:
   - Agent Settings: update the agent's system prompt via the UI.
   - Debug & Evaluation: toggle retrieval info to view attributions; run LMUnit evaluation on the last answer with a custom rubric.
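
The sidebar-driven flow above boils down to a handful of API calls. The sketch below shows plausible request shapes for the create-datastore, create-agent, and query steps; the endpoint paths and payload field names are assumptions for illustration, not verified Contextual AI API documentation.

```python
# Illustrative request builders for the flow the app automates.
# Endpoint paths and field names below are assumptions, not a
# verified Contextual AI API reference.

BASE_URL = "https://api.contextual.ai/v1"  # overridable for non-US tenants


def create_datastore_request(name: str) -> dict:
    """Assumed request shape for creating a datastore (step 2)."""
    return {"method": "POST", "url": f"{BASE_URL}/datastores", "json": {"name": name}}


def create_agent_request(name: str, datastore_ids: list) -> dict:
    """Assumed request shape for creating an agent linked to datastores (step 3)."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/agents",
        "json": {"name": name, "datastore_ids": list(datastore_ids)},
    }


def query_request(agent_id: str, question: str, history=()) -> dict:
    """Assumed request shape for one chat turn against the agent (step 4)."""
    messages = list(history) + [{"role": "user", "content": question}]
    return {
        "method": "POST",
        "url": f"{BASE_URL}/agents/{agent_id}/query",
        "json": {"messages": messages},
    }
```

In the real app these calls are issued by the Contextual AI client against the base URL configured in the sidebar; the sketch only makes the sequence of operations explicit.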

## Configuration Notes

- If you're on a non-US cloud instance, set the Base URL in the sidebar (e.g., `https://api.contextual.ai/v1`). The app uses this base URL for all API calls, including readiness polling.
- Retrieval visualization uses `agents.query.retrieval_info` to fetch base64 page images and displays them directly.
- LMUnit evaluation uses `lmunit.create` to score the last answer against your rubric.
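
The last two notes can be sketched as plain helpers: decoding a base64 page image into bytes for `st.image`, and assembling a rubric evaluation. This is illustrative only; the field carrying the image string and the LMUnit parameter names (`query`, `response`, `unit_test`) are assumptions, not verified SDK signatures.

```python
import base64


def decode_page_image(b64_image: str) -> bytes:
    """Decode a base64-encoded attribution page image into raw bytes
    suitable for passing to st.image()."""
    return base64.b64decode(b64_image)


def lmunit_request(question: str, answer: str, rubric: str) -> dict:
    """Assumed payload shape for an LMUnit rubric evaluation of one answer."""
    return {"query": question, "response": answer, "unit_test": rubric}


# Round-trip a dummy "page image" to show the decode step.
fake_page = b"\x89PNG\r\n fake image bytes"
encoded = base64.b64encode(fake_page).decode("ascii")
assert decode_page_image(encoded) == fake_page
```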