[PR #638] [MERGED] Add Trust-Gated Multi-Agent Research Team with Cryptographic Audit Trail #4120

Closed
opened 2026-05-07 10:00:56 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/Shubhamsaboo/awesome-llm-apps/pull/638
Author: @vinaybhosle
Created: 3/24/2026
Status: Merged
Merged: 4/16/2026
Merged by: @Shubhamsaboo

Base: `main` ← Head: `trust-gated-agent-team-v2`


📝 Commits (2)

  • 390a849 feat: Add Trust-Gated Multi-Agent Research Team with Cryptographic Audit Trail
  • 0d654a4 fix: remove promo footer from trust-gated agent tutorial

📊 Changes

4 files changed (+772 additions, -0 deletions)

View changed files

📝 README.md (+1 -0)
advanced_ai_agents/multi_agent_apps/trust_gated_agent_team/README.md (+117 -0)
advanced_ai_agents/multi_agent_apps/trust_gated_agent_team/requirements.txt (+3 -0)
advanced_ai_agents/multi_agent_apps/trust_gated_agent_team/trust_gated_agents.py (+651 -0)

📄 Description

Summary

  • Streamlit app where GPT-4o-mini agents must pass a trust verification before participating in a research pipeline
  • Every agent action is recorded in a SHA-256 hash-chained audit trail — tamper-evident and independently verifiable
  • Fully self-contained: only openai and streamlit as dependencies. No external services required to run
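
The hash-chaining pattern described above can be sketched in a few lines; this is an illustrative minimal version, not the tutorial's actual code (class and field names are assumptions):

```python
import hashlib
import json

class AuditChain:
    """Append-only log where each entry's hash covers the previous entry's hash."""

    def __init__(self):
        self.entries = []

    def record(self, agent, action):
        # Genesis entry links to a well-known all-zero hash
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"agent": agent, "action": action, "prev_hash": prev_hash}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry

chain = AuditChain()
chain.record("Researcher", "gathered sources")
chain.record("Analyst", "summarized findings")
```

Because each entry hashes its predecessor's hash, editing any earlier entry breaks every link after it, which is what makes the trail tamper-evident.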

Rework of #631 — addresses all feedback from @awesomekoder:

  • Core logic is completely self-contained (no proprietary service dependency)
  • Trust registry, audit chain, and pipeline all live in one file
  • No wallet addresses or external API calls needed

What's New vs the Existing Trust Layer (#467)

The existing multi_agent_trust_layer tutorial covers trust scoring + delegation chains. This tutorial adds a different concept: hash-chained audit trails — the blockchain-style tamper-evident logging pattern applied to AI agent actions.

| Existing Trust Layer | This Tutorial |
|---|---|
| CLI-based | Streamlit UI with visual dashboard |
| Trust scoring + delegation chains | Trust gating + hash-chained audit trail |
| No audit export | Verifiable JSON export with chain integrity check |

How to Test

```bash
pip install -r requirements.txt
streamlit run trust_gated_agents.py
```
  1. Paste OpenAI API key in the sidebar
  2. Click Run Trust-Gated Pipeline — agents are pre-selected via dropdowns
  3. Researcher (score 75) and Analyst (score 60) pass; Untrusted Bot (score 5) gets blocked
  4. Scroll down to see the audit trail with chain verification
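
The gate decision in steps 2–3 amounts to a score lookup against a threshold. A minimal sketch using the scores above (the threshold default of 50 and the function name are assumptions, not the app's actual values):

```python
# Trust scores from the tutorial's default agents
TRUST_REGISTRY = {"Researcher": 75, "Analyst": 60, "Untrusted Bot": 5}

# Hypothetical default; the app exposes this as a sidebar slider
THRESHOLD = 50

def trust_gate(agent, threshold=THRESHOLD):
    """Return True if the agent's trust score meets the threshold."""
    return TRUST_REGISTRY.get(agent, 0) >= threshold

passed = [agent for agent in TRUST_REGISTRY if trust_gate(agent)]
# Researcher and Analyst pass; Untrusted Bot is blocked
```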

Test Plan

  • pip install -r requirements.txt succeeds with only openai + streamlit
  • App launches with streamlit run trust_gated_agents.py
  • Default agent selection shows the trust gate blocking the untrusted bot
  • Swapping Writer to "Report Writer" lets all 3 agents through
  • Adjusting threshold slider changes which agents pass/fail
  • Audit trail shows chain integrity verification
  • Export JSON button produces valid, verifiable audit data
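
The "verifiable" part of the export means anyone can recompute the chain offline. A hedged sketch of such a checker, assuming entries carry `agent`, `action`, `prev_hash`, and `hash` fields (the actual schema may differ):

```python
import hashlib
import json

def verify_chain(entries):
    """Recompute each entry's hash and check that prev_hash links are intact."""
    prev = "0" * 64  # expected genesis link
    for entry in entries:
        if entry["prev_hash"] != prev:
            return False  # broken link to predecessor
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False  # entry contents were altered
        prev = entry["hash"]
    return True
```

Any single-field edit to an exported entry changes its recomputed hash, so `verify_chain` fails on both content tampering and link tampering.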

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-05-07 10:00:56 -05:00

Reference: github-starred/awesome-llm-apps#4120