feat: Update GitHub MCP Agent to use official GitHub MCP Server via Docker

- Replaced Node.js MCP server with the official GitHub MCP server running in Docker.
- Added OpenAI API key input for enhanced query processing.
- Updated README to reflect new requirements and usage instructions.
Author: Shubhamsaboo
Date: 2025-10-04 10:09:02 -07:00
Commit: e55a2eb5e6 (parent: 84da05a52c)
2 changed files with 51 additions and 50 deletions

README.md

@@ -2,6 +2,8 @@
A Streamlit application that allows you to explore and analyze GitHub repositories using natural language queries through the Model Context Protocol (MCP).
**✨ Now using the official [GitHub MCP Server](https://github.com/github/github-mcp-server) from GitHub!**
## Features
- **Natural Language Interface**: Ask questions about repositories in plain English
@@ -15,11 +17,11 @@ A Streamlit application that allows you to explore and analyze GitHub repositori
### Requirements
- Python 3.8+
- - Node.js and npm (for MCP GitHub server)
-   - This is a critical requirement! The app uses `npx` to run the MCP GitHub server
-   - Download and install from [nodejs.org](https://nodejs.org/)
- - GitHub Personal Access Token with appropriate permissions
+ - Docker (for the official GitHub MCP server)
+   - Download and install from [docker.com](https://www.docker.com/get-started)
+   - Make sure Docker is running before starting the app
+ - OpenAI API Key
+ - GitHub Personal Access Token
### Installation
@@ -34,35 +36,26 @@ A Streamlit application that allows you to explore and analyze GitHub repositori
pip install -r requirements.txt
```
- 3. Verify Node.js and npm are installed:
+ 3. Verify Docker is installed and running:
```bash
- node --version
- npm --version
- npx --version
+ docker --version
+ docker ps
```
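The same check can be scripted; a minimal sketch using only the Python standard library (the helper name is illustrative, not part of the app):

```python
import shutil
import subprocess

def docker_available() -> bool:
    """True if the docker CLI is on PATH and the daemon answers `docker ps`."""
    if shutil.which("docker") is None:
        return False
    try:
        # `docker ps` only succeeds when the daemon is actually running
        subprocess.run(["docker", "ps"], check=True,
                       capture_output=True, timeout=10)
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError):
        return False
```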
- All of these commands should return version numbers. If they don't, please install Node.js.
- 4. Set up your API keys:
-    - Set OpenAI API Key as an environment variable:
-      ```bash
-      export OPENAI_API_KEY=your-openai-api-key
-      ```
-    - GitHub token will be entered directly in the app interface
- 5. Create a GitHub Personal Access Token:
-    - Visit https://github.com/settings/tokens
-    - Create a new token with `repo` and `user` scopes
-    - Save the token somewhere secure
+ 4. Get your API keys:
+    - **OpenAI API Key**: Get from [platform.openai.com/api-keys](https://platform.openai.com/api-keys)
+    - **GitHub Token**: Create at [github.com/settings/tokens](https://github.com/settings/tokens) with `repo` scope
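Both keys end up as environment variables that the agent checks before each run. A sketch of that guard (the function is illustrative; the app performs the same checks inline):

```python
import os

def missing_credentials(env=None):
    """Return the names of required credentials that are not set."""
    env = os.environ if env is None else env
    required = ("OPENAI_API_KEY", "GITHUB_TOKEN")
    return [name for name in required if not env.get(name)]
```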
### Running the App
1. Start the Streamlit app:
```bash
- streamlit run app.py
+ streamlit run github_agent.py
```
2. In the app interface:
- - Enter your GitHub token in the sidebar
+ - Enter your OpenAI API key
+ - Enter your GitHub token
- Specify a repository to analyze
- Select a query type or write your own
- Click "Run Query"
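When a query runs, the app spawns the official server with a `docker run` command whose arguments appear in the `github_agent.py` diff below; a sketch collecting them (the helper function itself is hypothetical):

```python
def github_mcp_docker_args():
    """Argument list passed to `docker` to start the official GitHub MCP server."""
    return [
        "run", "-i", "--rm",                   # interactive stdio, remove container on exit
        "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",  # forwarded from the parent environment
        "-e", "GITHUB_TOOLSETS",               # e.g. "repos,issues,pull_requests"
        "ghcr.io/github/github-mcp-server",    # official image
    ]
```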

github_agent.py

@@ -7,19 +7,21 @@ from agno.tools.mcp import MCPTools
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
# Page config
st.set_page_config(page_title="🐙 GitHub MCP Agent", page_icon="🐙", layout="wide")
# Title and description
st.markdown("<h1 class='main-header'>🐙 GitHub MCP Agent</h1>", unsafe_allow_html=True)
st.markdown("Explore GitHub repositories with natural language using the Model Context Protocol")
# Setup sidebar for API key
with st.sidebar:
st.header("🔑 Authentication")
openai_key = st.text_input("OpenAI API Key", type="password",
help="Required for the AI agent to interpret queries and format results")
if openai_key:
os.environ["OPENAI_API_KEY"] = openai_key
github_token = st.text_input("GitHub Token", type="password",
help="Create a token with repo scope at github.com/settings/tokens")
if github_token:
os.environ["GITHUB_TOKEN"] = github_token
@@ -41,7 +43,6 @@ with st.sidebar:
st.markdown("---")
st.caption("Note: Always specify the repository in your query if not already selected in the main input.")
# Query input
col1, col2 = st.columns([3, 1])
with col1:
repo = st.text_input("Repository", value="Shubhamsaboo/awesome-llm-apps", help="Format: owner/repo")
@@ -50,7 +51,6 @@ with col2:
"Issues", "Pull Requests", "Repository Activity", "Custom"
])
# Create predefined queries based on type
if query_type == "Issues":
query_template = f"Find issues labeled as bugs in {repo}"
elif query_type == "Pull Requests":
@@ -63,25 +63,34 @@ else:
query = st.text_area("Your Query", value=query_template,
placeholder="What would you like to know about this repository?")
# Main function to run agent
async def run_github_agent(message):
if not os.getenv("GITHUB_TOKEN"):
return "Error: GitHub token not provided"
if not os.getenv("OPENAI_API_KEY"):
return "Error: OpenAI API key not provided"
try:
server_params = StdioServerParameters(
- command="npx",
- args=["-y", "@modelcontextprotocol/server-github"],
+ command="docker",
+ args=[
+ "run", "-i", "--rm",
+ "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
+ "-e", "GITHUB_TOOLSETS",
+ "ghcr.io/github/github-mcp-server"
+ ],
env={
**os.environ,
"GITHUB_PERSONAL_ACCESS_TOKEN": os.getenv('GITHUB_TOKEN'),
"GITHUB_TOOLSETS": "repos,issues,pull_requests"
}
)
# Create client session
async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write) as session:
# Initialize MCP toolkit
mcp_tools = MCPTools(session=session)
await mcp_tools.initialize()
# Create agent
agent = Agent(
tools=[mcp_tools],
instructions=dedent("""\
@@ -93,24 +102,25 @@ async def run_github_agent(message):
- Include links to relevant GitHub pages when helpful
"""),
markdown=True,
show_tool_calls=True,
)
# Run agent
- response = await agent.arun(message)
+ response = await asyncio.wait_for(agent.arun(message), timeout=120.0)
return response.content
+ except asyncio.TimeoutError:
+ return "Error: Request timed out after 120 seconds"
except Exception as e:
return f"Error: {str(e)}"
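The timeout added in this hunk is the standard `asyncio.wait_for` pattern; a self-contained sketch of the same guard:

```python
import asyncio

async def slow_call():
    # Stand-in for agent.arun(message): takes longer than the deadline allows
    await asyncio.sleep(5)
    return "done"

async def guarded():
    # Same shape as the agent call: cancel and report if the deadline passes
    try:
        return await asyncio.wait_for(slow_call(), timeout=0.05)
    except asyncio.TimeoutError:
        return "Error: Request timed out"

result = asyncio.run(guarded())
print(result)
```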
# Run button
if st.button("🚀 Run Query", type="primary", use_container_width=True):
- if not github_token:
+ if not openai_key:
+ st.error("Please enter your OpenAI API key in the sidebar")
+ elif not github_token:
st.error("Please enter your GitHub token in the sidebar")
elif not query:
st.error("Please enter a query")
else:
with st.spinner("Analyzing GitHub repository..."):
# Ensure the repository is explicitly mentioned in the query
if repo and repo not in query:
full_query = f"{query} in {repo}"
else:
@@ -118,32 +128,30 @@ if st.button("🚀 Run Query", type="primary", use_container_width=True):
result = asyncio.run(run_github_agent(full_query))
# Display results in a nice container
st.markdown("### Results")
st.markdown(result)
# Display help text for first-time users
if 'result' not in locals():
st.markdown(
"""<div class='info-box'>
<h4>How to use this app:</h4>
<ol>
- <li>Enter your GitHub token in the sidebar</li>
+ <li>Enter your <strong>OpenAI API key</strong> in the sidebar (powers the AI agent)</li>
+ <li>Enter your <strong>GitHub token</strong> in the sidebar</li>
<li>Specify a repository (e.g., Shubhamsaboo/awesome-llm-apps)</li>
<li>Select a query type or write your own</li>
<li>Click 'Run Query' to see results</li>
</ol>
- <p><strong>Important Notes:</strong></p>
+ <p><strong>How it works:</strong></p>
<ul>
- <li>The Model Context Protocol (MCP) provides real-time access to GitHub repositories</li>
- <li>Queries work best when they focus on specific aspects like issues, PRs, or repository info</li>
- <li>More specific queries yield better results</li>
- <li>This app requires Node.js to be installed (for the npx command)</li>
+ <li>Uses the official GitHub MCP server via Docker for real-time access to the GitHub API</li>
+ <li>An AI agent (powered by OpenAI) interprets your queries and calls the appropriate GitHub APIs</li>
+ <li>Results are formatted in readable markdown with insights and links</li>
+ <li>Queries work best when focused on specific aspects like issues, PRs, or repository info</li>
</ul>
</div>""",
unsafe_allow_html=True
)
# Footer
st.markdown("---")
st.write("Built with Streamlit, Agno, and Model Context Protocol ❤️")