refactor(ai): remove OpenAI dependency, use only Ollama

- Remove OpenAI API key requirements and dependencies
- Make Ollama the default and only AI option
- Simplify workflow to use only local AI processing
- Remove OpenAI fallback logic
- Update documentation to reflect Ollama-only approach
- Clean up Python dependencies (remove openai package)

This eliminates external API dependencies and simplifies
the AI integration to use only local Ollama models.
Author: Vijay Janapa Reddi
Date: 2025-08-02 11:25:31 -04:00
Parent: a623867919
Commit: 2ba8b3890a

3 changed files with 21 additions and 46 deletions

View File

@@ -342,7 +342,7 @@ jobs:
   # Install Python dependencies
   python -m pip install --upgrade pip
-  python -m pip install openai requests
+  python -m pip install requests
 
   # Install Ollama
   echo "🤖 Installing Ollama..."
@@ -382,20 +382,9 @@ jobs:
   echo "✅ AI release notes generated successfully"
   cat "release_notes_${{ steps.version.outputs.new_version }}.md"
 else
-  echo "⚠️ AI release notes generation failed, trying fallback..."
-  # Try without AI (basic analysis)
-  echo "🔄 Attempting basic release notes generation..."
-  python tools/scripts/maintenance/update_changelog.py \
-    --release-notes \
-    --version ${{ steps.version.outputs.new_version }} \
-    --previous-version ${{ steps.version.outputs.previous_version }} \
-    --description "${{ github.event.inputs.description }}" \
-    --openai \
-    --verbose || {
-    echo "⚠️ Basic generation also failed, using template"
-    # Fallback to basic release notes template
-    cat > "release_notes_${{ steps.version.outputs.new_version }}.md" << EOF
+  echo "⚠️ AI release notes generation failed, using template"
+  # Fallback to basic release notes template
+  cat > "release_notes_${{ steps.version.outputs.new_version }}.md" << EOF
 
 ## 📚 Release ${{ steps.version.outputs.new_version }}
 **${{ github.event.inputs.description }}**
@@ -423,7 +412,6 @@ jobs:
 ---
 *Note: AI analysis was unavailable, using template release notes*
 EOF
-  }
 fi
 
 - name: 📦 Create GitHub Release with PDF
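As an illustration of the simplified control flow above (the real logic is shell inside the workflow), here is a rough Python sketch of the one-attempt-then-template behavior; `write_release_notes` and `generate_ai_notes` are hypothetical names for illustration, not code from this repository:

```python
# Hypothetical sketch, not the workflow's implementation: one AI attempt,
# then a static template on any failure (no second "basic" AI pass).
def write_release_notes(version: str, description: str, generate_ai_notes) -> str:
    """Write release notes to release_notes_<version>.md and return the path."""
    try:
        notes = generate_ai_notes(version, description)  # stand-in for the AI step
    except Exception:
        notes = (
            f"## 📚 Release {version}\n\n"
            f"**{description}**\n\n"
            "---\n"
            "*Note: AI analysis was unavailable, using template release notes*\n"
        )
    path = f"release_notes_{version}.md"
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(notes)
    return path
```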

View File

@@ -114,7 +114,7 @@ This analyzes:
 ### AI Model Configuration
 
-The workflow uses Ollama with configurable AI models:
+The workflow uses Ollama (local AI) with configurable models:
 
 - **Default**: `gemma2:9b` (fast, good quality)
 - **Alternative**: `gemma2:27b` (better quality, slower)
@@ -125,6 +125,8 @@ You can specify the model in the workflow inputs:
 - Set "AI model" field to your preferred model
 - Leave empty for default (`gemma2:9b`)
 
+**No API keys required** - all AI processing happens locally via Ollama.
+
 ### Manual Release Notes Workflow
 
 1. **Run publish-live workflow** → Creates draft release with AI notes
View File

@@ -8,11 +8,8 @@ import requests
 import json
 from collections import defaultdict
 from datetime import datetime
-from openai import OpenAI
-
-# Initialize OpenAI client (will be None if not using OpenAI)
-client = None
-use_ollama = False  # Global flag to track which service to use
+# Initialize Ollama as default
+use_ollama = True  # Global flag to track which service to use
 
 CHANGELOG_FILE = "CHANGELOG.md"
 QUARTO_YML_FILE = "book/config/_quarto-pdf.yml"  # Default to PDF config which has chapters structure
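The script's `call_ollama` helper is not shown in this diff. As a minimal sketch, a helper compatible with the call sites below (keyword arguments `model` and `verbose`, `None` returned on any failure) might look like this, assuming Ollama's standard REST API:

```python
# Sketch only: an Ollama helper consistent with how call_ollama is invoked
# below. Assumes Ollama's standard REST API on localhost:11434.
import requests

def call_ollama(prompt, model="gemma2:9b", verbose=False):
    try:
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=300,
        )
        resp.raise_for_status()
        text = resp.json().get("response")
    except (requests.RequestException, ValueError):
        return None  # Callers treat None as "Ollama unreachable"
    if verbose and text is not None:
        print(f"🤖 Ollama ({model}) returned {len(text)} characters")
    return text
```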
@@ -752,7 +749,6 @@ if __name__ == "__main__":
     parser.add_argument("--demo", action="store_true", help="Generate a demo changelog entry with sample data.")
     parser.add_argument("-v", "--verbose", action="store_true", help="Verbose output.")
     parser.add_argument("-q", "--quarto-config", type=str, help="Path to quarto config file (default: book/config/_quarto-pdf.yml)")
-    parser.add_argument("--openai", action="store_true", help="Use OpenAI for summarization instead of Ollama (default).")
     parser.add_argument("-m", "--model", type=str, default="gemma2:9b", help="Ollama model to use (default: gemma2:9b). Popular options: gemma2:9b, gemma2:27b, llama3.1:8b, llama3.1:70b")
     parser.add_argument("--release-notes", action="store_true", help="Generate release notes instead of changelog entry.")
     parser.add_argument("--version", type=str, help="Version for release notes (required with --release-notes).")
@@ -805,10 +801,7 @@ if __name__ == "__main__":
     print("📝 CHANGELOG GENERATION CONFIG")
     print("=" * 60)
     print(f"🎯 Mode: {mode.upper()}")
-    if args.openai:
-        print("🤖 AI Model: OpenAI GPT")
-    else:
-        print(f"🤖 AI Model: {args.model} (via Ollama)")
+    print(f"🤖 AI Model: {args.model} (via Ollama)")
     print(f"🔧 Test Mode: {'ON' if args.test else 'OFF'}")
     print(f"📢 Verbose: {'ON' if args.verbose else 'OFF'}")
     print(f"📋 Features: Impact bars, importance sorting, specific summaries")
@@ -817,25 +810,17 @@ if __name__ == "__main__":
     print(f"🚀 Starting changelog generation in {mode} mode...")
 
-    if args.openai:
-        print("🤖 Using OpenAI for summarization.")
-        use_ollama = False
-        # Initialize OpenAI client
-        client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
-        if not os.getenv("OPENAI_API_KEY"):
-            raise ValueError("OPENAI_API_KEY not set. Please set it in your environment variables.")
-    else:
-        print(f"🤖 Using Ollama for summarization with model: {args.model}")
-        use_ollama = True
-
-        # Test Ollama connection
-        test_response = call_ollama("Hello", model=args.model, verbose=False)
-        if test_response is None:
-            print("❌ Failed to connect to Ollama. Make sure it's running on localhost:11434")
-            print("💡 To install models in Ollama:")
-            print("   ollama pull gemma2:9b")
-            print("   ollama pull gemma2:27b")
-            exit(1)
-        print("✅ Ollama connection successful")
+    print(f"🤖 Using Ollama for summarization with model: {args.model}")
+    use_ollama = True
+
+    # Test Ollama connection
+    test_response = call_ollama("Hello", model=args.model, verbose=False)
+    if test_response is None:
+        print("❌ Failed to connect to Ollama. Make sure it's running on localhost:11434")
+        print("💡 To install models in Ollama:")
+        print("   ollama pull gemma2:9b")
+        print("   ollama pull gemma2:27b")
+        exit(1)
+    print("✅ Ollama connection successful")
 
     if mode == "release_notes":
         # Generate release notes