Examples & Use Cases
See real-world AI code reviews with actual costs, analysis depth, and practical outcomes across different project types and scenarios.
Real-World Review Examples
Large Framework Change
FastAPI Packaging Change - Architectural impact analysis
- Cost: $0.034
- Model: claude-sonnet-4
- Files Changed: 12 files, 150+ lines
- Focus: Architectural impact, dependency management, breaking changes
- Key Findings: Identified potential breaking changes, suggested migration strategies
Why this example matters: Shows how kit handles complex architectural changes with full repository context, identifying cross-module impacts that diff-only tools miss.
Frontend UI Enhancement
React.dev UI Feature - Accessibility-focused review
- Cost: $0.012
- Model: gpt-4.1
- Files Changed: 6 files, 85 lines
- Focus: Accessibility, component design, user experience
- Key Findings: Accessibility improvements, component reusability suggestions
Why this example matters: Demonstrates kit's ability to provide specialized feedback on UI/UX concerns, not just technical correctness.
Documentation Update
BioPython Documentation Fix - Proportional response
- Cost: $0.006
- Model: gpt-4.1-mini
- Files Changed: 2 files, 15 lines
- Focus: Documentation clarity, example accuracy
- Key Findings: Minor suggestions for clarity, validation of examples
Why this example matters: Shows how kit provides proportional feedback - thorough but concise for documentation changes.
Multi-Model Comparison
Model Comparison Analysis - Cost vs quality analysis
Compares the same PR reviewed with different models:
- GPT-4.1-nano: $0.004 - High-level issues
- GPT-4.1: $0.034 - Detailed analysis
- Claude Sonnet: $0.087 - Comprehensive review
- Claude Opus: $0.156 - Architectural insights
Why this example matters: Helps teams choose the right model for their budget and quality requirements.
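To run the same comparison against one of your own PRs, a minimal sketch is shown below. It assumes the `--dry-run` flag and the "Total cost:" summary line that the dashboard metrics example later on this page parses; the loop simply re-estimates the same PR with each model and posts nothing.

```bash
# Minimal sketch: estimate the same PR with several models via --dry-run.
# Assumes the dry-run summary includes a "Total cost:" line (see the
# dashboard metrics example below); no review comment is posted.
PR_URL="$1"

for MODEL in gpt-4.1-nano gpt-4.1 claude-sonnet-4 claude-opus-4; do
  echo "=== $MODEL ==="
  kit review --dry-run --model "$MODEL" "$PR_URL" 2>&1 | grep -i "total cost" || true
done
```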
Use Case Scenarios
Security-Critical Changes
```bash
# Use security-focused profile with premium model
kit review --profile security-standards \
  --model claude-opus-4 \
  --priority=high \
  https://github.com/company/auth-service/pull/234
```
Typical output focus:
- Input validation vulnerabilities
- Authentication/authorization issues
- Secrets management problems
- Dependency security concerns
- Logging of sensitive data
High-Volume Development
```bash
# Cost-optimized for daily reviews
kit review --model gpt-4.1-nano \
  --priority=high,medium \
  https://github.com/company/api/pull/456
```
Benefits:
- Reviews at ~$0.002-0.015 per PR
- Focus on important issues only
- Fast turnaround for daily workflow
- Sustainable for 100+ PRs/month
Large Refactoring
```bash
# Comprehensive analysis for major changes
kit review --model claude-sonnet-4 \
  --profile architecture-standards \
  https://github.com/company/core/pull/789
```
Typical output focus:
- Cross-module impact analysis
- Breaking change identification
- Performance implications
- Backward compatibility concerns
- Migration strategy suggestions
Code Quality Focus
```bash
# Emphasize style and improvements
kit review --priority=low \
  --profile code-quality \
  --model gpt-4.1-mini \
  https://github.com/company/utils/pull/101
```
Typical output focus:
- Code style improvements
- Refactoring opportunities
- Documentation enhancements
- Test coverage suggestions
- Performance optimizations
Team Workflow Examples
Startup Team (Budget-Conscious)
```yaml
name: Budget AI Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - name: AI Review
        run: |
          pip install cased-kit
          # Use ultra-budget model for all PRs
          kit review --model gpt-4.1-nano \
            --priority=high,medium \
            ${{ github.event.pull_request.html_url }}
```
Results:
- Cost: ~$5-15/month for 500 PRs
- Coverage: Critical and important issues
- Speed: Fast reviews, good for rapid iteration
Enterprise Team (Quality-Focused)
```yaml
name: Enterprise AI Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - name: AI Review with Smart Selection
        run: |
          pip install cased-kit

          # Different models based on target branch
          if [ "${{ github.event.pull_request.base.ref }}" == "main" ]; then
            MODEL="claude-sonnet-4"
            PROFILE="production-standards"
          else
            MODEL="gpt-4.1"
            PROFILE="development-standards"
          fi

          kit review --model "$MODEL" \
            --profile "$PROFILE" \
            ${{ github.event.pull_request.html_url }}
```
Results:
- Cost: ~$50-150/month for 500 PRs
- Coverage: Comprehensive analysis
- Quality: High-quality, detailed feedback
Open Source Project
```yaml
name: Community AI Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    # Only review PRs from outside contributors
    if: github.event.pull_request.head.repo.full_name != github.repository
    steps:
      - name: Community PR Review
        run: |
          pip install cased-kit
          # Focus on contribution guidelines
          kit review --profile community-standards \
            --model gpt-4.1-mini \
            ${{ github.event.pull_request.html_url }}
```
Results:
- Purpose: Help external contributors
- Focus: Style, testing, documentation
- Cost: Minimal, only for external PRs
DevSecOps Team (Security-First)
```yaml
name: Security-Focused Review
on:
  pull_request:
    types: [opened, synchronize]
    paths:
      - 'src/auth/**'
      - 'src/api/**'
      - '**/*security*'
      - '**/*auth*'

jobs:
  security-review:
    runs-on: ubuntu-latest
    steps:
      - name: Security Review
        run: |
          pip install cased-kit
          # Premium model for security-critical code
          kit review --model claude-opus-4 \
            --profile security-hardening \
            --priority=high \
            ${{ github.event.pull_request.html_url }}
```
Results:
- Focus: Security vulnerabilities only
- Quality: Maximum thoroughness for critical code
- Cost: Higher per review, but targeted scope
Cost Analysis Examples
Monthly Budget Planning
Small Team (10 developers, 200 PRs/month):
```bash
# Budget option: ~$10-30/month
kit review --model gpt-4.1-nano --priority=high,medium

# Balanced option: ~$20-60/month
kit review --model gpt-4.1-mini

# Premium option: ~$60-180/month
kit review --model claude-sonnet-4
```
Large Team (50 developers, 1000 PRs/month):
```bash
# Smart tiering based on PR size
small_pr="gpt-4.1-nano"     # <5 files:   ~$2-8/month per dev
medium_pr="gpt-4.1-mini"    # 5-20 files: ~$10-30/month per dev
large_pr="claude-sonnet-4"  # >20 files:  ~$20-60/month per dev

# Total: ~$32-98/month per developer
```
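One way to automate this tiering is to branch on the number of changed files. The sketch below is an illustrative assumption-laden example: it relies on the GitHub CLI (`gh`) being installed and authenticated, and the thresholds simply mirror the tiers above.

```bash
# Hedged sketch: choose a model by PR size using the GitHub CLI (gh).
# Assumes gh is installed and authenticated for the repository.
PR_URL="$1"
FILES=$(gh pr view "$PR_URL" --json changedFiles --jq '.changedFiles')

if [ "$FILES" -lt 5 ]; then
  MODEL="gpt-4.1-nano"      # small PR
elif [ "$FILES" -le 20 ]; then
  MODEL="gpt-4.1-mini"      # medium PR
else
  MODEL="claude-sonnet-4"   # large PR
fi

kit review --model "$MODEL" --priority=high,medium "$PR_URL"
```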
ROI Analysis Examples
Bug Prevention:
- Cost: $50/month for AI reviews
- Prevented: 2-3 production bugs/month
- Savings: $2000-15000 in bug fix costs
- ROI: 40-300x return on investment
Code Quality Improvement:
- Cost: $100/month for comprehensive reviews
- Result: 25% reduction in tech debt accumulation
- Savings: Faster development velocity
- ROI: Pays for itself in reduced maintenance time
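The arithmetic behind these estimates is easy to rerun with your own figures; here is a quick sketch with purely illustrative inputs:

```bash
# Back-of-the-envelope ROI calculation; all inputs are illustrative.
REVIEW_COST=50        # monthly AI review spend ($)
BUGS_PREVENTED=2      # production bugs avoided per month (estimate)
COST_PER_BUG=2000     # average cost to diagnose and fix a production bug ($)

SAVINGS=$((BUGS_PREVENTED * COST_PER_BUG))
ROI=$((SAVINGS / REVIEW_COST))
echo "Monthly savings: \$${SAVINGS} (~${ROI}x the review spend)"
```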
Integration Examples
Slack Notifications
```bash
#!/bin/bash
REVIEW=$(kit review -p --priority=high "$1")
CRITICAL_COUNT=$(echo "$REVIEW" | grep -c "High Priority")

if [ "$CRITICAL_COUNT" -gt 0 ]; then
  curl -X POST "$SLACK_WEBHOOK" \
    -H 'Content-type: application/json' \
    --data '{
      "text": "🚨 Critical issues found in PR '"$1"'",
      "attachments": [{
        "color": "danger",
        "text": "'"$(echo "$REVIEW" | head -500)"'"
      }]
    }'
else
  curl -X POST "$SLACK_WEBHOOK" \
    -H 'Content-type: application/json' \
    --data '{ "text": "✅ PR '"$1"' looks good to go!" }'
fi
```
Dashboard Metrics
```python
#!/usr/bin/env python3
import subprocess
import requests
from datetime import datetime

def collect_review_metrics(pr_url):
    # Get review with cost information
    result = subprocess.run([
        'kit', 'review', '--dry-run', '-p', pr_url
    ], capture_output=True, text=True)

    # Parse metrics from the dry-run summary
    lines = result.stderr.split('\n')
    cost = next((l for l in lines if 'Total cost:' in l), '').split('$')[-1]
    model = next((l for l in lines if 'Model:' in l), '').split(':')[-1].strip()

    # Extract issue counts
    issues = result.stdout.count('Priority:')
    high_priority = result.stdout.count('High Priority')

    # Send to dashboard
    metrics = {
        'pr_url': pr_url,
        'timestamp': datetime.now().isoformat(),
        'cost': float(cost) if cost else 0,
        'model': model,
        'total_issues': issues,
        'critical_issues': high_priority
    }

    requests.post('https://dashboard.company.com/api/reviews', json=metrics)
    return metrics

if __name__ == "__main__":
    import sys
    collect_review_metrics(sys.argv[1])
```
Issue Tracker Integration
```bash
#!/bin/bash
REVIEW=$(kit review -p --priority=high "$1")
SECURITY_ISSUES=$(echo "$REVIEW" | grep -i "security\|vulnerability" | wc -l)

if [ "$SECURITY_ISSUES" -gt 0 ]; then
  # Create security ticket
  jira issue create \
    --project="SEC" \
    --type="Security" \
    --summary="Security issues found in $1" \
    --description="$REVIEW" \
    --priority="High"
fi

PERFORMANCE_ISSUES=$(echo "$REVIEW" | grep -i "performance\|slow\|optimization" | wc -l)
if [ "$PERFORMANCE_ISSUES" -gt 0 ]; then
  # Create performance ticket
  jira issue create \
    --project="PERF" \
    --type="Task" \
    --summary="Performance issues found in $1" \
    --description="$REVIEW" \
    --priority="Medium"
fi
```
Best Practices from Examples
Model Selection Strategy
- Documentation/Small Changes: `gpt-4.1-nano` or `gpt-4.1-mini`
- Regular Development: `gpt-4.1` or `gemini-2.5-flash`
- Critical/Security Changes: `claude-sonnet-4` or `claude-opus-4`
- Architectural Changes: `claude-opus-4` for comprehensive analysis
- High-Volume Teams: Mix of models based on PR complexity (see the sketch below)
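This strategy can be wrapped in a small helper script. The sketch below is only illustrative: the change type is a hypothetical argument you pass yourself (kit does not infer it), and the model mapping follows the list above.

```bash
# Illustrative helper: map a self-declared change type to a model.
# CHANGE_TYPE is hypothetical input (docs, security, architecture, ...).
CHANGE_TYPE="$1"
PR_URL="$2"

case "$CHANGE_TYPE" in
  docs)                  MODEL="gpt-4.1-mini" ;;
  security|architecture) MODEL="claude-opus-4" ;;
  *)                     MODEL="gpt-4.1" ;;   # regular development
esac

kit review --model "$MODEL" "$PR_URL"
```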
Priority Filtering Strategy
- Daily Development: `--priority=high,medium` (focus on important issues)
- Pre-Release: `--priority=high` (only critical blockers)
- Code Quality Reviews: `--priority=low` (style and improvements)
- Security Audits: `--priority=high` with security profile
- Architecture Reviews: All priorities with premium model
Profile Usage Patterns
- General Development: `company-standards` profile
- Security-Critical: `security-hardening` profile
- Frontend Work: `frontend-react` or `ui-standards` profile
- Backend APIs: `backend-api` or `microservice-standards` profile
- External Contributors: `community-guidelines` profile