Getting started with PR Reviews
Kit AI PR Reviewer
Kit includes a production-ready AI PR reviewer that provides professional-grade code analysis with full repository context. Use almost any LLM and pay just for tokens. High-quality reviews with SOTA models like Claude Sonnet 4 generally cost about 10 cents.
Use per-organization profiles and prioritization for further customization, and pipe kit's local output to other Unix tools.
Quick Start

```shell
# 1. Install kit (lightweight - no ML dependencies needed for PR review!)
pip install cased-kit

# 2. Set up configuration
kit review --init-config

# 3. Set API keys
export KIT_GITHUB_TOKEN="ghp_your_token"
export KIT_ANTHROPIC_TOKEN="sk-ant-your_key"
export KIT_OPENAI_TOKEN="sk-openai-your_key"
export KIT_GOOGLE_API_KEY="AIzaSy-your_google_key"

# 4. Review any GitHub PR
kit review https://github.com/owner/repo/pull/123

# 5. Test without posting (dry run with full formatting)
kit review --dry-run https://github.com/owner/repo/pull/123

# 6. Use custom context profiles for organization standards
kit review --profile company-standards https://github.com/owner/repo/pull/123

# 7. Focus on specific priority levels
kit review --priority=high,medium https://github.com/owner/repo/pull/123
```
Transparent Pricing
Some examples based on real-world testing on production open source PRs:
| Model | Typical Cost | Quality | Best For |
|---|---|---|---|
| gemini-1.5-flash-8b | $0.003 | ⭐⭐⭐ | Ultra-budget, high volume |
| gpt-4.1-nano | $0.0015-$0.004 | ⭐⭐⭐ | High-volume, ultra-budget |
| gpt-4.1-mini | $0.005-$0.015 | ⭐⭐⭐⭐ | Budget-friendly, often very good for the price |
| gemini-2.5-flash | $0.007 | ⭐⭐⭐⭐ | Excellent value, fast |
| claude-sonnet-4 | $0.08-$0.14 | ⭐⭐⭐⭐⭐ | Recommended for most |
In Practice
Even without optimizing your model mix, a team doing 500 large PRs a month will generally pay under $50 a month total for reviews with SOTA models.
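That figure is straightforward arithmetic. A back-of-envelope sketch, assuming roughly $0.10 per large PR (the typical claude-sonnet-4 cost from the pricing table):

```shell
# Monthly cost estimate; the per-PR cost is an assumption, not a guarantee
prs_per_month=500
cost_cents_per_pr=10
total_cents=$((prs_per_month * cost_cents_per_pr))
echo "~\$$((total_cents / 100))/month"   # prints "~$50/month"
```

Swapping in a budget model's per-PR cost from the table scales the total down proportionally.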
Key Features
Intelligent Analysis
- Repository Context: Full codebase understanding, not just diff analysis
- Symbol Analysis: Identifies when functions/classes are used elsewhere
- Cross-Impact Assessment: Understands how changes affect the broader system
- Multi-Language Support: Works with any language kit supports
Professional Output
- Priority-Based Issues: High/Medium/Low issue categorization with filtering options
- Specific Recommendations: Concrete code suggestions with examples
- GitHub Integration: Clickable links to all referenced files
- Quality Scoring: Objective metrics for review effectiveness
Cost & Transparency
- Real-Time Cost Tracking: See exact LLM usage and costs
- Token Breakdown: Understand what drives costs
- Model Information: Know which AI provided the analysis
- No Hidden Fees: Pay only for actual LLM usage
Custom Context Profiles
Store and apply organization-specific coding standards and review guidelines through custom context profiles. Create profiles that automatically inject your company's coding standards, security requirements, and style guidelines into every PR review.
```shell
# Create a profile from your existing coding guidelines
kit review-profile create --name company-standards \
  --file coding-guidelines.md \
  --description "Acme Corp coding standards"

# Use in any review
kit review --profile company-standards https://github.com/owner/repo/pull/123

# List all profiles
kit review-profile list
```
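A profile is seeded from a plain guidelines file. A minimal sketch of what that file might contain; the rules below are illustrative placeholders, not kit defaults:

```shell
# Write a small guidelines file to seed a profile with.
# The individual rules are made up for illustration.
cat > coding-guidelines.md <<'EOF'
# Acme Corp review guidelines
- Flag any new public function that lacks a docstring.
- Treat hard-coded credentials or tokens as high-priority issues.
- Prefer explicit error handling over silent fallbacks.
EOF
```

With the file in place, `kit review-profile create --file coding-guidelines.md ...` registers it once, and `--profile company-standards` injects it into every subsequent review.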
→ Complete Profiles Guide - Profile management, team workflows, and examples
Output Modes & Integration
Kit provides different output modes for various workflows - from direct GitHub posting to piping output to CLI code writers:
```shell
# Standard mode - posts directly to GitHub
kit review https://github.com/owner/repo/pull/123

# Plain mode - clean output for piping to other tools
kit review --plain https://github.com/owner/repo/pull/123 | \
  claude "implement these suggestions"

# Priority filtering - focus on what matters
kit review --priority=high,medium https://github.com/owner/repo/pull/123
```
→ Integration Guide - Output modes, piping workflows, and multi-stage AI analysis
CI/CD Integration
Add AI code reviews to your GitHub Actions workflow:
```yaml
name: AI PR Review
on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  ai-review:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      contents: read

    steps:
      - name: AI Code Review
        run: |
          pip install cased-kit
          kit review ${{ github.event.pull_request.html_url }}
        env:
          KIT_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          KIT_ANTHROPIC_TOKEN: ${{ secrets.ANTHROPIC_API_KEY }}
```
→ CI/CD Guide - GitHub Actions, advanced workflows, and cost optimization strategies
Configuration
Quick configuration for common setups:
```shell
# Override model for a specific review
kit review --model gpt-4.1-nano https://github.com/owner/repo/pull/123

# Free local AI with Ollama
kit review --model qwen2.5-coder:latest https://github.com/owner/repo/pull/123
```
→ Configuration Guide - Model selection, API keys, and configuration files
Examples
See real-world reviews with actual costs and analysis:
- FastAPI Packaging Change ($0.034) - Architectural impact analysis
- React.dev UI Feature ($0.012) - Accessibility-focused review
- Documentation Fix ($0.006) - Proportional response
→ More Examples - Real review examples and use cases
What's Next: Roadmap
Recently Shipped ✅
- Custom Context Profiles: Store and apply organization-specific coding standards and guidelines
- Priority Filtering: Focus reviews on what matters most
In Development
- Feedback Learning: Simple database to learn from review feedback and improve over time
- Inline Comments: Post comments directly on specific lines instead of summary comments
- Follow-up Review Awareness: Take previous reviews into account for better, more targeted feedback
Future Features
- Multi-Model Consensus: Compare reviews from multiple models for high-stakes changes
- Smart Review Routing: Automatically select the best model based on change type and team preferences
Best Practices
Cost Optimization
- Use free local AI for unlimited reviews with Ollama (requires self-hosted setup)
- Use budget models for routine changes, premium for breaking changes
- Use the `--model` flag to override models per PR
- Leverage caching - repeat reviews of the same repo are 5-10x faster
- Set up profiles to avoid redundant context
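One way to act on the budget-vs-premium advice without hand-picking a model per PR is a small wrapper around the `--model` flag. This routing logic is a sketch, not a built-in kit feature; the 500-line threshold and the model names are assumptions you would tune:

```shell
# pick_model: choose a model tier from the number of changed lines.
# Threshold and model names are illustrative defaults, not kit behavior.
pick_model() {
  if [ "$1" -gt 500 ]; then
    echo "claude-sonnet-4"   # premium model for large or risky changes
  else
    echo "gpt-4.1-mini"      # budget model for routine changes
  fi
}

# Example usage in CI, where the line count comes from the PR diff:
#   kit review --model "$(pick_model "$LINES_CHANGED")" "$PR_URL"
```

In GitHub Actions, the changed-line count could be computed from the event payload or a `git diff --shortstat` against the base branch.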
Team Adoption
- Start with free local AI to build confidence without costs
- Use budget models initially to control costs
- Create organization-specific guidelines for consistent reviews
- Add to CI/CD for all PRs or just high-impact branches
The kit AI PR reviewer provides professional-grade code analysis at costs accessible to any team size, from $0.00/month with free local AI to enterprise-scale deployment. With full repository context and transparent pricing, it's designed to enhance your development workflow without breaking the budget.