
Getting started with PR Reviews

Kit AI PR Reviewer

Kit includes a production-ready AI PR reviewer that provides professional-grade code analysis with full repository context. Use almost any LLM and pay only for the tokens you use. High-quality reviews with SOTA models like Claude Sonnet 4 generally cost about 10 cents per review.

Customize reviews further with per-organization profiles and priority filtering, or pipe kit’s local output to other Unix tools.

🚀 Quick Start

Terminal window
# 1. Install kit (lightweight - no ML dependencies needed for PR review!)
pip install cased-kit
# 2. Set up configuration
kit review --init-config
# 3. Set API keys
export KIT_GITHUB_TOKEN="ghp_your_token"
export KIT_ANTHROPIC_TOKEN="sk-ant-your_key"
export KIT_OPENAI_TOKEN="sk-openai-your_key"
export KIT_GOOGLE_API_KEY="AIzaSy-your_google_key"
# 4. Review any GitHub PR
kit review https://github.com/owner/repo/pull/123
# 5. Test without posting (dry run with full formatting)
kit review --dry-run https://github.com/owner/repo/pull/123
# 6. Use custom context profiles for organization standards
kit review --profile company-standards https://github.com/owner/repo/pull/123
# 7. Focus on specific priority levels
kit review --priority=high,medium https://github.com/owner/repo/pull/123

💰 Transparent Pricing

Example costs, based on real-world testing on production open-source PRs:

| Model | Typical Cost | Quality | Best For |
| --- | --- | --- | --- |
| gemini-1.5-flash-8b | $0.003 | ⭐⭐⭐ | Ultra-budget, high volume |
| gpt-4.1-nano | $0.0015-$0.004 | ⭐⭐⭐ | High-volume, ultra-budget |
| gpt-4.1-mini | $0.005-$0.015 | ⭐⭐⭐⭐ | Budget-friendly, often very good for the price |
| gemini-2.5-flash | $0.007 | ⭐⭐⭐⭐ | Excellent value, fast |
| claude-sonnet-4 | $0.08-$0.14 | ⭐⭐⭐⭐⭐ | Recommended for most |

In Practice

Even without optimizing your model mix, a team reviewing 500 large PRs a month will generally pay under $50 a month in total with SOTA models: at roughly $0.10 per review with a model like Claude Sonnet 4, 500 reviews come to about $50.

🎯 Key Features

Intelligent Analysis

  • Repository Context: Full codebase understanding, not just diff analysis
  • Symbol Analysis: Identifies when functions/classes are used elsewhere
  • Cross-Impact Assessment: Understands how changes affect the broader system
  • Multi-Language Support: Works with any language kit supports

Professional Output

  • Priority-Based Issues: High/Medium/Low issue categorization with filtering options
  • Specific Recommendations: Concrete code suggestions with examples
  • GitHub Integration: Clickable links to all referenced files
  • Quality Scoring: Objective metrics for review effectiveness

Cost & Transparency

  • Real-Time Cost Tracking: See exact LLM usage and costs
  • Token Breakdown: Understand what drives costs
  • Model Information: Know which AI provided the analysis
  • No Hidden Fees: Pay only for actual LLM usage

📋 Custom Context Profiles

Store and apply organization-specific coding standards and review guidelines through custom context profiles. Create profiles that automatically inject your company’s coding standards, security requirements, and style guidelines into every PR review.

Terminal window
# Create a profile from your existing coding guidelines
kit review-profile create --name company-standards \
--file coding-guidelines.md \
--description "Acme Corp coding standards"
# Use in any review
kit review --profile company-standards https://github.com/owner/repo/pull/123
# List all profiles
kit review-profile list

→ Complete Profiles Guide - Profile management, team workflows, and examples
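
What goes into a profile is just your own guidelines text, which kit then injects into the review. A minimal, purely illustrative sketch of such a file (the file name and guideline contents are hypothetical):

Terminal window
# A guidelines file is ordinary plain text or markdown; contents are illustrative
cat > coding-guidelines.md <<'EOF'
- All public functions need docstrings and type hints.
- Never log secrets or tokens.
- Prefer parameterized queries over string-built SQL.
EOF

The create command above then registers this file as a reusable profile.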

🔄 Output Modes & Integration

Kit provides several output modes for different workflows, from posting directly to GitHub to piping output into CLI code-writing tools:

Terminal window
# Standard mode - posts directly to GitHub
kit review https://github.com/owner/repo/pull/123
# Plain mode - clean output for piping to other tools
kit review --plain https://github.com/owner/repo/pull/123 | \
claude "implement these suggestions"
# Priority filtering - focus on what matters
kit review --priority=high,medium https://github.com/owner/repo/pull/123

→ Integration Guide - Output modes, piping workflows, and multi-stage AI analysis
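
Because plain mode writes the review to stdout, it also composes with ordinary Unix tools, not just AI agents. A small sketch (the file name and grep pattern are illustrative, and assume the review labels findings by priority as described above):

Terminal window
# Keep a local copy of the review and skim only the high-priority findings
kit review --plain https://github.com/owner/repo/pull/123 | tee review.md
grep -i "high" review.md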

🚀 CI/CD Integration

Add AI code reviews to your GitHub Actions workflow:

name: AI PR Review
on:
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  ai-review:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      contents: read
    steps:
      - name: AI Code Review
        run: |
          pip install cased-kit
          kit review ${{ github.event.pull_request.html_url }}
        env:
          KIT_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          KIT_ANTHROPIC_TOKEN: ${{ secrets.ANTHROPIC_API_KEY }}

→ CI/CD Guide - GitHub Actions, advanced workflows, and cost optimization strategies
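
The cost controls from earlier sections apply unchanged inside a workflow’s run step. A hedged sketch of a budget-conscious variant (the model and priority choices are examples taken from the pricing table and the flags documented above):

Terminal window
# Inside the workflow's run step: budget model plus priority filtering
pip install cased-kit
kit review --model gpt-4.1-mini --priority=high,medium ${{ github.event.pull_request.html_url }}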

🔧 Configuration

Quick configuration for common setups:

Terminal window
# Override model for specific review
kit review --model gpt-4.1-nano https://github.com/owner/repo/pull/123
# Free local AI with Ollama
kit review --model qwen2.5-coder:latest https://github.com/owner/repo/pull/123
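
These flags combine, so you can preview a cheaper setup before posting anything. A small sketch using only options shown on this page:

Terminal window
# Dry run with a budget model, focused on high-priority issues only
kit review --dry-run --model gpt-4.1-mini --priority=high https://github.com/owner/repo/pull/123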

→ Configuration Guide - Model selection, API keys, and configuration files

📊 Examples

See real-world reviews with actual costs and analysis:

→ More Examples - Real review examples and use cases

📈 What’s Next: Roadmap

Recently Shipped ✅

  • Custom Context Profiles: Store and apply organization-specific coding standards and guidelines
  • Priority Filtering: Focus reviews on what matters most

In Development

  • Feedback Learning: Simple database to learn from review feedback and improve over time
  • Inline Comments: Post comments directly on specific lines instead of summary comments
  • Follow-up Review Awareness: Take previous reviews into account for better, more targeted feedback

Future Features

  • Multi-Model Consensus: Compare reviews from multiple models for high-stakes changes
  • Smart Review Routing: Automatically select the best model based on change type and team preferences

💡 Best Practices

Cost Optimization

  • Use free local AI for unlimited reviews with Ollama (requires self-hosted setup)
  • Use budget models for routine changes, premium for breaking changes (see the sketch after this list)
  • Use the --model flag to override models per PR
  • Leverage caching: repeat reviews of the same repo are 5-10x faster
  • Set up profiles to avoid redundant context
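
As an illustration of the budget-versus-premium split above (the PR URLs are placeholders, and the model identifiers follow the names in the pricing table; your configuration may use provider-specific IDs):

Terminal window
# Routine change: budget model, surface only high-priority issues
kit review --model gpt-4.1-nano --priority=high https://github.com/owner/repo/pull/123
# Breaking change: premium model, full review
kit review --model claude-sonnet-4 https://github.com/owner/repo/pull/456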

Team Adoption

  • Start with free local AI to build confidence without costs
  • Use budget models initially to control costs
  • Create organization-specific guidelines for consistent reviews
  • Add to CI/CD for all PRs or just high-impact branches

The kit AI PR reviewer provides professional-grade code analysis at a cost accessible to teams of any size, from $0.00/month with free local AI up to enterprise-scale deployments. With full repository context and transparent pricing, it’s designed to enhance your development workflow without breaking the budget.