Configuration
Configure kit’s AI PR reviewer for your team’s needs with flexible model selection, API key management, and configuration options.
Model Override via CLI
Override the model for any specific review without modifying your configuration:
```bash
kit review --model gpt-4.1-nano https://github.com/owner/repo/pull/123
kit review --model gpt-4.1 https://github.com/owner/repo/pull/123

# Short flag also works
kit review -m claude-sonnet-4-20250514 https://github.com/owner/repo/pull/123
```
Available Models
Free Local AI (Ollama)
Perfect for unlimited reviews without external API costs:
```bash
# Popular coding models
qwen2.5-coder:latest   # Excellent for code analysis
deepseek-r1:latest     # Strong reasoning capabilities
gemma3:latest          # Good general purpose
devstral:latest        # Mistral's coding model
llama3.2:latest        # Meta's latest model
codellama:latest       # Code-specialized Llama
```
Setup:
```bash
# 1. Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# 2. Pull a model
ollama pull qwen2.5-coder:latest

# 3. Use with kit
kit review --model qwen2.5-coder:latest <pr-url>
```
OpenAI Models
```bash
# Budget options
gpt-4.1-nano   # Ultra-budget: ~$0.0015-0.004
gpt-4.1-mini   # Budget-friendly: ~$0.005-0.015
gpt-4o-mini    # Newer mini model

# Standard options
gpt-4.1        # Good balance: ~$0.02-0.10
gpt-4o         # Latest GPT-4 model
gpt-4-turbo    # Fast GPT-4 variant
```
Anthropic Claude
```bash
# Budget option
claude-3-5-haiku-20241022   # Fast and economical

# Recommended
claude-3-5-sonnet-20241022  # Excellent balance
claude-sonnet-4-20250514    # Latest Sonnet (recommended)

# Premium
claude-opus-4-20250514      # Highest quality
```
Google Gemini
```bash
# Ultra-budget
gemini-1.5-flash-8b  # ~$0.003 per review

# Standard options
gemini-2.5-flash     # Excellent value: ~$0.007
gemini-1.5-flash     # Fast and efficient
gemini-1.5-pro       # More capable
gemini-2.5-pro       # Latest pro model
```
API Key Setup
GitHub Token
Get from GitHub Settings → Developer settings → Personal access tokens
Required permissions:

- `repo` (for private repositories)
- `public_repo` (for public repositories)
- `pull_requests:write` (to post comments)
```bash
export KIT_GITHUB_TOKEN="ghp_your_token_here"
```
LLM Provider API Keys
Anthropic Claude (Recommended):
```bash
export KIT_ANTHROPIC_TOKEN="sk-ant-your_key"
```
Get from: Anthropic Console
OpenAI GPT Models:
```bash
export KIT_OPENAI_TOKEN="sk-your_openai_key"
```
Get from: OpenAI Platform
Google Gemini:
```bash
export KIT_GOOGLE_API_KEY="AIzaSy-your_google_key"
```
Get from: Google AI Studio
Ollama (Local - No API Key Required):
```bash
# Just ensure Ollama is running
ollama serve
```
Configuration Files
Basic Configuration
Edit `~/.kit/review-config.yaml`:
```yaml
github:
  token: ghp_your_token_here
  base_url: https://api.github.com

llm:
  provider: anthropic  # or "openai", "google", "ollama"
  model: claude-sonnet-4-20250514
  api_key: sk-ant-your_key_here
  max_tokens: 4000
  temperature: 0.1

review:
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
  max_files: 50
```
Provider-Specific Configurations
Anthropic Claude:
```yaml
github:
  token: ghp_your_token_here
  base_url: https://api.github.com

llm:
  provider: anthropic
  model: claude-sonnet-4-20250514
  api_key: sk-ant-your_key_here
  max_tokens: 4000
  temperature: 0.1

review:
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
  max_files: 50
```
OpenAI GPT:
```yaml
github:
  token: ghp_your_token_here
  base_url: https://api.github.com

llm:
  provider: openai
  model: gpt-4.1
  api_key: sk-your_openai_key
  max_tokens: 4000
  temperature: 0.1

review:
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
  max_files: 50
```
Google Gemini:
```yaml
github:
  token: ghp_your_token_here
  base_url: https://api.github.com

llm:
  provider: google
  model: gemini-2.5-flash  # or gemini-1.5-flash-8b for ultra-budget
  api_key: AIzaSy-your_google_key
  max_tokens: 4000
  temperature: 0.1

review:
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
  max_files: 50
```
Free Local AI (Ollama):
```yaml
github:
  token: ghp_your_token_here  # Still need GitHub API access
  base_url: https://api.github.com

llm:
  provider: ollama
  model: qwen2.5-coder:latest  # or deepseek-r1:latest
  api_base_url: http://localhost:11434
  api_key: ollama  # Placeholder (Ollama doesn't use API keys)
  max_tokens: 4000
  temperature: 0.1

review:
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
  max_files: 50
```
Priority Filtering Configuration
Default Priority Settings
```yaml
github:
  token: ghp_your_token_here
  base_url: https://api.github.com

llm:
  provider: anthropic
  model: claude-sonnet-4-20250514
  api_key: sk-ant-your_key_here
  max_tokens: 4000
  temperature: 0.1

review:
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
  max_files: 50
  # Optional: Set default priority filter
  priority_filter: ["high", "medium"]  # Only show important issues by default
```
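Conceptually, `priority_filter` acts as an allow-list over issue priorities. A minimal sketch of that behavior (illustrative only; the `Issue` type and `apply_priority_filter` helper are assumptions, not kit internals):

```python
from dataclasses import dataclass

@dataclass
class Issue:
    title: str
    priority: str  # "high", "medium", or "low"

def apply_priority_filter(issues, priority_filter):
    """Keep only issues whose priority appears in the configured filter."""
    allowed = set(priority_filter)
    return [issue for issue in issues if issue.priority in allowed]

issues = [
    Issue("SQL injection in query builder", "high"),
    Issue("Unused import", "low"),
    Issue("Missing error handling", "medium"),
]

# With priority_filter: ["high", "medium"], the "low" issue is dropped
kept = apply_priority_filter(issues, ["high", "medium"])
```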
Priority Configuration Examples
Security-focused configuration:
```yaml
review:
  priority_filter: ["high"]  # Critical issues only
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
```
General development workflow:
```yaml
review:
  priority_filter: ["high", "medium"]  # Skip style suggestions
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
```
Code quality/style reviews:
```yaml
review:
  priority_filter: ["low"]  # Focus on improvements
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
```
Default (show all priorities):
```yaml
review:
  priority_filter: ["high", "medium", "low"]  # Same as omitting priority_filter
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
```
Advanced Configuration Options
Repository Analysis Settings
```yaml
review:
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
  max_files: 50

  # Advanced settings
  max_file_size: 1048576   # 1MB max file size
  exclude_patterns:        # Files to ignore
    - "*.lock"
    - "package-lock.json"
    - "yarn.lock"
    - "*.min.js"
    - "dist/"
    - "build/"
  include_patterns:        # Only analyze these files (if specified)
    - "*.py"
    - "*.js"
    - "*.ts"
    - "*.java"
    - "*.go"
  analysis_timeout: 300    # 5 minute timeout
  retry_attempts: 3        # Retry failed requests
```
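The exclude/include settings above behave like glob filters over changed file paths. A rough sketch of the matching logic, assuming fnmatch-style globs and that a directory pattern ending in `/` excludes everything beneath it (the `should_analyze` helper is illustrative, not kit's actual code):

```python
from fnmatch import fnmatch

def should_analyze(path, include_patterns=None, exclude_patterns=None):
    """Decide whether a changed file gets analyzed, mirroring the
    include_patterns/exclude_patterns settings (illustrative sketch)."""
    for pat in exclude_patterns or []:
        # Directory patterns like "dist/" exclude the whole subtree.
        if pat.endswith("/") and path.startswith(pat):
            return False
        if fnmatch(path, pat):
            return False
    # If include_patterns is set, the file must match at least one.
    if include_patterns:
        return any(fnmatch(path, pat) for pat in include_patterns)
    return True

excludes = ["*.lock", "package-lock.json", "*.min.js", "dist/"]
includes = ["*.py", "*.js", "*.ts"]
```

Excludes win over includes here, so `app.min.js` is skipped even though it matches `*.js`; that ordering is an assumption of this sketch.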
Multi-Provider Configuration
```yaml
github:
  token: ghp_your_token_here
  base_url: https://api.github.com

# Default provider
llm:
  provider: anthropic
  model: claude-sonnet-4-20250514
  api_key: sk-ant-your_key_here
  max_tokens: 4000
  temperature: 0.1

# Alternative providers (for CLI override)
providers:
  openai:
    api_key: sk-your_openai_key
    api_base_url: https://api.openai.com/v1
  google:
    api_key: AIzaSy-your_google_key
    api_base_url: https://generativelanguage.googleapis.com/v1beta
  ollama:
    api_base_url: http://localhost:11434
    api_key: ollama

review:
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
  max_files: 50
```
Custom Profile Defaults
```yaml
github:
  token: ghp_your_token_here
  base_url: https://api.github.com

llm:
  provider: anthropic
  model: claude-sonnet-4-20250514
  api_key: sk-ant-your_key_here
  max_tokens: 4000
  temperature: 0.1

review:
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
  max_files: 50

  # Default profile for all reviews
  default_profile: "company-standards"

  # Repository-specific profiles
  profile_overrides:
    "cased/frontend-app": "frontend-react"
    "cased/api-service": "backend-python"
    "cased/security-lib": "security-focused"
```
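Profile resolution is a simple lookup: a repository-specific entry in `profile_overrides` wins, and otherwise `default_profile` applies. A short sketch of that rule (`resolve_profile` is an illustrative name, not a kit API):

```python
def resolve_profile(repo, profile_overrides, default_profile):
    """Pick the review profile for a repository: a repo-specific
    override wins, otherwise fall back to the default profile."""
    return profile_overrides.get(repo, default_profile)

overrides = {
    "cased/frontend-app": "frontend-react",
    "cased/api-service": "backend-python",
}

# A repo with an override gets its profile; any other repo gets the default
frontend = resolve_profile("cased/frontend-app", overrides, "company-standards")
other = resolve_profile("cased/some-other-repo", overrides, "company-standards")
```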
Environment Variable Overrides
You can override any configuration setting using environment variables:
```bash
# GitHub settings
export KIT_GITHUB_TOKEN="ghp_your_token"
export KIT_GITHUB_BASE_URL="https://api.github.com"

# LLM settings
export KIT_LLM_PROVIDER="anthropic"
export KIT_LLM_MODEL="claude-sonnet-4-20250514"
export KIT_ANTHROPIC_TOKEN="sk-ant-your_key"
export KIT_LLM_MAX_TOKENS="4000"
export KIT_LLM_TEMPERATURE="0.1"

# Review settings
export KIT_REVIEW_POST_AS_COMMENT="true"
export KIT_REVIEW_CACHE_REPOS="true"
export KIT_REVIEW_MAX_FILES="50"
export KIT_REVIEW_PRIORITY_FILTER="high,medium"
```
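The precedence implied here is: environment variable first, then the config file, then a built-in default. An illustrative sketch of that lookup order (`effective_setting` is a hypothetical helper, not part of kit):

```python
import os

def effective_setting(env_var, config, key, default=None):
    """Environment variables override the config file, which
    overrides built-in defaults (illustrative precedence sketch)."""
    if env_var in os.environ:
        return os.environ[env_var]
    return config.get(key, default)

# Values as they might appear in ~/.kit/review-config.yaml
config = {"model": "claude-sonnet-4-20250514", "max_files": "50"}

os.environ["KIT_LLM_MODEL"] = "gpt-4.1-mini"

model = effective_setting("KIT_LLM_MODEL", config, "model")            # env wins
max_files = effective_setting("KIT_REVIEW_MAX_FILES", config, "max_files")  # file value
```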
Configuration Validation
Test your configuration:
```bash
# Initialize configuration with guided setup
kit review --init-config

# Validate current configuration
kit review --validate-config

# Test with dry run
kit review --dry-run --model claude-sonnet-4 https://github.com/owner/repo/pull/123
```
Multiple Configuration Profiles
Team-Specific Configs
```bash
# Create team-specific config directories
mkdir -p ~/.kit/profiles/frontend-team
mkdir -p ~/.kit/profiles/backend-team
mkdir -p ~/.kit/profiles/security-team

# Frontend team config
cat > ~/.kit/profiles/frontend-team/review-config.yaml << EOF
llm:
  provider: openai
  model: gpt-4.1-mini
  api_key: sk-frontend-team-key
review:
  default_profile: "frontend-react"
  priority_filter: ["high", "medium"]
EOF

# Use specific config
KIT_CONFIG_DIR=~/.kit/profiles/frontend-team kit review <pr-url>
```
Project-Specific Configs
```bash
# In your project directory
mkdir .kit
cat > .kit/review-config.yaml << EOF
llm:
  provider: ollama
  model: qwen2.5-coder:latest
review:
  default_profile: "project-standards"
  max_files: 30
EOF

# Kit automatically uses project-local config if available
kit review <pr-url>
```
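The lookup order described here, where a project-local `.kit/review-config.yaml` wins over the global one under `~/.kit`, can be sketched as follows (illustrative only; `find_config` is not a kit API):

```python
import tempfile
from pathlib import Path

def find_config(start_dir: Path, home: Path) -> Path:
    """Prefer a project-local .kit/review-config.yaml; otherwise
    fall back to the global config under the home directory."""
    local = start_dir / ".kit" / "review-config.yaml"
    return local if local.exists() else home / ".kit" / "review-config.yaml"

# Demonstrate with a throwaway directory layout
with tempfile.TemporaryDirectory() as tmp:
    project = Path(tmp) / "project"
    (project / ".kit").mkdir(parents=True)
    (project / ".kit" / "review-config.yaml").write_text("llm: {}\n")
    home = Path(tmp) / "home"
    (home / ".kit").mkdir(parents=True)

    chosen = find_config(project, home)      # project-local config wins
    fallback = find_config(Path(tmp), home)  # no local config -> global
```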
Cost Management Configuration
Budget Controls
```yaml
llm:
  provider: anthropic
  model: claude-sonnet-4-20250514
  api_key: sk-ant-your_key_here
  max_tokens: 4000
  temperature: 0.1

  # Cost controls
  cost_limit_per_review: 0.50  # Maximum $0.50 per review
  monthly_cost_limit: 100.00   # Maximum $100 per month

review:
  # Auto-downgrade model if cost limit exceeded
  fallback_model: "gpt-4.1-mini"

  # Skip review if PR is too large
  max_cost_estimate: 1.00      # Skip if estimated cost > $1.00
```
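Taken together, these budget controls suggest a decision rule like the following sketch (illustrative; `choose_model` and the exact order of checks are assumptions, not kit's implementation):

```python
def choose_model(estimated_cost, model, fallback_model,
                 cost_limit_per_review, max_cost_estimate):
    """Apply the budget controls: skip reviews estimated above
    max_cost_estimate, downgrade when above cost_limit_per_review."""
    if estimated_cost > max_cost_estimate:
        return None  # skip the review entirely
    if estimated_cost > cost_limit_per_review:
        return fallback_model  # auto-downgrade
    return model

primary = "claude-sonnet-4-20250514"
fallback = "gpt-4.1-mini"

cheap = choose_model(0.10, primary, fallback, 0.50, 1.00)   # under both limits
over = choose_model(0.75, primary, fallback, 0.50, 1.00)    # downgraded
skipped = choose_model(1.50, primary, fallback, 0.50, 1.00) # skipped
```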
Usage Tracking
```yaml
tracking:
  enabled: true
  log_file: ~/.kit/usage.log
  metrics_endpoint: https://your-metrics-server.com/api/usage
  team_id: "engineering-team"
```
Troubleshooting
Common Issues
1. API Key Issues:
```bash
# Test API key (Anthropic uses the x-api-key header, not Bearer auth)
curl -H "x-api-key: sk-ant-your_key" \
  -H "anthropic-version: 2023-06-01" \
  https://api.anthropic.com/v1/messages

# Check environment
echo $KIT_ANTHROPIC_TOKEN
```
2. Model Availability:
```bash
# List available models
kit review --list-models

# Test specific model
kit review --model claude-sonnet-4 --dry-run <pr-url>
```
3. GitHub Permissions:
```bash
# Test GitHub token
curl -H "Authorization: token ghp_your_token" \
  https://api.github.com/user

# Check permissions
gh auth status
```
Debug Mode
```bash
# Enable debug logging
export KIT_DEBUG=true
kit review --dry-run <pr-url>

# Verbose output
kit review --verbose <pr-url>
```
Configuration Reset
```bash
# Back up current config first
cp ~/.kit/review-config.yaml ~/.kit/review-config.yaml.backup

# Reset to defaults
rm ~/.kit/review-config.yaml
kit review --init-config
```