# Changelog

All notable changes to Kit will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [0.7.0]

### 🚀 Major Features
- **Custom Context Profiles**: Store and apply organization-specific coding standards and guidelines
  - Create reusable profiles: `kit review-profile create --name company-standards`
  - Apply to any PR: `kit review --profile company-standards <pr-url>`
  - Export/import for team collaboration
- **Priority Filtering**: Focus reviews on what matters most
  - Filter by priority levels: `kit review --priority=high,medium <pr-url>`
  - Reduce noise and costs by focusing on critical issues
  - Combine with other modes for targeted workflows
## [0.6.4]

### 🚀 Major Features
- **OpenRouter & LiteLLM Provider Support**: Complete integration with OpenAI-compatible providers
  - Access to 100+ models through OpenRouter at competitive prices
  - Support for Together AI, Groq, Perplexity, Replicate, and other popular providers
  - Additional cost tracking with accurate model name handling
  - Thanks to @AlanVerbner for this contribution
- **Google Gemini Support for PR Reviews**: Complete integration of Google's Gemini models
  - Support for latest models: `gemini-2.5-flash`, `gemini-2.5-pro`, `gemini-1.5-flash-8b`
  - Ultra-budget option with Gemini 1.5 Flash 8B at ~$0.003 per large PR
  - Automatic provider detection and accurate token-based cost tracking for Gemini
### 🐛 Bug Fixes
- **Better Fork PR Support**: Fixed issue preventing reviews of PRs from certain outside forks
  - Now uses base repository coordinates instead of head repository for diff fetching
  - Resolves 404 errors when reviewing external contributor PRs
  - Thanks to @redvelvets for this contribution
### ✨ Enhanced Features
- **Expanded Provider Ecosystem**: Three major categories now supported
  - **Cloud Providers**: Anthropic Claude, OpenAI GPT, Google Gemini
  - **Alternative Providers**: OpenRouter, Together AI, Groq, Fireworks
  - **Local Providers**: Ollama with free local models
- **Smart Model Detection**: Intelligent routing based on model names
  - Handles complex model naming like `openrouter/anthropic/claude-3.5-sonnet`
  - Automatically strips provider prefixes for accurate cost calculation
  - Maintains compatibility with all existing configurations
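The prefix stripping described above can be sketched as a small helper. This is an illustrative approximation, not kit's actual implementation; the `KNOWN_PROVIDERS` set and function name are hypothetical:

```python
# Hypothetical provider names; the real routing table lives inside kit.
KNOWN_PROVIDERS = {"openrouter", "together_ai", "groq", "fireworks"}

def normalize_model_name(model: str) -> str:
    """Strip a leading provider prefix (e.g. 'openrouter/') so that
    cost lookup can use the bare model name."""
    prefix, _, rest = model.partition("/")
    if prefix in KNOWN_PROVIDERS and rest:
        return rest
    return model
```

For example, `normalize_model_name("openrouter/anthropic/claude-3.5-sonnet")` drops only the leading `openrouter/` segment, leaving the vendor-qualified model name intact for pricing.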
## [0.6.3]

### 🐛 Bug Fixes
- **Symbol Type Extraction Fix**: Fixed bug where some symbol types were incorrectly processed
  - Classes and other symbol types no longer have characters incorrectly stripped
  - Added comprehensive test coverage for symbol type processing edge cases
## [0.6.2]

### 🚀 Major Features
- **Ollama Support**: Complete local LLM inference support with Ollama
  - Zero-cost PR reviews with local models
  - Support for popular models like DeepSeek R1, Qwen2.5-coder, CodeLlama
  - Automatic provider detection from model names (e.g., `deepseek-r1:latest`)
  - First-class integration with kit's repository intelligence
- **DeepSeek R1 Reasoning Model Support**
  - **Thinking Token Stripping**: Automatically removes `<think>...</think>` tags from reasoning models
  - Clean, professional output without internal reasoning clutter
  - Preserves the analytical capabilities while improving output quality
  - Works in both summarization and PR review workflows
- **Plain Output Mode**: New `--plain`/`-p` flag for pipe-friendly output
  - Removes all formatting and status messages
  - Perfect for piping to Claude Code or other AI tools
  - Enables powerful multi-stage AI workflows (e.g., `kit review -p | claude`)
  - Quiet mode suppresses all progress/status output
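The thinking-token stripping above amounts to removing `<think>...</think>` blocks before output. A minimal sketch of the idea (not kit's actual implementation; the function name is illustrative):

```python
import re

def strip_thinking_tokens(text: str) -> str:
    """Remove <think>...</think> blocks that reasoning models such as
    DeepSeek R1 emit before their final answer."""
    # DOTALL lets the pattern span multi-line reasoning blocks;
    # the non-greedy .*? stops at the first closing tag.
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    # Trim whitespace left behind where the block was removed.
    return cleaned.strip()
```

The non-greedy match matters: a greedy `.*` would swallow everything between the first `<think>` and the last `</think>`, deleting any real answer text between two reasoning blocks.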
### ✨ Enhanced Features
- **CLI Improvements**
  - Added `--version` flag to display current kit version
  - Model override support: `--model`/`-m` flag for per-review model selection
  - Better error messages and help text
- **Documentation**
  - Comprehensive Ollama integration guides
  - Claude Code workflow examples
  - Multi-stage AI analysis patterns
  - Updated CLI reference with new flags
### 🔧 Developer Experience
- **Community**
  - Added Discord community server for support and discussions
  - Improved README with better getting started instructions
- **Testing**
  - Comprehensive test suite for thinking token stripping
  - Ollama integration tests with mock scenarios
  - PR reviewer test coverage for new features
### 💰 Cost Optimization
- **Free Local Analysis**: Use Ollama for zero-cost code analysis
- **Hybrid Workflows**: Combine free local analysis with premium cloud models
- **Provider Switching**: Automatic provider detection and switching
## [0.6.1]

### 🔧 Improvements
- Enhanced line number accuracy in PR reviews
- Improved debug output for troubleshooting
- Better test coverage for core functionality
- Performance optimizations for large repositories
### 🐛 Bug Fixes
- Fixed edge cases in symbol extraction
- Improved error handling for malformed diffs
- Better validation for GitHub URLs
## [0.6.0]

### 🚀 Major Features
- Advanced PR reviews
- Enhanced line number context and accuracy for reviews
- Comprehensive cost tracking and pricing updates for reviews
- Improved repository intelligence with better symbol analysis
### ✨ Enhanced Features
- Better diff parsing and analysis
- Enhanced file prioritization algorithms for reviews
- Improved cost breakdown reporting