Command Line Interface
kit provides a comprehensive command-line interface for repository analysis, symbol extraction, and AI-powered development workflows. All commands support both human-readable and machine-readable output formats for seamless integration with other tools.
Installation & Setup
# Install kit
pip install cased-kit

# Verify installation
kit --version

# Get help for any command
kit --help
kit <command> --help
Core Commands
Repository Analysis
kit symbols
Extract symbols (functions, classes, variables) from a repository with intelligent caching for 25-36x performance improvements.
kit symbols <repository-path> [OPTIONS]
Options:
--format, -f <format>: Output format (table, json, names, plain)
--pattern, -p <pattern>: File pattern filter (e.g., *.py, src/**/*.js)
--output, -o <file>: Save output to file
--type, -t <type>: Filter by symbol type (function, class, variable, etc.)
Examples:
# Extract all symbols (uses incremental analysis for speed)
kit symbols /path/to/repo

# Get only Python functions as JSON
kit symbols /path/to/repo --pattern "*.py" --type function --format json

# Export to file for further analysis
kit symbols /path/to/repo --output symbols.json

# Quick symbol names for scripting
kit symbols /path/to/repo --format names | grep "test_"
kit file-tree
Display repository structure with file type indicators and statistics.
kit file-tree <repository-path> [OPTIONS]
Options:
--path, -p <subpath>: Subdirectory path to show tree for (relative to repo root)
--output, -o <file>: Save output to file
Examples:
# Show full repository structure
kit file-tree /path/to/repo

# Show structure for specific subdirectory
kit file-tree /path/to/repo --path src
kit file-tree /path/to/repo -p src/components

# Export structure as JSON
kit file-tree /path/to/repo --output structure.json

# Analyze specific directory and export
kit file-tree /path/to/repo --path src/api --output api-structure.json
kit search
Fast text search across repository files with regex support.
kit search <repository-path> <query> [OPTIONS]
Options:
--pattern, -p <pattern>: File pattern filter
--output, -o <file>: Save output to file
Examples:
# Search for function definitions
kit search /path/to/repo "def.*login"

# Search in specific files
kit search /path/to/repo "TODO" --pattern "*.py"
kit search-semantic
AI-powered semantic search using vector embeddings to find code by meaning rather than keywords. Requires the sentence-transformers package.
kit search-semantic <repository-path> <query> [OPTIONS]
Options:
--top-k, -k <count>: Maximum number of results to return (default: 5)
--output, -o <file>: Save output to JSON file
--embedding-model, -e <model>: SentenceTransformers model name (default: all-MiniLM-L6-v2)
--chunk-by, -c <strategy>: Chunking strategy: "symbols" or "lines" (default: symbols)
--build-index/--no-build-index: Force rebuild of vector index (default: false)
--persist-dir, -p <dir>: Directory to persist vector index
--format, -f <format>: Output format: "table", "json", or "plain" (default: json)
Index Management:
The vector index is automatically built on first use and persisted for subsequent searches. Use --build-index
to force a rebuild when the codebase changes significantly.
Examples:
# Find authentication-related code (builds index on first run)
kit search-semantic /path/to/repo "authentication logic"

# Force rebuild index after major code changes
kit search-semantic /path/to/repo "new feature" --build-index

# Search for error handling patterns (uses existing index)
kit search-semantic /path/to/repo "error handling patterns" --top-k 10

# Use line-based chunking for better granularity
kit search-semantic /path/to/repo "database connection" --chunk-by lines

# Use a different embedding model
kit search-semantic /path/to/repo "user registration" --embedding-model all-mpnet-base-v2

# Export results to JSON
kit search-semantic /path/to/repo "API endpoints" --output semantic-results.json

# Search without rebuilding index (if already built)
kit search-semantic /path/to/repo "payment processing" --no-build-index

# Get plain output for piping to other tools
kit search-semantic /path/to/repo "error handling" --format plain | head -5

# Get JSON output for programmatic processing
kit search-semantic /path/to/repo "database queries" --format json | jq '.[] | .file'
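The --persist-dir option from the list above is not shown in these examples; a minimal sketch, where the .kit/semantic-index location is an illustrative choice:

# Keep the index at a known path, then reuse it without rebuilding
kit search-semantic /path/to/repo "caching strategy" --persist-dir .kit/semantic-index
kit search-semantic /path/to/repo "cache invalidation" --persist-dir .kit/semantic-index --no-build-index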
Setup Requirements:
# Install sentence-transformers for semantic search
pip install sentence-transformers

# Or install kit with semantic search support
pip install 'cased-kit[ml]'  # minimal extras for semantic search

# Or install kit with all optional extras
pip install 'cased-kit[all]'
Note: First run will download the embedding model and build the vector index, which may take time depending on repository size.
kit grep
Perform fast literal grep search on repository files using system grep. By default, excludes common build directories, cache folders, and hidden directories for optimal performance.
kit grep <repository-path> <pattern> [OPTIONS]
Options:
--case-sensitive/--ignore-case, -c/-i: Case sensitive search (default: case-sensitive)
--include <pattern>: Include files matching glob pattern (e.g., "*.py")
--exclude <pattern>: Exclude files matching glob pattern
--max-results, -n <count>: Maximum number of results to return (default: 1000)
--directory, -d <path>: Limit search to specific directory within repository
--include-hidden: Include hidden directories in search (default: false)
--output, -o <file>: Save output to JSON file
Automatic Exclusions: By default, kit grep excludes common directories for better performance:
- Build/Cache: __pycache__, node_modules, dist, build, target, .cache
- Hidden: .git, .vscode, .idea, .github, .terraform
- Virtual Environments: .venv, venv, .tox
- Language-Specific: vendor (Go/PHP), deps (Elixir), _build (Erlang)
Examples:
# Basic literal search (excludes common directories automatically)
kit grep /path/to/repo "TODO"

# Case insensitive search
kit grep /path/to/repo "function" --ignore-case

# Search only in specific directory
kit grep /path/to/repo "class" --directory "src/api"

# Search only Python files in src directory
kit grep /path/to/repo "class" --include "*.py" --directory "src"

# Include hidden directories (search .github, .vscode, etc.)
kit grep /path/to/repo "workflow" --include-hidden

# Exclude test files but include hidden dirs
kit grep /path/to/repo "import" --exclude "*test*" --include-hidden

# Complex filtering with directory limitation
kit grep /path/to/repo "api_key" --include "*.py" --exclude "*test*" --directory "src" --ignore-case

# Limit results and save to file
kit grep /path/to/repo "error" --max-results 50 --output errors.json
Performance Tips:
- Use --directory to limit search scope in large repositories
- Default exclusions make searches 3-5x faster in typical projects
- Use --include-hidden only when specifically searching configuration files
- For regex patterns, use kit search instead
Note: Requires the system grep command to be available in PATH. For cross-platform regex search, use kit search instead.
kit dependencies
Analyze and visualize code dependencies within a repository. Supports Python and Terraform dependency analysis with features including dependency graph generation, circular dependency detection, module-specific analysis, and visualization generation.
kit dependencies <repository-path> --language <language> [OPTIONS]
Options:
--language, -l <language>: Language to analyze (required: python, terraform)
--output, -o <file>: Output to file instead of stdout
--format, -f <format>: Output format (json, dot, graphml, adjacency)
--visualize, -v: Generate visualization (requires Graphviz)
--viz-format <format>: Visualization format (png, svg, pdf)
--cycles, -c: Show only circular dependencies
--llm-context: Generate LLM-friendly context description
--module, -m <name>: Analyze specific module/resource
--include-indirect, -i: Include indirect dependencies (for module analysis)
Examples:
# Analyze Python dependencies with JSON output
kit dependencies /path/to/repo --language python

# Generate DOT format for Graphviz
kit dependencies /path/to/repo --language python --format dot --output deps.dot

# Create visualization directly
kit dependencies /path/to/repo --language terraform --visualize --viz-format svg

# Check for circular dependencies
kit dependencies /path/to/repo --language python --cycles

# Analyze specific Python module
kit dependencies /path/to/repo --language python --module "myproject.auth" --include-indirect

# Generate LLM context for AI analysis
kit dependencies /path/to/repo --language python --llm-context --output context.md

# Terraform infrastructure analysis
kit dependencies /path/to/repo --language terraform --format graphml --output infrastructure.graphml

# Combined analysis with visualization
kit dependencies /path/to/repo --language python --cycles --visualize --output analysis.json
Sample Output:
Analyzing python dependencies...
Found 127 components in the dependency graph
Summary: 45 internal, 82 external dependencies
✅ No circular dependencies found!
Visualization Requirements:
- Install Graphviz: brew install graphviz (macOS) or apt-get install graphviz (Linux)
- Supports PNG, SVG, and PDF output formats
kit usages
Find all usages of a specific symbol across the repository.
kit usages <repository-path> <symbol-name> [OPTIONS]
Options:
--symbol-type, -t <type>: Filter by symbol type
--output, -o <file>: Save output to file
--format, -f <format>: Output format
Examples:
# Find all usages of a function
kit usages /path/to/repo "calculate_total"

# Find class usages with JSON output
kit usages /path/to/repo "UserModel" --symbol-type class --format json
Cache Management
kit cache
Manage the incremental analysis cache for optimal performance.
kit cache <action> <repository-path> [OPTIONS]
Actions:
status: Show cache statistics and health
clear: Clear all cached data
cleanup: Remove stale entries for deleted files
stats: Show detailed performance statistics
Examples:
# Check cache status
kit cache status /path/to/repo

# Clear cache if needed
kit cache clear /path/to/repo

# Clean up stale entries
kit cache cleanup /path/to/repo

# View detailed statistics
kit cache stats /path/to/repo
Sample Output:
Cache Statistics:
  Cached files: 1,234
  Total symbols: 45,678
  Cache size: 12.3MB
  Cache directory: /path/to/repo/.kit/incremental_cache
  Hit rate: 85.2%
  Files analyzed: 156
  Cache hits: 1,078
  Cache misses: 156
  Average analysis time: 0.023s
AI-Powered Workflows
kit summarize
Generate AI-powered pull request summaries with optional PR body updates.
kit summarize <pr-url> [OPTIONS]
Options:
--plain, -p: Output raw summary content (no formatting)
--dry-run, -n: Generate summary without posting to GitHub
--model, -m <model>: Override LLM model for this summary
--config, -c <file>: Use custom configuration file
--update-pr-body: Add summary to PR description
Examples:
# Generate and post PR summary
kit summarize https://github.com/owner/repo/pull/123

# Dry run (preview without posting)
kit summarize --dry-run https://github.com/owner/repo/pull/123

# Update PR body with summary
kit summarize --update-pr-body https://github.com/owner/repo/pull/123

# Use specific model
kit summarize --model claude-3-5-haiku-20241022 https://github.com/owner/repo/pull/123

# Clean output for piping
kit summarize --plain https://github.com/owner/repo/pull/123
kit commit
Generate intelligent commit messages from staged git changes.
kit commit [repository-path] [OPTIONS]
Options:
--dry-run, -n: Show generated message without committing
--model, -m <model>: Override LLM model
--config, -c <file>: Use custom configuration file
Examples:
# Generate and commit with AI message
git add .
kit commit

# Preview message without committing
git add src/auth.py
kit commit --dry-run

# Use specific model
kit commit --model gpt-4o-mini-2024-07-18

# Specify repository path
kit commit /path/to/repo --dry-run
Sample Output:
Generated commit message:
feat(auth): add JWT token validation middleware

- Implement JWTAuthMiddleware for request authentication
- Add token validation with signature verification
- Include error handling for expired and invalid tokens
- Update middleware registration in app configuration

Commit? [y/N]: y
Content Processing
kit context
Extract contextual code around specific lines for LLM analysis.
kit context <repository-path> <file-path> <line-number> [OPTIONS]
Options:
--lines, -n <count>: Context lines around target (default: 10)
--output, -o <file>: Save output to JSON file
Examples:
# Get context around a specific line
kit context /path/to/repo src/main.py 42

# Export context for analysis
kit context /path/to/repo src/utils.py 15 --output context.json
kit chunk-lines
Split file content into line-based chunks for LLM processing.
kit chunk-lines <repository-path> <file-path> [OPTIONS]
Options:
--max-lines, -n <count>: Maximum lines per chunk (default: 50)
--output, -o <file>: Save output to JSON file
Examples:
# Default chunking (50 lines)
kit chunk-lines /path/to/repo src/large-file.py

# Smaller chunks for detailed analysis
kit chunk-lines /path/to/repo src/main.py --max-lines 20

# Export chunks for LLM processing
kit chunk-lines /path/to/repo src/main.py --output chunks.json
kit chunk-symbols
Split file content by code symbols (functions, classes) for semantic chunking.
kit chunk-symbols <repository-path> <file-path> [OPTIONS]
Options:
--output, -o <file>: Save output to JSON file
Examples:
# Chunk by symbols (functions, classes)
kit chunk-symbols /path/to/repo src/main.py

# Export symbol-based chunks
kit chunk-symbols /path/to/repo src/api.py --output symbol-chunks.json
Export Operations
kit export
Export repository data to structured JSON files for external tools and analysis.
kit export <repository-path> <data-type> <output-file> [OPTIONS]
Data Types:
index: Complete repository index (files + symbols)
symbols: All extracted symbols
file-tree: Repository file structure
symbol-usages: Usages of a specific symbol
Options:
--symbol <name>: Symbol name (required for symbol-usages)
--symbol-type <type>: Symbol type filter (for symbol-usages)
Examples:
# Export complete repository analysis
kit export /path/to/repo index complete-analysis.json

# Export only symbols
kit export /path/to/repo symbols symbols.json

# Export file structure
kit export /path/to/repo file-tree structure.json

# Export symbol usage analysis
kit export /path/to/repo symbol-usages api-usages.json --symbol "ApiClient"

# Export specific symbol type usage
kit export /path/to/repo symbol-usages class-usages.json --symbol "User" --symbol-type class
Server Operations
kit serve
Run the kit REST API server for web integrations and remote access.
kit serve [OPTIONS]
Options:
--host <host>: Server host (default: 0.0.0.0)
--port <port>: Server port (default: 8000)
--reload/--no-reload: Auto-reload on changes (default: True)
Examples:
# Start development server
kit serve

# Production configuration
kit serve --host 127.0.0.1 --port 9000 --no-reload

# Custom port for testing
kit serve --port 3000
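Once the server is up, a quick reachability check from another terminal can be useful. A hedged sketch: the port comes from the defaults above, but the /docs path is only a convention of FastAPI-style servers and is an assumption here, not a documented endpoint:

# Sanity check from another terminal; 8000 is the documented default port.
# -i prints the status line; /docs is a conventional FastAPI path (assumption)
curl -i http://localhost:8000/docs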
PR Review Operations
kit review
AI-powered GitHub pull request reviewer that provides comprehensive code analysis with full repository context. The reviewer clones repositories, analyzes symbol relationships, and generates intelligent reviews using Claude or GPT-4.
kit review <pr-url> [OPTIONS]
Options:
--plain, -p: Output raw review content for piping (no formatting)
--dry-run, -n: Generate review without posting to GitHub (shows formatted preview)
--model, -m <model>: Override LLM model for this review
--config, -c <file>: Use custom configuration file
--init-config: Create default configuration file
--agentic: Use multi-turn agentic analysis (higher cost, deeper analysis)
--agentic-turns <count>: Number of analysis turns for agentic mode
Examples:
# Review and post comment
kit review https://github.com/owner/repo/pull/123

# Dry run (formatted preview without posting)
kit review --dry-run https://github.com/owner/repo/pull/123

# Clean output for piping to other tools
kit review --plain https://github.com/owner/repo/pull/123
kit review -p https://github.com/owner/repo/pull/123

# Override model for specific review
kit review --model gpt-4.1-nano https://github.com/owner/repo/pull/123
kit review -m claude-3-5-haiku-20241022 https://github.com/owner/repo/pull/123

# Pipe to Claude Code for implementation
kit review -p https://github.com/owner/repo/pull/123 | \
  claude "Implement these code review suggestions"

# Use agentic mode for complex PRs
kit review --agentic --agentic-turns 15 https://github.com/owner/repo/pull/123

# Initialize configuration
kit review --init-config
Quick Setup
1. Install and configure:
# Install kit
pip install cased-kit

# Set up configuration
kit review --init-config

# Set API keys
export KIT_GITHUB_TOKEN="ghp_your_token"
export KIT_ANTHROPIC_TOKEN="sk-ant-your_key"
2. Review a PR:
kit review https://github.com/owner/repo/pull/123
Configuration
The reviewer uses ~/.kit/review-config.yaml for configuration:
github:
  token: ghp_your_token_here
  base_url: https://api.github.com

llm:
  provider: anthropic  # or "openai"
  model: claude-sonnet-4-20250514
  api_key: sk-ant-your_key_here
  max_tokens: 4000
  temperature: 0.1

review:
  analysis_depth: standard  # quick, standard, thorough
  post_as_comment: true
  clone_for_analysis: true
  cache_repos: true
  cache_directory: ~/.kit/repo-cache
  cache_ttl_hours: 24
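For one-off runs against a different setup, the documented --config flag points a review at an alternate file; the path below is illustrative:

# Use an alternate configuration file for a single review (path is illustrative)
kit review --config ~/.kit/review-config-work.yaml https://github.com/owner/repo/pull/123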
Model Selection
Frontier Tier ($15-75/MTok)
- claude-opus-4-20250514: Latest flagship, world's best coding model, superior complex reasoning
- claude-sonnet-4-20250514: High-performance with exceptional reasoning and efficiency

Premium Tier ($3-15/MTok)
- claude-3-7-sonnet-20250219: Extended thinking capabilities
- claude-3-5-sonnet-20241022: Proven excellent balance

Balanced Tier ($0.80-4/MTok)
- gpt-4o-mini-2024-07-18: Excellent value model
- claude-3-5-haiku-20241022: Fastest responses
Cache Management
# Check cache status
kit review-cache status

# Clean up old repositories
kit review-cache cleanup

# Clear all cached repositories
kit review-cache clear
Enterprise Usage
Batch Review:
# Review multiple PRs
for pr in 123 124 125; do
  kit review https://github.com/company/repo/pull/$pr
done
CI/CD Integration:
name: AI PR Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - name: AI Review
        run: |
          pip install cased-kit
          kit review --dry-run ${{ github.event.pull_request.html_url }}
        env:
          KIT_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          KIT_ANTHROPIC_TOKEN: ${{ secrets.ANTHROPIC_API_KEY }}
Cost Analysis
Real-world costs by team size:
Small Team (20 PRs/month):
- Standard mode: $0.20-1.00/month
- Mixed usage: $1.00-5.00/month
Enterprise (500 PRs/month):
- Standard mode: $5.00-25.00/month
- Mixed usage: $25.00-150.00/month
Cost per PR by complexity:
- Simple (1-2 files): $0.005-0.025
- Medium (3-5 files): $0.01-0.05
- Complex (6+ files): $0.025-0.10
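As a cross-check, these figures line up with each other: 500 PRs/month at the medium-complexity rate of $0.01-0.05 per PR works out to $5.00-25.00/month, which matches the enterprise standard-mode estimate above.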
Features
Intelligent Analysis:
- Repository cloning with caching for 5-10x faster repeat reviews
- Symbol extraction and cross-codebase impact analysis
- Security, architecture, and performance assessment
- Multi-language support for any language kit supports
Cost Transparency:
- Real-time cost tracking with exact LLM usage
- Token breakdown (input/output) for cost optimization
- Model information and pricing details
Enterprise Features:
- GitHub integration with classic and fine-grained tokens
- Multiple LLM provider support (Anthropic Claude, OpenAI GPT-4)
- Configurable analysis depth and review modes
- Repository caching and batch operations
Example Output
## Kit AI Code Review

### Summary & Implementation
This PR introduces a new authentication middleware that validates JWT tokens...

### Code Quality Assessment
The implementation follows clean code principles with appropriate error handling...

### Cross-Codebase Impact Analysis
- **AuthMiddleware**: Used in 15 other places across the codebase
- **validateToken**: New function will be called by 8 existing routes
- Breaking change risk: Low (additive changes only)

### Security & Reliability
✅ Proper JWT validation with signature verification
⚠️ Consider adding rate limiting for failed authentication attempts

### Specific Issues & Recommendations
1. **Line 42 in auth.py**: Consider using constant-time comparison
2. **Line 67 in middleware.py**: Add input validation for token format

---
*Generated by kit v0.3.3 with claude-sonnet-4 analysis*
Output Formats
Human-Readable Formats
- Table: Structured columns for easy reading
- Plain Text: Simple text output for basic parsing
- Icons: File type indicators (📄 for files, 📁 for directories)
Machine-Readable Formats
- JSON: Structured data perfect for further processing
- Names: Simple lists for Unix pipeline operations
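A quick way to see the two families side by side, using the documented kit symbols format flags:

# Human-readable table for the terminal
kit symbols /path/to/repo --format table

# Machine-readable JSON for tooling (count the extracted symbols)
kit symbols /path/to/repo --format json | jq 'length'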
Piping & Integration
All commands work seamlessly with Unix tools:
# Count Python files
kit file-tree /path/to/repo | grep "\.py" | wc -l

# Find large functions (over 50 lines)
kit symbols /path/to/repo --format json | jq '.[] | select(.end_line - .start_line > 50)'

# Get unique function names
kit symbols /path/to/repo --format names | sort | uniq

# Find files with many symbols
kit symbols /path/to/repo --format json | jq -r '.[] | .file' | sort | uniq -c | sort -nr
Practical Workflows
Development Workflow with Caching
#!/bin/bash
REPO_PATH="/path/to/repo"

# First analysis (builds cache)
echo "Initial analysis..."
time kit symbols $REPO_PATH > /dev/null

# Subsequent analyses (use cache - much faster)
echo "Cached analysis..."
time kit symbols $REPO_PATH > /dev/null

# Check cache performance
kit cache stats $REPO_PATH
Focused Directory Analysis
The subpath and grep capabilities enable focused analysis of specific parts of large repositories:
#!/bin/bash
REPO_PATH="/path/to/repo"
FOCUS_DIR="src/api"

# Analyze specific directory structure
echo "Analyzing directory structure for $FOCUS_DIR"
kit file-tree $REPO_PATH --path $FOCUS_DIR

# Find TODOs only in API code
echo "Finding TODOs in API code"
kit grep $REPO_PATH "TODO" --include "$FOCUS_DIR/*.py"

# Find security-related code
echo "Finding security patterns"
kit grep $REPO_PATH "auth\|token\|password" --include "$FOCUS_DIR/*" --ignore-case

# Export focused analysis
kit file-tree $REPO_PATH --path $FOCUS_DIR --output api-structure.json
kit grep $REPO_PATH "class.*API" --include "$FOCUS_DIR/*.py" --output api-classes.json
Large Repository Optimization
For large repositories, use subpath analysis to work efficiently:
#!/bin/bash
REPO_PATH="/path/to/large-repo"

# Instead of analyzing the entire repository
# kit symbols $REPO_PATH  # This could be slow

# Focus on specific components
for component in "src/auth" "src/api" "src/models"; do
  echo "Analyzing $component..."

  # Get component structure
  kit file-tree $REPO_PATH --path "$component"

  # Find component-specific patterns
  kit grep $REPO_PATH "class\|def\|import" --include "$component/*.py" --max-results 100

  # Extract symbols from component files only
  # (Note: symbols command doesn't support subpath yet, but you can filter)
  kit symbols $REPO_PATH --format json | jq ".[] | select(.file | startswith(\"$component\"))"
done
Semantic Code Discovery
Use semantic search to understand unfamiliar codebases or find implementation patterns:
#!/bin/bash
REPO_PATH="/path/to/unfamiliar/repo"

# First-time setup (builds vector index)
echo "Building semantic index..."
kit search-semantic $REPO_PATH "test setup" --top-k 1 > /dev/null

echo "Exploring codebase with semantic search..."

# Understand architecture patterns
kit search-semantic $REPO_PATH "authentication flow" --top-k 3
kit search-semantic $REPO_PATH "database connection setup" --top-k 3

# Find implementation examples
kit search-semantic $REPO_PATH "error handling patterns" --top-k 5
kit search-semantic $REPO_PATH "retry logic with backoff" --top-k 3

# Discover testing approaches
kit search-semantic $REPO_PATH "unit test examples" --top-k 5
kit search-semantic $REPO_PATH "mocking external services" --top-k 3

# Find configuration patterns
kit search-semantic $REPO_PATH "environment variable configuration" --top-k 3
kit search-semantic $REPO_PATH "logging setup" --top-k 3

# Export findings for team review
kit search-semantic $REPO_PATH "API endpoint implementation" --output api-patterns.json
kit search-semantic $REPO_PATH "data validation logic" --output validation-patterns.json
AI-Powered Development
#!/bin/bash
# Stage changes
git add .

# Generate intelligent commit message
kit commit --dry-run

# If satisfied, commit
kit commit

# Create PR and get AI summary
gh pr create --title "Feature: New auth system" --body "Initial implementation"
PR_URL=$(gh pr view --json url -q .url)
kit summarize --update-pr-body $PR_URL
Code Review Preparation
#!/bin/bash
REPO_PATH="/path/to/repo"
OUTPUT_DIR="./analysis"

mkdir -p $OUTPUT_DIR

# Generate comprehensive analysis (uses caching)
kit export $REPO_PATH index $OUTPUT_DIR/repo-index.json

# Find all TODO items with grep (faster than search for literal strings)
kit grep $REPO_PATH "TODO\|FIXME\|XXX" --ignore-case --output $OUTPUT_DIR/todos.json

# Analyze specific areas that changed
CHANGED_DIRS=$(git diff --name-only HEAD~1 | cut -d'/' -f1 | sort -u)
for dir in $CHANGED_DIRS; do
  echo "Analyzing changed directory: $dir"
  kit file-tree $REPO_PATH --path "$dir" --output "$OUTPUT_DIR/${dir}-structure.json"
  kit grep $REPO_PATH "test\|spec" --include "$dir/*" --output "$OUTPUT_DIR/${dir}-tests.json"
done
Documentation Generation
#!/bin/bash
REPO_PATH="/path/to/repo"
DOCS_DIR="./docs"

mkdir -p $DOCS_DIR

# Extract all public APIs (uses incremental analysis)
kit symbols $REPO_PATH --format json | \
  jq '.[] | select(.type=="function" and (.name | startswith("_") | not))' \
  > $DOCS_DIR/public-api.json

# Generate symbol usage reports
for symbol in $(kit symbols $REPO_PATH --format names | head -10); do
  kit usages $REPO_PATH "$symbol" --output "$DOCS_DIR/usage-$symbol.json"
done
Migration Analysis
#!/bin/bash
OLD_REPO="/path/to/old/repo"
NEW_REPO="/path/to/new/repo"

# Compare symbol counts (both use caching)
echo "Old repo symbols:"
kit symbols $OLD_REPO --format names | wc -l

echo "New repo symbols:"
kit symbols $NEW_REPO --format names | wc -l

# Find deprecated patterns
kit search $OLD_REPO "deprecated\|legacy" --pattern "*.py" > deprecated-code.txt

# Export both for detailed comparison
kit export $OLD_REPO symbols old-symbols.json
kit export $NEW_REPO symbols new-symbols.json
CI/CD Integration
name: Code Analysis
on: [push, pull_request]

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Install kit
        run: pip install cased-kit

      - name: Analyze codebase
        run: |
          # Generate repository analysis (uses incremental caching)
          kit export . index analysis.json

          # Check for complexity issues
          COMPLEX_FUNCTIONS=$(kit symbols . --format json | jq '[.[] | select(.type=="function" and (.end_line - .start_line) > 50)] | length')

          if [ $COMPLEX_FUNCTIONS -gt 10 ]; then
            echo "Warning: $COMPLEX_FUNCTIONS functions are longer than 50 lines"
          fi

          # Find security-related patterns
          kit search . "password\|secret\|key" --pattern "*.py" > security-review.txt

          # Show cache performance
          kit cache stats .

      - name: Upload analysis
        uses: actions/upload-artifact@v3
        with:
          name: code-analysis
          path: |
            analysis.json
            security-review.txt
Best Practices
Performance
- Use incremental analysis (kit symbols) for repeated operations; it is 25-36x faster
- Use --format names for large repositories when you only need symbol names
- Leverage file patterns (--pattern) to limit search scope
- Monitor cache performance with kit cache stats (a combined example follows this list)
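A minimal sketch combining these tips; the path and pattern are illustrative:

# Limit scope to Python sources, emit only names, then inspect cache health
kit symbols /path/to/repo --pattern "src/**/*.py" --format names
kit cache stats /path/to/repo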
Caching
- Use kit cache cleanup periodically to remove stale entries
- Monitor cache size with kit cache status
- Clear the cache (kit cache clear) if switching between very different git states frequently
AI Workflows
- Use --dry-run to preview AI-generated content before posting
- Combine kit commit with kit summarize for a complete AI-powered development workflow
- Use --plain output for piping to other AI tools
Semantic Search
- Use symbols chunking for better semantic boundaries (default)
- Try different embedding models for different use cases: lightweight (all-MiniLM-L6-v2) vs. quality (all-mpnet-base-v2)
- Build the index once, then reuse it with --no-build-index for faster subsequent searches
- Use natural language queries that describe what you're looking for conceptually
- Combine with text search (kit search) for comprehensive code discovery
Scripting
- Always check command exit codes ($?) in scripts (see the sketch after this list)
- Use --output to save data persistently rather than relying on stdout capture
- Combine with jq, grep, sort, and other Unix tools for powerful analysis
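A minimal sketch of the exit-code tip; the symbols.json path is illustrative:

#!/bin/bash
# Save output persistently, then verify the command actually succeeded
kit symbols /path/to/repo --output symbols.json
if [ $? -ne 0 ]; then
  echo "symbol extraction failed" >&2
  exit 1
fi

# Safe to consume the saved file now
jq 'length' symbols.json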
Integration
- Export JSON data for integration with external tools and databases
- Use the CLI in CI/CD pipelines for automated code quality checks
- Combine with language servers and IDEs for enhanced development workflows