# 🦙 Using Ollama with Kit
Kit has first-class support for free local AI models via Ollama. No API keys, no costs, no data leaving your machine.
## Why Ollama?
- ✅ No cost - unlimited usage
- ✅ Complete privacy - data never leaves your machine
- ✅ No API keys - just install and run
- ✅ No rate limits - only hardware constraints
- ✅ Works offline - perfect for secure environments
- ✅ Latest models - access to cutting-edge open source AI
## 🚀 Quick Setup (2 minutes)
### 1. Install Ollama
```bash
# macOS/Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Windows
# Download from https://ollama.ai/download
```
### 2. Pull a Model
Choose based on your use case:
```bash
# Best for code tasks (recommended)
ollama pull qwen2.5-coder:latest

# Best for reasoning
ollama pull deepseek-r1:latest

# Best for coding agents
ollama pull devstral:latest

# Good general purpose
ollama pull llama3.3:latest
```
### 3. Start Using with Kit
```python
from kit import Repository
from kit.summaries import OllamaConfig

# Configure Ollama
config = OllamaConfig(model="qwen2.5-coder:latest")

# Use with any repository
repo = Repository("/path/to/your/project")
summarizer = repo.get_summarizer(config=config)

# Summarize code at no cost
summary = summarizer.summarize_file("main.py")
print(summary)  # Cost: $0.00
```
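If you want to fail fast when Ollama isn't running or the model hasn't been pulled yet, you can check first. Below is a minimal sketch using Ollama's REST API (`GET /api/tags` lists locally available models); the `ollama_has_model` helper is hypothetical, not part of kit:

```python
# Pre-flight check (hypothetical helper, not part of kit).
# Ollama's REST API exposes GET /api/tags, which lists pulled models.
import json
from urllib.request import urlopen

def ollama_has_model(name: str, base_url: str = "http://localhost:11434") -> bool:
    """Return True if the named model is available on the Ollama server."""
    try:
        with urlopen(f"{base_url}/api/tags") as resp:
            models = json.load(resp).get("models", [])
    except OSError:
        return False  # Ollama isn't reachable at base_url
    return any(m.get("name") == name for m in models)

if not ollama_has_model("qwen2.5-coder:latest"):
    raise SystemExit("Start Ollama and run: ollama pull qwen2.5-coder:latest")
```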
## 💡 Complete Examples
### Code Summarization
```python
from kit import Repository
from kit.summaries import OllamaConfig

# Setup
repo = Repository("/path/to/project")
config = OllamaConfig(
    model="qwen2.5-coder:latest",
    temperature=0.1,  # Lower for more focused analysis
    max_tokens=1000
)
summarizer = repo.get_summarizer(config=config)

# Summarize a file
summary = summarizer.summarize_file("complex_module.py")
print(f"File Summary: {summary}")

# Summarize a specific function
func_summary = summarizer.summarize_function("utils.py", "parse_config")
print(f"Function Summary: {func_summary}")

# Summarize a class
class_summary = summarizer.summarize_class("models.py", "UserManager")
print(f"Class Summary: {class_summary}")
```
### PR Reviews (No Cost)
```bash
# Set up Ollama for PR reviews
kit review --init-config
# Choose "ollama" as provider
# Choose "qwen2.5-coder:latest" as model

# Review any PR at no cost
kit review https://github.com/owner/repo/pull/123
# Cost: $0.00
```
### Batch Documentation Generation
```python
from kit import Repository
from kit.summaries import OllamaConfig
import os

def generate_docs(project_path, output_file):
    """Generate documentation for an entire project."""
    repo = Repository(project_path)
    config = OllamaConfig(model="qwen2.5-coder:latest")
    summarizer = repo.get_summarizer(config=config)

    with open(output_file, 'w') as f:
        f.write(f"# Documentation for {os.path.basename(project_path)}\n\n")

        # Get all Python files
        files = repo.get_file_tree()
        python_files = [f for f in files if f['path'].endswith('.py') and not f.get('is_dir')]

        for file_info in python_files:
            file_path = file_info['path']
            try:
                summary = summarizer.summarize_file(file_path)
                f.write(f"## {file_path}\n\n{summary}\n\n")
                print(f"✅ Documented {file_path} (Cost: $0.00)")
            except Exception as e:
                print(f"⚠️ Skipped {file_path}: {e}")

# Usage
generate_docs("/path/to/project", "project_docs.md")
```
### Legacy Codebase Analysis
```python
from kit import Repository
from kit.summaries import OllamaConfig

def analyze_legacy_code(repo_path):
    """Analyze and understand legacy code using free AI."""
    repo = Repository(repo_path)
    config = OllamaConfig(model="qwen2.5-coder:latest")
    summarizer = repo.get_summarizer(config=config)

    # Find all symbols
    symbols = repo.extract_symbols()

    # Group by type
    functions = [s for s in symbols if s.get('type') == 'FUNCTION']
    classes = [s for s in symbols if s.get('type') == 'CLASS']

    print(f"Found {len(functions)} functions and {len(classes)} classes")

    # Analyze complex functions (those with many lines)
    complex_functions = [f for f in functions if len(f.get('code', '').split('\n')) > 20]

    for func in complex_functions[:5]:  # Analyze top 5 complex functions
        file_path = func['file']
        func_name = func['name']

        analysis = summarizer.summarize_function(file_path, func_name)
        print(f"\n🔍 {func_name} in {file_path}:")
        print(f"   {analysis}")
        print(f"   Cost: $0.00")
```
## ⚙️ Advanced Configuration
### Custom Configuration
```python
config = OllamaConfig(
    model="qwen2.5-coder:32b",          # Use larger model for better results
    base_url="http://localhost:11434",  # Default Ollama endpoint
    temperature=0.0,                    # Deterministic output
    max_tokens=2000,                    # Longer responses
)
```
### Remote Ollama Server
```python
config = OllamaConfig(
    model="qwen2.5-coder:latest",
    base_url="http://your-server:11434",  # Remote Ollama instance
    temperature=0.1,
    max_tokens=1000,
)
```
### Multiple Models for Different Tasks
```python
# Code-focused model for functions
code_config = OllamaConfig(model="qwen2.5-coder:latest", temperature=0.1)

# Reasoning model for complex analysis
reasoning_config = OllamaConfig(model="deepseek-r1:latest", temperature=0.2)

# Use different models for different tasks
code_summarizer = repo.get_summarizer(config=code_config)
reasoning_summarizer = repo.get_summarizer(config=reasoning_config)
```
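You can then route work to whichever summarizer fits the task. A small sketch, reusing the file and function names from the earlier examples:

```python
# Code-level detail from the code-focused model
func_summary = code_summarizer.summarize_function("utils.py", "parse_config")

# Big-picture analysis from the reasoning model
file_overview = reasoning_summarizer.summarize_file("complex_module.py")
```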
## 🔧 Troubleshooting
### Common Issues
“Connection refused” error:
```bash
# Make sure Ollama is running
ollama serve

# Or check if it's already running
ps aux | grep ollama
```
“Model not found” error:
```bash
# Pull the model first
ollama pull qwen2.5-coder:latest

# List available models
ollama list
```
Slow responses:

- Use smaller models like `qwen2.5-coder:0.5b` for faster responses
- Reduce `max_tokens` for shorter outputs
- Ensure sufficient RAM (8GB+ recommended for 7B models)
### Performance Tips
- Choose the right model size:
  - 0.5B-3B: Fast, good for simple tasks
  - 7B-14B: Balanced speed and quality
  - 32B+: Best quality, requires more resources

- Optimize settings:

  ```python
  # For speed
  config = OllamaConfig(
      model="qwen2.5-coder:0.5b",
      temperature=0.1,
      max_tokens=500
  )

  # For quality
  config = OllamaConfig(
      model="qwen2.5-coder:32b",
      temperature=0.0,
      max_tokens=2000
  )
  ```

- Hardware considerations:
  - RAM: 8GB minimum, 16GB+ recommended for larger models
  - GPU: Optional but significantly speeds up inference
  - Storage: Models range from 500MB to 400GB
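If you are unsure which size to pick, it is easy to time the trade-off on your own hardware. A minimal sketch, assuming both models are already pulled and `main.py` exists in the repository:

```python
import time

from kit import Repository
from kit.summaries import OllamaConfig

repo = Repository("/path/to/project")

# Compare a small and a large model on the same file
for model in ("qwen2.5-coder:0.5b", "qwen2.5-coder:32b"):
    summarizer = repo.get_summarizer(config=OllamaConfig(model=model, max_tokens=500))
    start = time.perf_counter()
    summarizer.summarize_file("main.py")
    print(f"{model}: {time.perf_counter() - start:.1f}s")
```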
## 💰 Cost Comparison
| Provider | Cost per Review | Setup Time | Privacy | Offline |
|----------|-----------------|------------|---------|---------|
| Ollama | $0.00 | 2 minutes | ✅ 100% private | ✅ Works offline |
| OpenAI GPT-4o | ~$0.10 | API key setup | ❌ Sent to OpenAI | ❌ Requires internet |
| Anthropic Claude | ~$0.08 | API key setup | ❌ Sent to Anthropic | ❌ Requires internet |
## 🎯 Use Cases
Perfect for Ollama:

- ✅ Continuous integration - no API costs for automated analysis (see the sketch after this list)
- ✅ Enterprise environments - complete data privacy
- ✅ Learning and experimentation - no usage limits
- ✅ Offline development - works without internet
- ✅ Cost-sensitive projects - zero ongoing costs
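For the CI case above, a hypothetical helper script might summarize just the files touched by a change. A minimal sketch; the file list would come from your CI system (e.g. `git diff --name-only`), and the script exits non-zero so a failed step is visible in the pipeline:

```python
#!/usr/bin/env python3
"""Hypothetical CI helper: summarize the files passed as arguments."""
import sys

from kit import Repository
from kit.summaries import OllamaConfig

def main() -> int:
    repo = Repository(".")
    summarizer = repo.get_summarizer(config=OllamaConfig(model="qwen2.5-coder:latest"))
    failed = False
    for path in sys.argv[1:]:
        try:
            print(f"## {path}\n\n{summarizer.summarize_file(path)}\n")
        except Exception as exc:
            print(f"Skipped {path}: {exc}", file=sys.stderr)
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```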
Consider cloud models for:

- ❌ Highest quality requirements - latest commercial models
- ❌ Minimal setup - no local hardware requirements
- ❌ Occasional use - pay only when needed
## 🚀 Next Steps
- Start Simple: Begin with `qwen2.5-coder:latest` for code tasks
- Experiment: Try different models to find what works best for your use case
- Scale Up: Use larger models for higher quality when needed
- Automate: Integrate into your CI/CD pipelines for continuous code analysis
- Contribute: Share your experience and help improve kit's Ollama integration
## 🤝 Community
- Discord: Join the kit Discord for help and discussions
- GitHub: Report issues or contribute
- Ollama Community: Ollama Discord for model-specific help
Ready to get started? Install Ollama, pull a model, and start analyzing code at no cost! 🚀