A curated list of AI-powered tools, agents, and platforms dedicated to automating code reviews, enforcing guidelines, and improving software quality.
Maintainer Note: This list is curated and maintained by the engineering team at Kodus. We love open source and building better devtools.
- Automated PR Agents
- IDE Assistants & Copilots
- Research & Benchmarks
- Security & Static Analysis AI
- CLI & Local Workflows
- Benchmarks
## Automated PR Agents

Tools that connect directly to GitHub/GitLab to review pull requests, comment on code, and suggest fixes asynchronously.
Note: This list is not intended to compare tools; as maintainers of Kodus, we are biased.
- Kodus (⭐ Maintainer) - An AI code review agent focused on high-signal feedback. It allows teams to define custom review guidelines in plain English to enforce architectural patterns and best practices, reducing noise in the review process.
- CodeRabbit - Provides line-by-line feedback on pull requests and generates summaries of changes. Features a chat interface within the PR to discuss the feedback with the AI.
- Greptile - An AI engine that indexes the entire codebase to understand context. It focuses on answering complex questions about the repo and reviewing code with full-repository awareness.
- Cursor Bugbot - AI-powered PR review that runs automatically to catch real bugs and security issues with a low false-positive rate.
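To make concrete what these agents automate, here is a minimal sketch of leaving an inline review comment through the GitHub REST API (`POST /repos/{owner}/{repo}/pulls/{pull_number}/comments`). The repository, commit SHA, file path, and comment text below are placeholders, not output from any listed tool:

```python
import json
import urllib.request

def build_review_comment(owner, repo, pr_number, commit_sha, path, line, body):
    """Build the URL and payload for an inline PR review comment
    (GitHub REST API v3)."""
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/comments"
    payload = {
        "body": body,          # the review feedback text
        "commit_id": commit_sha,
        "path": path,          # file being commented on
        "line": line,          # line in the diff
        "side": "RIGHT",       # comment on the new version of the file
    }
    return url, payload

def post_comment(token, url, payload):
    # Sends the comment; requires a token with pull-request write access.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)
```

Real agents layer diff retrieval, LLM analysis, and deduplication on top of this; the API call itself is the easy part.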
## IDE Assistants & Copilots

Tools that integrate with editors or local environments for autocomplete, chat, and agentic coding.
- GitHub Copilot - The standard AI pair programmer for autocomplete, chat, and inline edits.
- Cursor - AI-first code editor with built-in chat, autocomplete, and agent workflows.
- Claude Code - Claude's coding agent for terminal, IDE, and web workflows that can manage large codebases and implement changes.
- OpenAI Codex - OpenAI's coding agent that can read, modify, and run code, available as a VS Code extension with optional cloud delegation.
- Google Antigravity - Agent-first IDE with tab autocomplete, natural language commands, and cross-surface agents across editor, terminal, and browser.
- Kilo Code - Open-source agentic engineering platform with IDE/CLI support, tab autocomplete, and multi-agent orchestration.
- Cline - Autonomous IDE agent that can create/edit files, run commands, and use the browser with user approval.
- OpenCode - Open-source coding agent for terminal, IDE, or desktop with multi-session workflows and broad model support.
## Research & Benchmarks

Fundamental reading on how LLMs are transforming software engineering.
- AI-Assisted Assessment of Coding Practices in Modern Code Review: LLM-based system for enforcing coding best practices in code review.
- Towards Practical Defect-Focused Automated Code Review: Automation pipeline for defect detection in C++ codebases using AI.
- Automated Code Review Using Large Language Models at Ericsson: An Experience Report: LLM tool combined with static analysis for code review in industry.
- DeputyDev -- AI Powered Developer Assistant: Breaking the Code Review Logjam through Contextual AI to Boost Developer Productivity: AI assistant for efficient code reviews to boost productivity.
- CRScore++: Reinforcement Learning with Verifiable Tool and AI Feedback for Code Review: RL framework for improving code review comment generation.
- Leveraging Reviewer Experience in Code Review Comment Generation: Integrating reviewer experience into AI models for comment generation.
- A Survey on Machine Learning Techniques for Source Code Analysis: Overview of ML applications in source code tasks, including review.
- BitsAI-CR: Automated Code Review via LLM in Practice: Practical LLM-based automated code review in large-scale environments.
- AutoCodeRover: Autonomous Program Improvement: LLM for program improvement via code review and repair.
- Prompting and Fine-tuning Large Language Models for Automated Code Review Comment Generation: Fine-tuning LLMs with QLoRA to generate accurate code review comments.
- Fine-Tuning Large Language Models to Improve Accuracy and Comprehensibility of Automated Code Review: Enhancing LLM accuracy and comprehensibility in automated code reviews.
- Lessons from Building Static Analysis Tools at Google: Why low false-positive rates are crucial (validating the need for specialized agents).
## Security & Static Analysis AI

Tools focusing specifically on vulnerabilities and SAST (Static Application Security Testing).
- Snyk DeepCode - AI-powered engine to find security flaws faster than traditional static analysis.
- Semgrep AI - Combines rule-based static analysis with AI to reduce false positives in security scanning.
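At their core, rule-based SAST engines match patterns against source code; the AI layers these tools add aim to cut the false positives such matching produces. The sketch below is purely illustrative (the two rules are toy examples, not rules from Snyk or Semgrep):

```python
import re

# Toy rules in the spirit of pattern-based SAST checks. Production tools
# use syntax-aware matching, dataflow analysis, and curated rule packs.
RULES = [
    ("hardcoded-aws-key", re.compile(r"AKIA[0-9A-Z]{16}")),
    ("python-eval", re.compile(r"\beval\s*\(")),
]

def scan(source: str):
    """Return (rule_id, line_number) findings for each rule match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern in RULES:
            if pattern.search(line):
                findings.append((rule_id, lineno))
    return findings
```

Every regex hit here would reach the developer; the AI-assisted tools above exist precisely to triage such raw findings before they become review noise.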
## CLI & Local Workflows

Command-line tools for local reviews and "hacker" workflows.
- Aider - AI pair programming in your terminal.
- Mentat - Coordinate edits across multiple files using command line.
- OpenCommit - Generates semantic git commit messages automatically.
## Benchmarks

- Code Review Benchmark - Comprehensive evaluation of LLM performance in AI-powered code review tasks.
- SWE-bench - Evaluation framework for language models on real-world software engineering issues.
- HumanEval - OpenAI's dataset for evaluating code generation capabilities.
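HumanEval results are usually reported as pass@k: the probability that at least one of k sampled completions passes the unit tests. The standard unbiased estimator, computed from n generations of which c are correct, can be sketched as:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k),
    where n = total samples generated and c = samples that passed."""
    if n - c < k:
        # Fewer incorrect samples than k: every k-subset contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 2 generations of which 1 passes, pass@1 is 0.5; averaging this estimator over all benchmark problems gives the headline score.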
## Contributing

Contributions are welcome! Please read the contribution guidelines first. If you are a founder or maintainer of a tool listed here and want to update your description, feel free to open a PR.
