Enterprise AI Analysis
On the Use of Agentic Coding: An Empirical Study of Pull Requests on GitHub
This empirical study investigates 567 GitHub pull requests (PRs) generated by an agentic coding tool (Claude Code). It reveals that 83.8% of these PRs are accepted, with 54.9% merged without further modification. These agent-generated PRs ("Agentic-PRs") excel in tasks like refactoring, documentation, and testing, but often require human revision for bug fixes and adherence to project standards. The findings highlight the practical usefulness of agentic coding while underscoring the ongoing need for human oversight and refinement in software development.
Key Insights at a Glance
Our analysis reveals critical metrics on Agentic Coding adoption and impact.
Deep Analysis & Enterprise Applications
Agentic vs. Human PR Purposes
Agentic-PRs differ from Human-PRs in their focus, excelling in non-functional improvements and often serving multiple purposes.
| Category | Agentic-PRs (%) | Human-PRs (%) |
|---|---|---|
| Bug Fixes | 31.0 | 30.8 |
| Feature Dev | 26.8 | 27.6 |
| Refactoring | 24.9 | 14.9 |
| Documentation | 22.1 | 14.0 |
| Testing | 18.8 | 4.5 |
83.8% of Agentic-PRs are accepted, demonstrating their practical usefulness, though this rate is slightly lower than that of human-written PRs (91.0%).
Why Agentic-PRs Are Rejected
Rejections fall into four broad groups:

- Project context: an alternative solution was chosen, or the PR was too large
- Process-related issues: verification-only PRs, merge conflicts
- Technical shortcomings: non-optimal design, bugs
- Strategic misalignment: not adding value, or not aligning with community interests
Example: Large PRs
A large Agentic-PR was closed with the comment, 'Closing in favor of smaller, more focused PRs to make reviews more manageable.' This highlights the difficulty of integrating oversized contributions into collaborative review processes. (Page 11)
54.9% of Agentic-PRs are merged without revisions, similar to human-PRs (58.5%), indicating a baseline level of trust and adequacy.
Agentic Coding Workflow & Revision Points
While agents automate initial tasks, human oversight refines them, especially for correctness and standards.
Most Frequent Revision Types for Agentic-PRs
Human revisions primarily focus on critical fixes, documentation, refactoring, and style adherence.
| Revision Type | Share of Revised Agentic-PRs (%) |
|---|---|
| Bug Fixes | 47.7 |
| Documentation Updates | 29.0 |
| Refactoring | 27.1 |
| Code Style Improvements | 23.4 |
| Project Housekeeping (Chores) | 21.0 |
| Test-Related Improvements | 16.4 |
Percentages sum to more than 100% because a single revised PR can involve multiple revision types.
Your Agentic Coding Adoption Roadmap
Based on the research, here's a strategic roadmap for integrating agentic coding into your enterprise.
Phase 1: Small, Focused PRs
Start by breaking down complex tasks into smaller, self-contained pull requests. This reduces review burden and improves integration.
Phase 2: Embed Project Standards
Define project-specific style guides, architectural patterns, and contribution rules, and feed them directly into the agent's instructions, for example via a CLAUDE.md file that Claude Code reads at the start of each session.
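A minimal sketch of what such a file might contain; the section names, tools, and rules below are illustrative, not a format prescribed by the study:

```markdown
# CLAUDE.md — project conventions for the coding agent

## Style
- Follow the existing formatting; run the project linter before committing.
- Match the naming patterns already used in the module you are editing.

## Architecture
- New services go under `services/`; do not add business logic to handlers.
- Prefer extending existing interfaces over introducing parallel ones.

## Pull requests
- Keep PRs small and single-purpose; split unrelated changes.
- Every behavior change needs a test and a one-line CHANGELOG entry.
```

Paths such as `services/` are hypothetical placeholders for your own project layout.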
Phase 3: Automate Low-Risk Maintenance
Leverage agents for routine tasks such as rebasing, resolving simple merge conflicts, and handling stale PRs to free up human developers.
Phase 4: Enhance Agent Tool Integration
Integrate linters, static analysis, build-checking, and test-coverage tools directly with your coding agents to preemptively address common issues.
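One way to wire these checks in is a CI workflow that gates every agent-opened PR. The following is a hedged sketch as a GitHub Actions workflow for a Python project; the job name, tool choices (ruff, mypy, pytest-cov), and coverage threshold are our assumptions, not recommendations from the paper:

```yaml
# .github/workflows/agent-pr-checks.yml (hypothetical)
name: agent-pr-checks
on:
  pull_request:

jobs:
  quality-gates:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint
        run: ruff check .                 # linter; swap in your project's tool
      - name: Static analysis
        run: mypy src/                    # type checking as static analysis
      - name: Build check
        run: python -m build              # fails fast on packaging errors
      - name: Tests with coverage
        run: pytest --cov=src --cov-fail-under=80
```

Running the same commands locally in the agent's loop lets it fix lint, type, and test failures before the PR is ever opened, preemptively addressing the most common revision triggers.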
Phase 5: Cultivate Trust & Transparency
Encourage agents to provide 'confidence cards' with their code, detailing plans, assumptions, and known edge cases to foster human trust and efficient reviews.
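The confidence card could take a form like the following PR-description template; the specific fields and the example content are our illustration, not a format defined by the paper:

```markdown
## Confidence Card (agent-generated)

**Plan:** Replace the ad-hoc retry loop with exponential backoff.
**Assumptions:** Failures are transient; five retries is an acceptable ceiling.
**Known edge cases:** Concurrent callers sharing one backoff timer are untested.
**Confidence:** High for the happy path; medium for concurrency behavior.
**Suggested review focus:** the timeout and retry-limit logic.
```

A short, structured summary like this lets reviewers target the riskiest parts of the change instead of re-deriving the agent's reasoning from the diff.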
Ready to Transform Your Development Workflow?
Unlock the full potential of agentic coding in your enterprise. Schedule a personalized consultation to discuss tailored strategies and implementation.