Countering PR Slop
The Progressive Path to Responsible AI-Assisted Development
A message from a colleague (a reliability engineer at a large enterprise) landed in my inbox this week. Their team’s internal architect had just submitted a PR with a note: “Ready for review.”
Inspecting the PR revealed 6,000 lines of changes. Brand-new documentation that made no sense. Code with no tests. Tests commented out. New code that nothing called.
This pattern has earned its own name: PR Slop.
The phenomenon is widespread enough that the podcast “The AI Daily Brief” devoted an episode to it, “RIP Vibe Coding. Feb 2025-Oct 2025” (link). The pattern is simple: prompt an AI to generate a complete solution, then dump the output onto reviewers. Minimal effort from one developer becomes maximum burden for others.
The easy reaction is to call for a ban on AI tooling in software development. However, as I argued in Developers, AI Changed Everything. No, You’re Not Doomed, the tool itself is not the problem. A responsible approach to AI-assisted development exists. The challenge is execution.
The Expectations Mismatch
PR Slop reveals a fundamental gap between promise and reality. AI marketing suggests these tools can autonomously handle complex features. In practice, a system of distributed components, integrated subsystems, and existing architectural decisions cannot be one-shot into coherent code.
AI is a powerful addition to your development toolkit. It is not a magic solution that removes the need for engineering judgment, iterative refinement, or responsibility for what ships.
The healthy reset starts with acknowledging what AI coding assistants actually enable versus what we might hope they provide.
Working Synchronously vs. Asynchronously
The distinction between synchronous and asynchronous AI collaboration matters significantly, especially when first adopting these tools.
Synchronous work means active involvement in the AI’s progression through the problem. You remain engaged, watching decisions unfold, correcting course when the approach diverges. AI-assisted autocomplete represents the simplest form of this pattern. CLI tools like Claude Code or Gemini CLI can operate synchronously when you follow their execution step by step.
Asynchronous work means initiating the AI, then stepping away until it completes. You return to review a finished PR. This approach can work with mature practices and appropriate constraints. Without them, it produces PR Slop.
A Progressive Adoption Path
For developers new to AI coding assistance, the synchronous approach builds both skill and safeguards. Start here:
1. Planning Mode Only
Begin every feature in planning mode with writes disabled. Iterate on the approach with the AI. Work on the smallest valuable increment. Define what “done” looks like before implementation starts.
You can use AI purely as a planning aid and implement everything yourself. This is perfectly valid and avoids many risks while gaining clarity on requirements.
2. Synchronous Implementation
If you choose to continue with AI during implementation, maintain active involvement. Use AI-assisted autocomplete, or work with an agent but follow its progress. Crucially, stop and correct at the first sign the implementation has diverged from the plan.
Watch for AI reward hacking behaviors. Commenting out failing tests is a classic example. The AI optimizes for appearing to complete the task rather than actually solving the problem. Your synchronous oversight catches this.
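One way to back up that oversight with tooling is a small pre-review check that scans the diff for silenced tests before you accept the AI’s work. The sketch below is a hypothetical helper, not part of any standard tool; the script name and the pytest-style patterns are my assumptions and would need adapting to your test framework.

```python
#!/usr/bin/env python3
"""Hypothetical pre-review check: flag tests that a diff silences instead of fixes."""
import re
import sys

# Added lines that suggest a test was silenced (pytest conventions assumed).
SUSPICIOUS = [
    re.compile(r"^\+\s*@pytest\.mark\.skip"),  # newly added skip marker
    re.compile(r"^\+\s*#\s*def test_"),        # test definition commented out
    re.compile(r"^\+\s*#\s*assert\b"),         # assertion commented out
]
# Removed lines that delete a test function outright.
REMOVED_TEST = re.compile(r"^-\s*def test_")


def scan(diff_lines):
    """Return (diff_line_number, text) pairs that look like test-dodging."""
    findings = []
    for number, line in enumerate(diff_lines, start=1):
        if REMOVED_TEST.match(line) or any(p.match(line) for p in SUSPICIOUS):
            findings.append((number, line.rstrip()))
    return findings


def main():
    findings = scan(sys.stdin.readlines())
    for number, text in findings:
        print(f"diff line {number}: {text}")
    if findings:
        print(f"{len(findings)} suspicious change(s); review before accepting the AI's output.")
        sys.exit(1)
    print("No skipped, commented-out, or deleted tests found in this diff.")


if __name__ == "__main__":
    main()
```

Piped something like `git diff main...HEAD` before opening the PR, it is a crude heuristic rather than a substitute for reading the diff yourself, but it catches the most common reward-hacking move cheaply.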
3. Learning from the Process
Watching AI tools work can teach patterns and approaches you might not have considered. I have observed AI agents diverge from the plan I approved, only to realize the divergence represented a superior solution. This learning compounds when you stay engaged.
4. Advancing to Asynchronous (Optional)
As your practices mature, asynchronous workflows become viable with proper constraints:
Limit scope to planning: Ask the AI to plan three solutions and list trade-offs for each. Review when you return. This minimizes risk while gaining value.
Require tests and limit scope: If implementation is involved, constrain the work to small, well-tested changes; a minimal gate for enforcing this is sketched after this list. The AI does not mind being told “tests are required” as much as human teammates might.
Always use branch protection: Every AI-generated change results in a PR. You review first, not your colleagues. You own the quality and appropriateness of what reaches the team.
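To make “small, tested increments” enforceable rather than aspirational, the branch-protection rule can require a status check along these lines. The sketch below is a minimal, assumed example: the 400-line budget, the test-file naming conventions, and the `.py` source filter are my own placeholders, not an established standard, and would be tuned per team. It reads `git diff --numstat` output and fails when a change is too large or modifies source code without touching any tests.

```python
#!/usr/bin/env python3
"""Hypothetical PR gate: reject changes that are too large or ship code without tests.

Assumed usage: git diff --numstat origin/main...HEAD | python pr_gate.py
"""
import sys

MAX_CHANGED_LINES = 400                         # assumed budget; tune for your team
TEST_MARKERS = ("test_", "_test.", "/tests/")   # assumed test-file naming conventions


def main():
    total_changed = 0
    touched_tests = False
    touched_source = False

    for row in sys.stdin:
        parts = row.rstrip("\n").split("\t")
        if len(parts) != 3:
            continue
        added, deleted, path = parts
        if added == "-" or deleted == "-":      # numstat marks binary files with "-"
            continue
        total_changed += int(added) + int(deleted)
        if any(marker in path for marker in TEST_MARKERS):
            touched_tests = True
        elif path.endswith(".py"):              # assumed source-file filter
            touched_source = True

    problems = []
    if total_changed > MAX_CHANGED_LINES:
        problems.append(f"{total_changed} changed lines exceeds the {MAX_CHANGED_LINES}-line budget")
    if touched_source and not touched_tests:
        problems.append("source files changed but no test files were touched")

    for problem in problems:
        print(f"PR gate: {problem}")
    sys.exit(1 if problems else 0)


if __name__ == "__main__":
    main()
```

Wired in as a required check, a gate like this stops a 6,000-line dump before it ever reaches a human reviewer and nudges the asynchronous workflow back toward increments the team can actually review.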
The Responsibility Constant
The core principle remains consistent across all adoption stages: you are responsible for what ships. The AI is a tool that amplifies your capabilities. It does not replace your judgment, your knowledge of the system, or your accountability to the team. PR Slop emerges when developers attempt to outsource this responsibility. The remedy is maintaining engineering discipline while learning to work effectively with a powerful new capability.
If you receive PR Slop, work with the sender. Planning mode first. Synchronous implementation with active oversight. Small, tested increments instead of 6,000-line dumps. The process is slower than “prompt and pray,” but the PRs are reviewable and the code actually works.
The path forward combines the efficiency AI enables with the responsibility the craft has always demanded.

