
Embracing the AI Slop

AI slop is inevitable. If you treat it as signal instead of noise, it becomes a reliable way to surface missing context, weak constraints, and process gaps.


Matan Zutta

CTO & Co-founder @ Yess

December 30, 2025


AI agents are now part of everyday engineering work: features, refactors, tests, migrations, and docs.

In greenfield projects, the gains are obvious. In brownfield codebases, AI often produces "slop": code that looks reasonable and may even pass tests, but quietly violates architectural intent, invariants, or long-term maintainability.

The instinctive response is to restrict AI. This post argues the opposite: slop is inevitable. If you treat it as signal instead of noise, it becomes a reliable way to surface missing context, weak constraints, and process gaps.


Why Brownfield Codebases Amplify AI Slop

Brownfield systems are defined by accumulated, partly invisible context:

  • Architectural decisions made under old constraints
  • Domain rules that were never formally documented
  • Performance and security tradeoffs that only show up at scale
  • Conventions enforced socially rather than technically

Humans navigate this with experience and intuition. AI can't. If a rule isn't written down, encoded, or enforced, it will be violated.

That's the point: AI makes implicit assumptions visible.



Slop Is a Structure Problem

A key idea from No Vibes Allowed is that failures in complex systems are rarely failures of intelligence. They're failures of process and structure.

When AI produces risky output, it usually isn't "carelessness." It's evidence your system leaves too much implicit: invariants in people's heads, boundaries enforced by convention, and reviews that depend on taste instead of checks.

Context engineering helps, but without a proper process in place it tends to drift. The goal is to give the agent continuously improving context: explicit constraints, verifiable steps, and acceptance criteria that produce correct outcomes.


AI Slop Is Inevitable, and That Is Useful

Even with strong context and disciplined processes, slop will still appear. That is not a failure. It is a diagnostic.

Every instance of slop answers one question:

What assumption did we allow to remain implicit?

AI behaves like an aggressive, super-smart junior engineer who never hesitates and never infers rules that aren't written down. That makes it excellent at discovering where your organization relies on shared intuition instead of shared structure.

Turning Slop into a Learning Flywheel

Instead of treating AI mistakes as noise, treat them as input.

[Figure: Learning Flywheel]

This flywheel is built around a simple principle: do not rely on judgment in the moment; design systems where good outcomes are the default. Over time, the flywheel compounds.


Examples

Slop: Bypassing the domain service
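A minimal sketch of what this slop can look like, with purely illustrative names (`DB`, `deactivate_user_handler` are assumptions, not from any real codebase):

```python
# Illustrative slop: a request handler mutating a critical domain table
# directly. In-memory dict stands in for the database.
DB = {"users": {1: {"status": "active"}}}

def deactivate_user_handler(user_id: int) -> None:
    # Direct write: no domain function, no actor context, no audit entry.
    DB["users"][user_id]["status"] = "inactive"
```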

Expected
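A hedged sketch of the expected shape: the mutation goes through an explicit domain function that carries intent and actor context and emits an audit entry (all names here are illustrative):

```python
# Illustrative fix: handlers express intent; the domain service owns the write
# and records who did what, and why.
DB = {"users": {1: {"status": "active"}}}
AUDIT_LOG: list[dict] = []

def deactivate_user(user_id: int, *, actor: str, reason: str) -> None:
    DB["users"][user_id]["status"] = "inactive"
    AUDIT_LOG.append({
        "event": "user.deactivated",
        "user_id": user_id,
        "actor": actor,
        "reason": reason,
    })

def deactivate_user_handler(user_id: int, actor: str) -> None:
    deactivate_user(user_id, actor=actor, reason="requested via support")
```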

Fix Rule — Domain mutations must go through services

Rule: Never write directly to critical domain tables (users, accounts, orders, subscriptions) outside domain or service modules.

Disallow: ORM UPDATE/DELETE on critical tables in handlers, jobs, or scripts.

Require: An explicit domain function call with intent and actor context.

Verify: The mutation produces an audit entry or domain event.


Slop: Breaking the outbox pattern
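A minimal sketch of this failure mode, with in-memory stand-ins for the database and the external billing API (all names are hypothetical):

```python
# Illustrative slop: an external side effect fired inside the same code path
# that mutates local state. A crash between the two lines leaves the database
# and billing disagreeing.
DB = {"orders": {}}
CHARGES: list[dict] = []  # stand-in for an external billing SDK call

def place_order_handler(order_id: str, amount: int) -> None:
    DB["orders"][order_id] = {"amount": amount, "status": "placed"}
    CHARGES.append({"order_id": order_id, "amount": amount})  # network call in handler
```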

Expected
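A hedged sketch of the outbox-shaped fix: the handler enqueues one durable event together with the state change, and a worker performs the external call idempotently (in-memory stand-ins, hypothetical names):

```python
# Illustrative fix: state change and outbox event succeed or fail together;
# the worker is safe to re-run because event ids are processed at most once.
DB = {"orders": {}, "outbox": []}
PROCESSED: set[str] = set()
CHARGES: list[dict] = []  # stand-in for the external billing call

def place_order_handler(order_id: str, amount: int) -> None:
    # In a real system these two writes share one database transaction.
    DB["orders"][order_id] = {"amount": amount, "status": "placed"}
    DB["outbox"].append({"id": f"charge-{order_id}",
                         "order_id": order_id, "amount": amount})

def outbox_worker() -> None:
    # Idempotent: redelivered or re-scanned events are skipped.
    for event in DB["outbox"]:
        if event["id"] in PROCESSED:
            continue
        CHARGES.append({"order_id": event["order_id"], "amount": event["amount"]})
        PROCESSED.add(event["id"])
```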

Fix Rule — External side effects require an outbox

Rule: Any operation that mutates the database and triggers external effects (billing, email, search, analytics) must use the outbox pattern.

Disallow: Network or SDK calls in request handlers or transactions that also mutate DB state.

Require: Enqueue a single durable outbox event within the same transaction.

Verify: Outbox worker processes events idempotently.


Slop: Tenant leakage via unsafe helpers
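A minimal sketch of the unsafe helper, with illustrative data and names:

```python
# Illustrative slop: a repository helper that returns tenant data from a raw
# tenant_id. Nothing proves the caller is allowed to see that tenant.
DB = {"invoices": [
    {"id": 1, "tenant_id": "acme"},
    {"id": 2, "tenant_id": "globex"},
]}

def get_invoices(tenant_id: str) -> list[dict]:
    # Any caller can pass any tenant_id.
    return [row for row in DB["invoices"] if row["tenant_id"] == tenant_id]
```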

Expected
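A hedged sketch of a `*_for_actor` API: tenant scope and role checks are enforced at the query boundary from a verified actor, never from caller-supplied input (names and the `billing:read` role are assumptions):

```python
# Illustrative fix: authorization is encoded in the data-access API itself.
class AuthError(Exception):
    pass

DB = {"invoices": [
    {"id": 1, "tenant_id": "acme"},
    {"id": 2, "tenant_id": "globex"},
]}

def invoices_for_actor(actor: dict) -> list[dict]:
    if "billing:read" not in actor["roles"]:
        raise AuthError("actor lacks billing:read")
    # Tenant comes from the verified actor, not from a parameter.
    return [row for row in DB["invoices"]
            if row["tenant_id"] == actor["tenant_id"]]
```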

Fix Rule — Tenant and auth scope enforced at the data boundary

Rule: All tenant-scoped data access must require an actor or auth context and enforce tenant and role constraints at the query boundary.

Disallow: Repository helpers that return tenant data without auth context.

Require: *_for_actor style APIs that encode authorization.

Verify: Cross-tenant access tests fail as expected.


From Vibes to Systems

For the flywheel to compound, learning can't live in people's heads. AI makes that obvious: if a rule isn't explicit, it will get violated. That's not a model failure, it's a systems failure.

So don't try to "review better." Move intent into durable process: surface issues during PRs and CI, define the right context to fix them (via skills, rules, subagents, etc.), and store that context in a centrally managed place your clients can pull from. Then every slop incident becomes a reliable signal: an implicit assumption you can capture and enforce everywhere.
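One way a rule like "no direct writes to critical tables in handlers" can move from review taste into CI is a small static check. This is a minimal sketch under illustrative assumptions (the `src/handlers` layout and the regex are hypothetical, not from the post):

```python
# Illustrative CI guard: fail the build when handler code appears to mutate a
# critical table directly. Real setups would use a linter or Semgrep rule.
import pathlib
import re
import sys

CRITICAL = re.compile(r"\b(users|accounts|orders|subscriptions)\b\s*\.\s*(update|delete)\(")

def scan(root: str = "src/handlers") -> list[str]:
    violations = []
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if CRITICAL.search(line):
                violations.append(f"{path}:{lineno}: direct mutation of critical table")
    return violations

if __name__ == "__main__":
    found = scan()
    print("\n".join(found))
    sys.exit(1 if found else 0)
```

Run as a CI step, it turns a socially enforced convention into a hard gate, and each new slop incident becomes one more pattern added to the check.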

AI isn't lowering the bar. It's exposing where the bar was never clearly defined.


About the Author

Matan Zutta is the CTO & Co-founder at Yess AI. Connect with him on LinkedIn.
