AI can spit out code and documentation fast. That speed feels like a gift until you’re the person reviewing it. Then it can feel like trying to drink from a fire hose, except half the water is mud.
That’s the problem Linus Torvalds was pointing at when he pushed back on the idea that kernel documentation or AI-specific guidelines will “solve” low-effort, tool-generated submissions. His message is blunt: documentation can’t stop bad actors, and turning kernel docs into an AI manifesto doesn’t improve patch quality.
This post breaks down what he meant, why it matters for the Linux kernel and open source, and what actually works when AI increases the volume of submissions.
An overloaded review queue is where “AI slop” stops being a theory and becomes real work for humans. (Image created with AI.)
What Linus Torvalds meant by “AI slop” and why documentation won’t stop it
“AI slop” isn’t a technical term. It’s a plain description of output that looks confident and polished, but isn’t grounded in real understanding. In practice, it shows up as patches or docs that:
- sound generic
- miss key context
- get details wrong in subtle ways
- add noise without adding value
The Linux kernel community has been debating how to handle submissions created with machine-learning tools and LLM assistants, including whether to require disclosures or add AI-specific guidance. (If you want the broader policy context, LWN has covered it in detail in “Toward a policy for machine-learning tools in kernel development”.)
Torvalds’s core point is simple: rules written for honesty don’t change dishonest behavior. People trying to sneak low-quality work into a project won’t volunteer that it’s low-quality, and they won’t reliably label it as AI-produced either. So a documentation-only approach becomes more like a public statement than a real defense.
He also doesn’t want kernel development docs to take sides in the loud “AI will save everything” versus “AI will ruin everything” fight. His preference is to keep things tool-neutral: judge contributions by quality, not by what typed them.
Good actors read docs, bad actors ignore them
Most documentation is written for contributors who already want to do the right thing. That’s not a criticism, it’s just reality.
If someone is submitting careless patches, whether they were typed by a human at 2 a.m. or produced by an AI assistant in 20 seconds, a paragraph in a guideline file won’t stop them. At best, it adds a checkbox for people who already care.
There’s also a real cost to “more rules” in large projects:
- Reviewers spend time policing paperwork instead of code.
- Contributors spend time formatting compliance instead of improving the patch.
- Maintainers inherit new arguments that don’t fix correctness.
Kernel work lasts for years. Anything that increases review burden without increasing clarity is a bad trade.
Why “just label AI content” sounds nice but fails in practice
Labeling proposals often come from a good place: reviewers are overwhelmed and want signals. The problem is that AI labeling isn’t a reliable signal.
First, it’s hard to prove. Second, it’s easy to lie. Third, even an honest label doesn’t tell you whether the change is correct, tested, or maintainable.
A label can also create false comfort: reviewers may treat “human-written” as safe and “AI-assisted” as risky, when the real question is whether the author understands the change and can support it later.
The Linux kernel already has a strong culture of demanding proof. If you claim something fixes a bug, you need to show why. If you claim it improves performance, you need to show how. That culture doesn’t need an “AI section” to stay healthy.
For more on the kernel community’s ongoing discussion around LLM assistants, LWN’s “On the use of LLM assistants for kernel development” is a useful read.
The real fix: quality gates, strong review, and clear standards (with or without AI)
Torvalds’s framing pushes teams toward something less dramatic, and more effective: treat AI like any other tool, then enforce the same standards every time.
For the kernel (and any serious open source project), the best protection isn’t a label. It’s a pipeline that forces clarity before merge.
A quality-gate mindset turns “more submissions” into “better submissions.” (Image created with AI.)
A strong patch pipeline usually has three layers:
- Automated checks: formatting, static analysis, build tests, unit tests where possible (a sketch follows below).
- Human review: does the change make sense, and will it age well?
- Maintenance reality: can someone other than the author understand it six months later?
That last layer is where AI slop hurts most. It doesn’t just waste time now. It creates long-term cost because maintainers carry the burden.
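The first, automated layer is the easiest place to start. Here’s a minimal contributor-side sketch of what that gate might look like; it assumes you’re at the root of a Linux kernel tree with scripts/checkpatch.pl available and an already-configured build, and the exact commands and flags are illustrative rather than prescriptive.

```python
#!/usr/bin/env python3
"""Sketch of the 'automated checks' layer for a contributor.

Assumptions: a Linux kernel checkout (scripts/checkpatch.pl present), an
already-configured tree (.config exists), and the change under review is
the tip commit (HEAD). Adjust commands to your own workflow.
"""
import subprocess
import sys

CHECKS = [
    # Style and common mistakes: checkpatch.pl ships with the kernel tree.
    ["./scripts/checkpatch.pl", "--strict", "-g", "HEAD"],
    # Does the tree still build? -j8 is an arbitrary choice for this sketch.
    ["make", "-j8"],
]

def main() -> int:
    for cmd in CHECKS:
        print("$ " + " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("Automated gate failed -- fix this before asking a human.")
            return 1
    print("Automated gate passed. The human layers still apply.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

None of this replaces the human layers; it just means reviewers never see the failures a script could have caught.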
What “quality” looks like for kernel docs and patches
When reviewers say “this patch is low quality,” it can feel vague. A simple checklist makes it concrete. If you’re submitting to a strict project, you should be able to answer “yes” to these:
- Clear problem statement: What’s broken or missing?
- Specific behavior change: What happens before, and what happens after?
- Traceable reasoning: Which code path, driver, or subsystem is involved?
- Proof you ran something: A basic test plan, even if it’s small.
- No mystery claims: If you add a technical statement, back it with evidence or references.
- Docs explain “why”: Not just what the code does, but why the design is this way.
That checklist is tool-agnostic on purpose. It catches rushed human work and rushed AI work equally well.
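Parts of that checklist can even be checked mechanically. Below is a rough self-check sketch for commit messages; the section names (like a “Tested:” line), the vague-phrase list, and the example message are all made up for illustration, not a kernel requirement.

```python
#!/usr/bin/env python3
"""Sketch: check a commit message against the checklist above.

The heuristics and keywords here are illustrative only -- adapt them to
whatever your project actually asks for.
"""
import re
import sys

# Phrases that claim a benefit without saying anything specific.
VAGUE = re.compile(r"\b(improves stability|various fixes|misc cleanup)\b", re.I)

def review_message(msg: str) -> list[str]:
    """Return human-readable problems; an empty list means 'looks concrete'."""
    problems = []
    subject, _, body = msg.partition("\n")
    if len(subject.split()) < 4:
        problems.append("subject is a headline, not a reason")
    if "Tested:" not in body and "Test plan:" not in body:
        problems.append("no test plan ('Tested:' or 'Test plan:' line)")
    if VAGUE.search(msg):
        problems.append("vague claim with no supporting detail")
    return problems

# An example message that would pass: it states the problem, the behavior
# change, and what was actually run (driver and function names are made up).
EXAMPLE = """\
foodriver: fix NULL dereference on probe failure

foo_probe() frees the device context but the error path still touches
ctx->regs, crashing on boot when the firmware is missing. Bail out
before the free instead.

Tested: built for x86_64, booted with and without firmware present.
"""

if __name__ == "__main__":
    msg = sys.stdin.read() if not sys.stdin.isatty() else EXAMPLE
    issues = review_message(msg)
    print("\n".join(issues) if issues else "Checklist basics present.")
    sys.exit(1 if issues else 0)
```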
Simple team rules that work better than AI-specific policies
If you maintain a repo and want fewer low-effort submissions, small rules beat big policies.
A few that hold up well:
- Require a test plan in every submission: even “built on x86_64, booted, ran basic smoke test” is better than silence.
- Reject patches you can’t maintain: if the author can’t explain it, it doesn’t belong.
- Enforce style automatically: let tools handle formatting so humans focus on logic (see the hook sketch after this list).
- Demand specific commit messages: “fix bug” isn’t a reason, it’s a headline.
- Ask for sources when adding claims: especially in documentation.
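The “enforce style automatically” rule is the easiest to wire up. Here’s one way it might look as a Git pre-commit hook in a kernel-style tree; relying on scripts/checkpatch.pl is an assumption that fits the kernel, and other projects would swap in their own formatter or linter.

```python
#!/usr/bin/env python3
"""Sketch of a .git/hooks/pre-commit hook: block commits that fail style checks.

Assumes a kernel-style tree with scripts/checkpatch.pl; elsewhere, swap in
clang-format, rustfmt, or whatever the project already uses. Install by
copying to .git/hooks/pre-commit and making it executable.
"""
import subprocess
import sys

# Take the staged diff...
diff = subprocess.run(
    ["git", "diff", "--cached"], capture_output=True, text=True, check=True
).stdout

if not diff.strip():
    sys.exit(0)  # nothing staged, nothing to check

# ...and feed it to checkpatch. "-" tells checkpatch to read from stdin;
# --no-signoff because a local diff has no Signed-off-by line yet.
result = subprocess.run(
    ["./scripts/checkpatch.pl", "--no-signoff", "-"], input=diff, text=True
)

if result.returncode != 0:
    print("Style check failed -- commit blocked before a reviewer ever sees it.")
    sys.exit(1)
```

Because the check runs before the commit even exists, the nits never reach a reviewer or a mailing list.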
This mindset also matches other frustrations Torvalds has raised over the years: low-signal submissions create drag. Even outside the AI debate, maintainers have pushed back on “looks fine” metadata and sloppy commit hygiene. The theme is consistent in community coverage like this Slashdot discussion of Torvalds criticizing “garbage” link tags in commits.
How to use AI responsibly in serious open source projects
AI can be genuinely helpful. It can explain unfamiliar code, suggest edge cases, and help you draft a clear summary. It can also produce confident nonsense, especially when you ask it to reason about complex systems.
Kernel-level work raises the stakes. Small mistakes can cause data loss, security issues, or hard-to-debug crashes. Even when the bug is minor, the maintenance cost isn’t.
A practical way to think about it: AI can speed up typing, but it can’t take responsibility. Maintainers can’t merge responsibility. They can only merge patches.
AI tools are now embedded in everyday development workflows. (Photo by Daniil Komov.)
If you want more background on how this debate keeps resurfacing in kernel circles, Phoronix tracks related items in its AI news archive.
If you used AI, what you should do before you hit send
A good pre-submission routine is boring, and that’s the point. Before you email a patch or open a pull request:
- Read every line: not “skim,” actually read.
- Explain it in your own words: if you can’t, you’re not done.
- Confirm the behavior: run the test, reproduce the bug, validate the fix.
- Check edge cases: error paths, cleanup, concurrency assumptions.
- Make the commit message match reality: no marketing, no guesswork.
Ownership beats disclosure labels. If you stand behind the change, reviewers can work with you. If you don’t, no policy will save it.
Reviewer tips for spotting low-effort AI output without starting a culture war
Reviewers don’t need to play detective. They just need to enforce standards consistently and keep feedback focused on the work.
Patterns that often signal slop (human or AI):
- vague phrases like “improves stability” with no explanation
- missing “why” in both code and docs
- contradictions between commit message and diff
- no test plan, no reproduction steps
- long documentation that says little
Useful reviewer responses stay calm and specific:
- Ask for a concrete test plan.
- Request the exact failing scenario and how it changes after the patch.
- Push for smaller patches if the change is too broad.
- Reject with clear reasons when the author can’t support the change.
This approach also avoids an unproductive fight about intent. You don’t need to accuse someone of using AI. You can just require the same clarity you’ve always required.
What I learned from Linus’s point of view (and how it changed how I think about AI)
Torvalds’s comments snapped something into focus for me: documentation is not enforcement. It’s guidance for people who already care.
Here are the takeaways I’m carrying forward:
- Documentation can’t enforce honesty: if someone wants to hide low-effort work, they will.
- Tool-neutral standards are stronger: the keyboard doesn’t matter, the patch does.
- Review is the real shield: the only dependable filter is careful human judgment.
- AI is helpful only with ownership: if I can’t explain it, I can’t ship it.
- Extra words aren’t extra clarity: long AI-written docs can waste time and still be wrong.
In my own workflow, I’m more strict now about how I use AI. I’ll use it to draft a summary or to list potential edge cases, but I won’t let it be the final author. Before I send anything, I rewrite it in plain language, verify it against the source, and make sure the test steps are real.
That feels slower in the moment, but it’s faster than dragging reviewers through ambiguity.
Quality gets enforced in careful review, not in policy pages. (Image created with AI.)
Conclusion
Torvalds isn’t saying “no AI.” He’s saying “no shortcuts.” The “AI slop” problem won’t be solved by documentation, because the people creating slop won’t be stopped by rules meant for good faith contributors.
The fix is older than AI: clear standards, real tests, and reviewers who feel empowered to reject anything they can’t trust or maintain. If you contribute, own your work. If you review, push for specifics. If you maintain, keep policies simple and keep the bar on quality.