Developer Experience

READMEs Nobody Reads
(And How to Fix Them)

A reusable skill for writing project READMEs
that humans and AI can actually explain.

March 2026  ·  Problem / Solution / Impact

The Problem

Your AI writes a README.
Nobody can explain the project from it.

What gets generated

# MyProject

A powerful, cutting-edge framework for modern development workflows.

## Features

✨ AI-Driven — leverages advanced AI
🚀 Fast — optimized for performance
🔒 Secure — enterprise-grade security
📦 Modular — pluggable architecture

Replace “MyProject” with any name. Still says nothing. Every project looks identical. AI reads it and gives a vague, generic summary.
🤯

The actual failure

Someone pastes your repo URL into a chat and asks “explain this project” or “should I use this?” The AI becomes the reader, interpreter, and recommender.

If your README is a feature list, the AI returns a feature list. The person asking gets nothing actionable.

The README doesn’t just fail humans who skim — it fails the AI reading on their behalf.

The Insight

READMEs are for humans.
But AI reads them first.

Most people don’t carefully read READMEs anymore. They paste the link and ask AI to explain it.

User finds repo → pastes URL into chat → asks “should I use this?” The AI reads the README and answers on their behalf. If the README is vague, the AI misrepresents the project. Ambiguity propagates.

This doesn’t mean you write for AI. It means clarity has always mattered — and now it’s testable. Paste your README into an AI and ask it to explain your project. If it gets it wrong, the README isn’t clear enough.

The Skill

Five principles. Every README.

1. Problem before solution

Open with pain the reader recognizes. They should feel the problem before they see your name. A feature list is not an opening.

2. Show, don’t describe

A before/after output example is the most persuasive element in a README. It proves your claim without you having to make it.

3. Timing is respect

Tell people how long things take. People make adoption decisions based on time investment. “Quick: 3–5 min. Deep: ~15 min.”

4. Honest about limits

Name what you can’t do yet. “Inference quality is the hardest dimension — and it’s actively improving” is more credible than silence.

5. README pitches, usage guide teaches

The README answers “why should I care?” in 60 seconds. The usage guide answers “how do I do this?” Don’t mix them.
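The five principles above suggest a minimal README shape. The skeleton below is an illustration under these principles, not a template the skill mandates; all section names are placeholders:

```markdown
# projectname

One or two sentences of pain the reader recognizes, before the
project name does any work.

## See it

A before/after output example. Show, don't describe.

## Quick start (3–5 min)

Install command, simplest usage, immediate result. No caveats
until after the first success.

## Limits

What it can't do yet, stated plainly.

## Learn more

Link to the usage guide: the README pitches, the guide teaches.
```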

Structure

Not every project needs the same README.

🛠️

Tools & Products

Heavy opening — problem story first, then architecture. Show before/after output. “You ask an AI to research something. Then someone asks where the number came from…”

📚

Libraries

The code IS the pitch. 1–2 sentences, then show a working code example. No one needs your philosophy — they need pip install and a function call.

🎧

Consumer Products

Paint the experience. “You have a 45-minute commute and 12 tabs you’ll never read.” If the README has personality, don’t flatten it with structure.

Quick Start is always heavy — regardless of project type. First install command, simplest usage, immediate result. No caveats until after the first success.

Quality Gates

Four tests. Run them before you ship.

🔄

1. Substitution test

Replace your project name with [PROJECT] in the opening paragraph. If it still works, it’s too generic. The opening should only work for your project.
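The substitution test is mechanical enough to script. A minimal sketch, assuming you just want to produce the placeholder version for a human (or an AI) to judge; the function name is illustrative:

```python
import re

def substitution_test(opening: str, project_name: str) -> str:
    """Swap the project name for [PROJECT] so a reader can judge
    whether the opening still works for any project."""
    return re.sub(re.escape(project_name), "[PROJECT]",
                  opening, flags=re.IGNORECASE)

generic = "MyProject is a powerful, cutting-edge framework for modern workflows."
print(substitution_test(generic, "MyProject"))
# → [PROJECT] is a powerful, cutting-edge framework for modern workflows.
```

If the printed sentence still reads as plausible, the opening is too generic.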

🤔

2. “So what?” test

Read each sentence. Could a reader respond “so what?” If yes, you described a feature, not an outcome. “Runs in Docker” → so what. “Ships as a single container, runs anywhere” → outcome.

3. 30-second test

After 30 seconds of reading, can someone explain your project to a colleague? If they’d say “it’s some kind of AI framework” you failed. They should be able to explain the specific problem it solves.

🤖

4. Explain-it-back test

Paste your README into an AI and ask: “explain this project in one paragraph.” If the answer is vague or wrong, your README isn’t clear enough — regardless of how good it seems to you.

Test 4 is the real-world test. It’s exactly what most people actually do. A README that fails test 4 is failing its primary job.
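Because test 4 is just "send the README to a model and read the answer," it can be run as a script. A stdlib-only sketch against an OpenAI-style chat endpoint; the URL, model name, and `ask_model` helper are illustrative assumptions, so wire in whatever provider you actually use:

```python
import json
import urllib.request

def build_prompt(readme_text: str) -> list:
    """Build the chat messages for the explain-it-back test."""
    return [{"role": "user",
             "content": "Explain this project in one paragraph:\n\n"
                        + readme_text}]

def ask_model(readme_text: str, api_key: str,
              url: str = "https://api.openai.com/v1/chat/completions",
              model: str = "gpt-4o-mini") -> str:
    """Send the README to a chat-completions endpoint (example endpoint
    and model; substitute your provider's) and return the explanation."""
    payload = {"model": model, "messages": build_prompt(readme_text)}
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Read the returned paragraph as a stranger would: if it is vague or wrong, the README failed.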

Key Principle

Evaluate first. Don’t bulldoze.

Run the four tests before touching a word. Surgical edits beat rewrites when voice is already working.

✏️

Full rewrite needed

Fails 3–4 quality tests. Is a feature catalog with no problem statement. Generic opening that works for any project. No before/after, no timing. Start fresh.

🔧

Surgical edit needed

Passes most tests. Has voice and personality. Missing one key detail that’s buried in a vision doc. Find the buried gold — add one line. Preserve everything else.

Leave it alone

Passes all four tests. Has personality. Someone will push back on this — your instinct to “improve” it may just flatten the voice. A more precise README isn’t always better.

Consumer products with voice are the most common false positive. Don’t rewrite a README that already makes someone feel something — just because it could be more technically complete.

Tested on 5 Repos

The skill, applied.

Different project types, different outcomes.

| Repo | Type | Before | After | Action |
|---|---|---|---|---|
| ridecast | Consumer product | 8 / 10 | 8 / 10 | Left untouched — voice was right |
| amplifier-change-advisor | Developer tool | 7.5 / 10 | 8.5 / 10 | Surgical — blast radius example from vision doc |
| amplifier-doc-driven-dev | Bundle / workflow | 5 / 10 | 9 / 10 | Full rewrite — was a feature catalog |
| deckgen | CLI tool | 5 / 10 | 9 / 10 | Full rewrite — killed emoji feature list |
| nouri | Consumer product | 7 / 10 | 8 / 10 | Surgical — contrarian positioning from vision doc |

2 rewrites. 2 surgical edits. 1 untouched. The skill correctly identified which treatment each repo needed.

The Transformation

amplifier-doc-driven-dev — 5 → 9

Before (feature catalog)

# amplifier-doc-driven-dev

Amplifier recipes and templates for doc-driven development.

## What is Doc-Driven Development?

A documentation-first approach where you define the “why” and “what” before the “how”:

• Vision first — Define problems before solutions
• Templates ensure consistency
• User stories for implemented features only

Replace “amplifier-doc-driven-dev” with any project name. Still works. That’s the problem.

After (problem-first)

# amplifier-doc-driven-dev

You build fast with AI. Three weeks later, you can’t explain why any decision was made. The AI that added the caching layer didn’t write down why it chose Redis.

amplifier-doc-driven-dev fixes this by making documentation the first step, not the last. A conversational recipe interviews you about vision, problems, and audience — then scaffolds a complete docs structure that both humans and AI can reference from day one.

Try replacing the project name here. It breaks. “Why it chose Redis” and “three weeks later” are specific.
What the Testing Taught

Three patterns, discovered across all five repos.

🦐

Look for buried gold

Every repo had a vision doc or design brief with a line sharper than anything in the README. The blast radius example (“1-line dependency add → 68 files, 9 tools, 3 hooks”) was in the vision doc, not the README. Always scan docs/ before writing.

✂️

Compress Built By

Every repo had a multi-line “Built by” section with titles, credentials, and backstory. One line is right. “Chris Park — Microsoft Office of the CTO, AI Incubation. Building the tools he actually uses.” The rest belongs on LinkedIn.

🎤

Preserve voice

Consumer products with personality (ridecast: “your commute now has a point”) shouldn’t be flattened by a rewrite that’s more “correct.” A README that makes someone feel something has already won. More structure isn’t always more value.

These patterns are now in the skill. Each test run made the skill better — the skill was improved by the same process it teaches.

The Deliverable

One skill bundle. Reusable across every project.

Published as an Amplifier bundle. Any session can load it.

How to use it

Install the bundle once. In any session, the skill loads automatically, or you can load it explicitly:

amplifier bundle add git+https://github.com/cpark4x/amplifier-writing-readme@main

# Then in any session:
load_skill(skill_name="writing-readmes")

# Agent now follows: evaluate-first, five principles,
# weight-by-type, four quality tests, buried gold pattern.

First drafts land at 7–8 out of 10 without hand-holding. You tweak to 9 instead of building from scratch.

Sources

Research Methodology

Data as of: March 15, 2026

Feature status: Active — bundle live at @main, skill iteratively improved

Skill bundle repository: github.com/cpark4x/amplifier-writing-readme

Key commits (all verifiable in git log):

Repos tested: ridecast, amplifier-change-advisor, amplifier-doc-driven-dev, deckgen, nouri — five repos spanning four project types (consumer product, developer tool, bundle/workflow, CLI tool)

Scoring methodology: Before/after quality scores are subjective assessments against the four quality tests (substitution, so-what, 30-second, explain-it-back). Not an automated metric.

Gaps: Scores are human-assessed, not LLM-evaluated. Sample size is 5 repos from a single author. Results may not generalize to all project types or writing styles.

Built by: Chris Park — Microsoft Office of the CTO, AI Incubation  ·  @cpark4x
