A reusable skill for writing project READMEs
that humans and AI can actually explain.
March 2026 · Problem / Solution / Impact
What gets generated
Someone pastes your repo URL into a chat and asks “explain this project” or “should I use this?” The AI becomes the reader, interpreter, and recommender.
If your README is a feature list, the AI returns a feature list. The person asking gets nothing actionable.
The README doesn’t just fail humans who skim — it fails the AI reading on their behalf.
Most people don’t carefully read READMEs anymore. They paste the link and ask AI to explain it.
User finds repo → pastes URL into chat → asks “should I use this?” The AI reads the README and answers on their behalf. If the README is vague, the AI misrepresents the project. Ambiguity propagates.
This doesn’t mean you write for AI. It means clarity has always mattered — and now it’s testable. Paste your README into an AI and ask it to explain your project. If it gets it wrong, the README isn’t clear enough.
Open with pain the reader recognizes. They should feel the problem before they see your name. A feature list is not an opening.
A before/after output example is the most persuasive element in a README. It proves your claim without you having to make it.
Tell people how long things take. People make adoption decisions based on time investment. “Quick: 3–5 min. Deep: ~15 min.”
Name what you can’t do yet. “Inference quality is the hardest dimension — and it’s actively improving” is more credible than silence.
The README answers “why should I care?” in 60 seconds. The usage guide answers “how do I do this?” Don’t mix them.
Heavy opening — problem story first, then architecture. Show before/after output. “You ask an AI to research something. Then someone asks where the number came from…”
The code IS the pitch. 1–2 sentences, then show a working code example. No one needs your philosophy — they need pip install and a function call.
Paint the experience. “You have a 45-minute commute and 12 tabs you’ll never read.” If the README has personality, don’t flatten it with structure.
Quick Start is always heavy — regardless of project type. First install command, simplest usage, immediate result. No caveats until after the first success.
Replace your project name with [PROJECT] in the opening paragraph. If it still works, it’s too generic. The opening should only work for your project.
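The substitution test is mechanical enough to script. A minimal sketch (the function name and sample opening are mine, not part of the skill):

```python
def substitution_test(opening: str, project_name: str) -> str:
    """Swap the project name for a generic placeholder.

    Read the result back: if the paragraph still works for any
    project, the opening is too generic.
    """
    return opening.replace(project_name, "[PROJECT]")


# A generic opening fails the test: nothing here is specific to the project.
opening = "deckgen is a powerful, flexible tool for modern workflows."
print(substitution_test(opening, "deckgen"))
# → [PROJECT] is a powerful, flexible tool for modern workflows.
```

The judgment call is still yours; the script only removes the temptation to read your own name as specificity.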
Read each sentence. Could a reader respond “so what?” If yes, you described a feature, not an outcome. “Runs in Docker” → so what. “Ships as a single container, runs anywhere” → outcome.
After 30 seconds of reading, can someone explain your project to a colleague? If they’d say “it’s some kind of AI framework” you failed. They should be able to explain the specific problem it solves.
Paste your README into an AI and ask: “explain this project in one paragraph.” If the answer is vague or wrong, your README isn’t clear enough — regardless of how good it seems to you.
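You can run this test from a script instead of a chat window. A sketch, assuming the `openai` Python SDK and an `OPENAI_API_KEY` in the environment; the model name is a placeholder and the prompt wording is illustrative, not part of the skill:

```python
from pathlib import Path


def explain_it_back_prompt(readme_text: str) -> str:
    """Build the explain-it-back prompt. Wording is illustrative."""
    return (
        "Explain this project in one paragraph, as if recommending it "
        "(or not) to a colleague:\n\n" + readme_text
    )


def explain_it_back(readme_path: str, model: str = "gpt-4o-mini") -> str:
    """Send the README to a chat model and return its one-paragraph take.

    Requires the `openai` package; imported inside the function so the
    rest of the sketch works without it installed.
    """
    from openai import OpenAI

    client = OpenAI()
    reply = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": explain_it_back_prompt(Path(readme_path).read_text()),
        }],
    )
    return reply.choices[0].message.content
```

If the paragraph that comes back is vague or wrong, edit and rerun; the test is cheap enough to repeat on every draft.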
Test 4 is the real-world test. It’s exactly what most people actually do. A README that fails test 4 is failing its primary job.
Run the four tests before touching a word. Surgical edits beat rewrites when voice is already working.
Full rewrite: Fails 3–4 quality tests. Is a feature catalog with no problem statement. Generic opening that works for any project. No before/after, no timing. Start fresh.
Surgical edit: Passes most tests. Has voice and personality. Missing one key detail that’s buried in a vision doc. Find the buried gold — add one line. Preserve everything else.
Leave untouched: Passes all four tests. Has personality. Someone will push back on this — your instinct to “improve” it may just flatten the voice. A more precise README isn’t always better.
Consumer products with voice are the most common false positive. Don’t rewrite a README that already makes someone feel something — just because it could be more technically complete.
Different project types, different outcomes.
| Repo | Type | Before | After | Action |
|---|---|---|---|---|
| ridecast | Consumer product | 8 / 10 | 8 / 10 | Left untouched — voice was right |
| amplifier-change-advisor | Developer tool | 7.5 / 10 | 8.5 / 10 | Surgical — blast radius example from vision doc |
| amplifier-doc-driven-dev | Bundle / workflow | 5 / 10 | 9 / 10 | Full rewrite — was a feature catalog |
| deckgen | CLI tool | 5 / 10 | 9 / 10 | Full rewrite — killed emoji feature list |
| nouri | Mobile app | 7 / 10 | 8 / 10 | Surgical — contrarian positioning from vision doc |
2 rewrites. 2 surgical edits. 1 untouched. The skill correctly identified which treatment each repo needed.
Before (feature catalog)
After (problem-first)
Every repo had a vision doc or design brief with a line sharper than anything in the README. The blast radius example (“1-line dependency add → 68 files, 9 tools, 3 hooks”) was in the vision doc, not the README. Always scan docs/ before writing.
Every repo had a multi-line “Built by” section with titles, credentials, and backstory. One line is right. “Chris Park — Microsoft Office of the CTO, AI Incubation. Building the tools he actually uses.” The rest belongs on LinkedIn.
Consumer products with personality (ridecast: “your commute now has a point”) shouldn’t be flattened by a rewrite that’s more “correct.” A README that makes someone feel something has already won. More structure isn’t always more value.
These patterns are now in the skill. Each test run made the skill better — the skill was improved by the same process it teaches.
Published as an Amplifier bundle. Any session can load it.
Install the bundle once. In any session the skill is available automatically, or you can load it explicitly.
First drafts land at 7–8 out of 10 without hand-holding. You tweak to 9 instead of building from scratch.
Data as of: March 15, 2026
Feature status: Active — bundle live at @main, skill iteratively improved
Skill bundle repository: github.com/cpark4x/amplifier-writing-readme
Key commits (all verifiable in git log):
bf22b98 — Initial skill bundle: five principles, six-section structure, quality tests
381b34e — Revise skill: weight-by-type table, dual examples, explain-it-back test
bca8312 — Add evaluate-first guidance and voice preservation (from ridecast test)
7f191e5 — Add buried-gold and compress-built-by patterns (from change-advisor test)
Repos tested: ridecast, amplifier-change-advisor, amplifier-doc-driven-dev, deckgen, nouri — five different project types (consumer product, developer tool, bundle/workflow, CLI tool, mobile app)
Scoring methodology: Before/after quality scores are subjective assessments against the four quality tests (substitution, so-what, 30-second, explain-it-back). Not an automated metric.
Gaps: Scores are human-assessed, not LLM-evaluated. Sample size is 5 repos from a single author. Results may not generalize to all project types or writing styles.
Built by: Chris Park — Microsoft Office of the CTO, AI Incubation · @cpark4x