Step 1: Read the report template
You have a reproducible notebook with every number verified: the overall no-show rate; breakdowns by day, time, and visit type; the chi-square test; and the temporal trend. The analysis is done. Now you need to turn it into something Wanjiku can present at her staff meeting.
Open materials/report-template.md. Read through the whole thing before you start filling it in. The template has five sections: Executive Summary, Key Findings, Detailed Breakdowns, Temporal Trend, and Recommendations. Each section has a specific job. The executive summary gives Wanjiku the answer in thirty seconds. The key findings section lists what matters most. The detailed breakdowns show the evidence. The trend section answers "is it getting worse?" And the recommendations suggest what to do next.
This structure is not arbitrary. Wanjiku will probably read the executive summary, maybe skim the key findings, and stop. Her staff will read further if they are curious. If the most important information is buried in the detailed breakdowns, the people who need it most will never see it.
Step 2: Direct AI to draft the report
Open materials/analysis-specification.md alongside the template. The specification tells you what the analysis was supposed to produce. The template tells you where each finding goes. Between the two, you have a complete map of what the report should contain.
Direct Claude to draft the findings report using the template structure. Be specific in your prompt. Tell it which findings go in which section. Tell it to include confidence intervals — not just point estimates. Tell it to address all four of Wanjiku's original questions: what is the no-show rate, what are the patterns, which visit types are worst, and is it getting worse.
If your prompt says "write a report from the analysis," Claude will produce something that looks professional but may leave out confidence intervals, bury important findings in the wrong section, or quietly drop one of Wanjiku's questions. The more specific your prompt, the fewer gaps you have to fix afterward. What you type is everything Claude knows about what you want — if you do not mention confidence intervals, Claude has no reason to include them.
Step 3: Verify every number
Read the draft. Then open the notebook and check every number in the report against the notebook output.
Does the report state the correct overall no-show rate? Does the confidence interval match? Are the visit type breakdowns accurate — the right percentages for each type, the right ordering from highest to lowest? Does the chi-square p-value match? Does the trend description match what the line chart actually shows?
A report that says "the no-show rate is 13.2%" when the notebook says 13.4% is not a rounding issue — it is a fabrication. Claude drafted the report from its understanding of the analysis, not by copying numbers from the notebook output. Every number needs checking. This is not a formality. It is the difference between a report that accurately represents the analysis and one that looks like it does.
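One way to make this check systematic is to transcribe both sets of numbers and diff them in a few lines of Python. Everything below is a hypothetical sketch: the keys and values are placeholders, not figures from the actual notebook.

```python
# Sketch of a systematic number check. The keys and values below are
# placeholders; copy the real figures from the notebook and the draft.
notebook = {
    "overall_no_show_rate": 0.134,
    "telehealth_no_show_rate": 0.181,
    "chi_square_p_value": 0.003,
}
report = {
    "overall_no_show_rate": 0.132,   # transcribed from the draft report
    "telehealth_no_show_rate": 0.181,
    "chi_square_p_value": 0.003,
}

for key, expected in notebook.items():
    stated = report.get(key)
    if stated is None:
        print(f"MISSING  {key}: the draft never states it")
    elif abs(stated - expected) > 1e-9:
        print(f"MISMATCH {key}: draft says {stated}, notebook says {expected}")
    else:
        print(f"OK       {key}")
```

Even this low-tech version catches a 13.2% versus 13.4% discrepancy, and the `MISSING` branch forces you to notice numbers the draft silently omits.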
Step 4: Put uncertainty in the executive summary
Look at the executive summary. Does it include the confidence interval for the overall no-show rate, or does it just state a single number?
If Claude wrote something like "the overall no-show rate is 13.2%," that is incomplete. Wanjiku needs to know the range: is it 13.2% give or take half a percent, or 13.2% give or take five percent? Those are different situations. The confidence interval belongs in the executive summary — the first section, the one Wanjiku will definitely read — not buried in the detailed breakdowns where she might never see it.
If the confidence interval is missing from the summary, add it. If it is only in the details section, move it up. Honest evidence means communicating uncertainty where it will actually be seen, not where it is technically present.
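If you need to sanity-check the interval yourself, the normal approximation for a proportion is a reasonable sketch. The sample sizes below are invented for illustration; use the actual appointment count from the notebook.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95% confidence interval for a proportion (normal approximation).

    p_hat and n are placeholders here; plug in the notebook's values.
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# The same 13.2% rate reads very differently at different sample sizes.
lo, hi = proportion_ci(0.132, 5000)
print(f"n=5000: 13.2% (95% CI {lo:.1%} to {hi:.1%})")

lo, hi = proportion_ci(0.132, 80)
print(f"n=80:   13.2% (95% CI {lo:.1%} to {hi:.1%})")
```

With five thousand appointments the interval is about two percentage points wide; with eighty it spans roughly fifteen. That is exactly the "half a percent or five percent" distinction the executive summary needs to convey.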
Step 5: AI self-review
Direct Claude to review its own report. But be specific about what you want it to check. Ask it to compare every number in the report against the notebook output and list any discrepancies.
This is different from asking "does this look right?" A vague review prompt gets a vague answer — Claude will skim the report and say it looks fine. A specific review prompt gets a specific answer — Claude will go through each number, check it against the source, and flag anything that does not match.
Try both if you want to see the difference. Ask Claude "does this report look good?" and see what it says. Then ask "check every number in this report against the notebook output and list any discrepancies." The second prompt gives Claude a concrete task with a clear success criterion. The first gives it permission to be agreeable.
If Claude finds discrepancies, fix them. If it finds none, you still have your own verification from Step 3 as a cross-check. Two independent checks — yours and Claude's — are better than one.
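A third, fully independent check is to recompute a headline statistic from the raw counts rather than comparing transcribed numbers. The contingency table below is invented for illustration; the p-value still has to come from the notebook (or a stats library), but the chi-square statistic itself can be rebuilt with the standard library alone.

```python
# Hypothetical (no_show, attended) counts per visit type; substitute the
# real counts from the notebook's contingency table.
observed = {
    "telehealth": (90, 410),
    "in_person": (120, 880),
    "procedure": (30, 470),
}

rows = list(observed.values())
row_totals = [a + b for a, b in rows]
col_totals = [sum(a for a, _ in rows), sum(b for _, b in rows)]
grand_total = sum(row_totals)

# Pearson chi-square: sum of (observed - expected)^2 / expected.
chi2 = 0.0
for (no_show, attended), row_total in zip(rows, row_totals):
    for col, obs in enumerate((no_show, attended)):
        exp = row_total * col_totals[col] / grand_total
        chi2 += (obs - exp) ** 2 / exp

print(f"chi-square statistic: {chi2:.2f}")
```

If the statistic you recompute disagrees with the notebook's, one of the two calculations is wrong, which is exactly what a cross-check is supposed to be able to tell you.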
✓ Check: Every number in the report matches the notebook. Confidence intervals appear in the executive summary, not just the details. The report addresses all four of Wanjiku's questions (rate, patterns, visit types, trend).