Step 1: The last question
You have the overall no-show rate, the breakdowns by day, time, and visit type, and a chi-square test confirming that the visit type differences are real. One question remains from Wanjiku's original brief: is the no-show problem getting worse over time, or has it been roughly stable?
This is a different kind of question from what you have been answering. The breakdowns sliced the data by category — day, time, visit type. A temporal trend slices the data by time. The x-axis is not a category but a sequence: months in order, from the start of the dataset to the end.
Open materials/analysis-specification.md and read the section on temporal trends. The specification asks for a monthly no-show rate — one rate per month, plotted over the full 18-month period.
Step 2: Compute the monthly trend
Direct Claude to compute the no-show rate for each month in the dataset and plot it as a line chart. Be specific: you want one data point per month, with the month on the x-axis and the no-show rate on the y-axis.
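To make the request concrete, here is one way the computation might look in pandas. The column names (`appointment_date`, `no_show`) are assumptions, and the data below is synthetic — with the real dataset you would load the appointments file instead:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic stand-in for the appointments data (column names are assumptions).
rng = np.random.default_rng(0)
dates = pd.date_range("2023-01-01", "2024-06-30", freq="D")  # 18 months
df = pd.DataFrame({
    "appointment_date": rng.choice(dates, size=5000),
    "no_show": rng.random(5000) < 0.12,  # ~12% baseline no-show rate
})

# One data point per calendar month: the mean of a 0/1 no-show flag.
monthly = (
    df.assign(month=df["appointment_date"].dt.to_period("M"))
      .groupby("month")["no_show"]
      .mean()
      .sort_index()
)

ax = monthly.plot(marker="o")
ax.set_xlabel("Month")
ax.set_ylabel("No-show rate")
ax.set_title("Monthly no-show rate")
plt.tight_layout()
plt.savefig("monthly_no_show.png")
```

Taking the mean of a boolean column is a standard trick: `True` counts as 1, so the mean is exactly the proportion of no-shows in each month.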
Look at the shape of the line. Is it climbing steadily upward? Dropping? Bouncing around a stable average? The answer matters for Wanjiku — if the rate is climbing, she has an urgent problem. If it is stable, the problem is real but not accelerating, and she has time to address it systematically.
Check the result against materials/verification-targets.md. The target describes what the trend should look like at a high level.
Step 3: Read the pattern honestly
The trend should show a relatively stable no-show rate — not dramatically increasing or decreasing. There will be month-to-month variation. Some months spike, some months dip. That is normal. With a few hundred appointments per month, random variation alone will produce fluctuations of a few percentage points.
This is where confidence intervals matter again. A single month's rate is based on fewer observations than the overall rate, so each monthly estimate is less precise. A month that looks like it spiked to 18% might have a confidence interval that stretches from 14% to 22% — meaning the spike could easily be noise. When you look at the trend, you are looking for a pattern that persists across many months, not one month that looks unusual.
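A quick way to see how wide a single month's interval really is, using the normal approximation for a proportion (the counts below are illustrative, not taken from the dataset):

```python
import math

def no_show_ci(no_shows, total, z=1.96):
    """Approximate 95% confidence interval for a no-show rate,
    using the normal approximation to the binomial."""
    p = no_shows / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of a proportion
    return p, p - z * se, p + z * se

# Illustrative month: 45 no-shows out of 250 appointments.
rate, lo, hi = no_show_ci(45, 250)
print(f"rate {rate:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

With only 250 appointments, the interval spans nearly ten percentage points — which is exactly why a single month's apparent spike is weak evidence on its own.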
If Wanjiku asks "is it getting worse?" and the honest answer is "no, it's been stable with normal variation," that is a useful finding. Stability is not a non-result. It means the problem is structural — built into how the practice operates — rather than something that suddenly got worse. That changes how she thinks about solutions.
Step 4: Make the notebook reproducible
The analysis is nearly complete. Before you move to the findings report, there is one thing left to do: make sure the notebook actually works.
A Jupyter notebook records the order you ran cells during your session. But that order might not be the order the cells appear in the file. You might have gone back and re-run an earlier cell after changing something, or skipped a cell and come back to it. The notebook remembers what happened, but someone opening it fresh would run cells top to bottom — and if the results depend on a different order, they would get different numbers or errors.
Reproducibility means: someone else can open the notebook, run every cell from top to bottom, and get the same results you got. The simplest test is to do it yourself.
A Jupyter notebook keeps variables in memory across cell executions, so old results can persist even after you change the code. Restarting the kernel clears everything and runs cells fresh, proving the notebook works from scratch.
Direct Claude to restart the kernel and run all cells. In Jupyter this is "Restart & Run All": it wipes everything from memory and executes every cell in document order, starting from the first.
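The same clean-run check can be done programmatically with the `nbclient` library (installed as part of Jupyter). The two-cell notebook built in memory below is a stand-in for the real analysis file:

```python
import nbformat
from nbclient import NotebookClient

# Build a tiny two-cell notebook in memory as a stand-in for the real
# analysis notebook (for a file on disk, use nbformat.read instead).
nb = nbformat.v4.new_notebook()
nb.cells = [
    nbformat.v4.new_code_cell("rate = 45 / 250"),
    nbformat.v4.new_code_cell("print(f'{rate:.1%}')"),
]

# Executing in a fresh kernel is the programmatic equivalent of
# "Restart & Run All": every cell runs top to bottom from a clean slate.
NotebookClient(nb).execute()

# The second cell's printed output, captured in the executed notebook.
output = nb.cells[1].outputs[0]["text"]
print(output)
```

From the command line, `jupyter nbconvert --to notebook --execute --inplace your-notebook.ipynb` performs the same fresh-kernel, top-to-bottom run on a notebook file (the filename here is a placeholder).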
Step 5: Fix what breaks
If every cell runs and the outputs match what you had before, the notebook is reproducible. You are done with this step.
If something fails — a cell throws an error, or a variable is not defined when it should be — then the notebook has an ordering problem. A cell that uses a variable defined later in the notebook, or a cell that depends on output from a cell you ran out of sequence during your session, will break on a clean run.
Fix the ordering. Move cells so that every variable is defined before it is used, every import is at the top, and every computation flows downward. Then restart and run all again. Repeat until the notebook runs cleanly from top to bottom with no errors and no changed results.
This is not a formality. A notebook that only works when cells are run in the right order is not a finished artifact. If Wanjiku's staff wanted to check your numbers, they would open the notebook and run it. If it fails, the analysis is not verifiable — and an analysis that cannot be verified is not complete.
✓ Check: The temporal trend should show a relatively stable no-show rate — not dramatically increasing or decreasing. The notebook should run cleanly on "restart and run all."