Step 1: The deliverable email
Wei needs the results before the board meeting. Send the analysis summary by email -- open the chat and paste the key findings:
Wei, here are the campaign effectiveness results.
Summary: The campaign generated an estimated [X-Y] additional new patients across the five original clinics (95% CI), with a statistically significant effect (p = [value]). Total spend was CNY 180,000.
Key findings:
1. Cosmetic dentistry: significant increase in new patient bookings (p = [value], 95% CI: [range]). This is the strongest campaign effect.
2. General dentistry: the observed increase was not statistically significant (p = [value], 95% CI: [range]). The increase is consistent with normal seasonal variation.
3. Channel breakdown: [WeChat ads / KOL / referral results with ranges]
4. Cost per incremental patient: CNY [range] overall, varying by channel.
Data notes: Gaoxin clinic excluded (opened during campaign, all patients new by definition). Referral channel estimate carries additional uncertainty -- baseline cannot separate organic from bonus-driven referrals. About 3% of bookings have no source attribution.
Replace the bracketed values with the actual numbers from the analysis. Every number should be a range, not a point estimate. The data notes section is essential -- it tells the board what the analysis can and cannot assert.
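One way to produce those ranges is sketched below, using only the Python standard library. The counts are hypothetical placeholders (the real analysis supplies the actual numbers), and the Poisson-difference standard error is one simple interval method; the project's own z-test for proportions may give slightly different bounds.

```python
import math
from statistics import NormalDist

baseline_new = 240    # hypothetical: new patients in the comparison window
campaign_new = 310    # hypothetical: new patients during the campaign window
spend_cny = 180_000   # total spend, from the brief

# Treat each count as Poisson; the variance of the difference is the
# sum of the counts, so SE = sqrt(n1 + n2).
diff = campaign_new - baseline_new
se = math.sqrt(campaign_new + baseline_new)
z = NormalDist().inv_cdf(0.975)   # ~1.96 for a 95% interval
low, high = diff - z * se, diff + z * se

print(f"Incremental patients: {diff} (95% CI: {low:.0f} to {high:.0f})")
print(f"Cost per incremental patient: CNY {spend_cny / high:,.0f} "
      f"to CNY {spend_cny / low:,.0f}")
```

Note how the cost-per-patient range inverts the interval: the low patient estimate gives the high cost bound, and vice versa.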
Wei responds: "This is exactly what I needed. The board will understand 'the campaign generated X-Y additional cosmetic patients at CNY Z-W per patient.' The general dentistry finding is disappointing but honest -- better to know now. I'll follow up on the retention question after we secure next quarter's budget."
Step 2: Managing scope creep
Wei adds a follow-up request: "Can you also look at the retention of campaign-acquired patients versus organic ones? If the campaign patients don't come back, the ROI is worse than it looks."
This is a valid question. It is also a different project. The current dataset covers two years of bookings, ending in December of the campaign year. Retention analysis requires follow-up data -- did the campaign-acquired patients return in the months after the campaign ended? That data does not exist yet.
Acknowledge the value of the question and defer it:
That's an important question -- if campaign patients have lower retention, the effective cost per patient is higher than the initial numbers suggest. But the current dataset ends in December, so we can't track post-campaign return visits yet. I'd recommend revisiting this in Q2 when you have at least three months of follow-up data. That would be a separate analysis with its own hypothesis test: "Is the return rate for campaign-acquired patients different from the return rate for organic patients?"
This is scope management. The analysis answers the question it was designed to answer. Extending it to retention without the right data would produce unreliable results -- or worse, AI would generate plausible-looking numbers from insufficient data.
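When the follow-up data does exist, the deferred retention question is a two-proportion z-test. A minimal sketch, using made-up follow-up counts purely for illustration:

```python
import math
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 52 of 180 campaign-acquired patients returned within three
# months, versus 210 of 600 organic patients.
z, p = two_proportion_ztest(52, 180, 210, 600)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these invented counts the difference would not reach significance at alpha = 0.05, which is exactly why the real test needs to wait for real Q2 data rather than be run on guesses.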
Step 3: Writing the README
Direct AI to write a project README:
Write a README.md for this project. Include: what was analyzed (BrightSmile Dental campaign effectiveness), the statistical methods used (z-test for proportions, chi-squared test), the key findings with confidence intervals, the tools used, and the data sources. Keep it concise.
The README documents what was built, what was found, and how. Anyone opening the repository should understand the analysis in under a minute.
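The chi-squared test named in the README prompt checks whether the channel mix of new patients differs between the campaign and baseline windows. A stdlib-only sketch with hypothetical counts (the real numbers come from the booking data):

```python
# Rows = channels, columns = (campaign window, baseline window).
# All counts below are hypothetical placeholders.
observed = [
    [95, 60],   # WeChat ads
    [70, 55],   # KOL
    [45, 40],   # referral
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Chi-squared statistic: sum of (observed - expected)^2 / expected,
# where expected[i][j] = row_total[i] * col_total[j] / grand.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(len(observed))
    for j in range(len(observed[0]))
)
df = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi2 = {chi2:.2f} on {df} df")  # compare to the 5.99 critical value at alpha = 0.05, df = 2
```

In practice `scipy.stats.chi2_contingency` does this in one call; the manual version just makes the expected-count arithmetic visible.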
Step 4: Push to GitHub
Commit the remaining work and push:
git add -A
git commit -m "Complete campaign effectiveness analysis with statistical tests and uncertainty reporting"
git push origin main
The repository should contain: the booking data, the analysis notebooks, the statistical test results, the uncertainty-aware charts, the dashboard configuration, and the README.
Check: You should have a Git repository with the analysis, the dashboard, the statistical results, and a README describing the project.