Step 1: Send the findings package to Hassan
Go to the platform and send Hassan the cover email with the two deliverables: the findings summary and the technical appendix. Attach your completed methodology memo as well -- his partner will want to see it.
This is the delivery. Everything from Units 1 through 6 has been building toward this moment: the question framing, the data cleaning, the inferential analysis, the validation, and the translation.
Step 2: Read Hassan's response
Hassan responds with two emails, twenty minutes apart.
First email: enthusiastic. The marketing finding answers his main question. The seasonal patterns are exactly what he needed for guide staffing -- he can now plan hires for the October-April peak season with specific numbers. He mentions that his partner read the technical appendix and appreciated the methodology documentation. "This is exactly what I needed. Now I can tell my partner: here's the evidence, let's increase the digital budget."
Second email: two scope extensions. "My silent partner mentioned we should also compare ourselves to industry benchmarks -- can you add that? And one more thing -- can you predict bookings for next year so I can plan hiring?"
Step 3: Handle the scope extensions
Hassan's two requests require different responses.
Industry benchmark comparison: A reasonable idea, but it requires external benchmark data that Hassan does not have. What does "industry" mean for a 20-person boutique operator in Cairo? Comparing Nile Compass Tours to Egypt's overall tourism numbers or to large international tour operators would not be meaningful. If Hassan has data from a peer group of similar operators, a comparison could work. Without it, the analysis would be speculative. Respond professionally: explain what data would be needed and recommend collecting it for a future engagement.
Booking prediction: This is not just an extension -- it is a different question type. The entire analysis was designed around inference: "which factors are associated with growth?" Prediction asks: "how many bookings next quarter?" It requires different methods (time series forecasting, holdout evaluation), different validation (accuracy metrics, not assumption checks), and produces a different deliverable (point forecasts with error bounds, not coefficient interpretations).
You determined the question type in Unit 2. You can now articulate why prediction would be a separate analysis, not an add-on to the current one. Respond to Hassan explaining the distinction and recommending prediction as a follow-on project with its own scope.
This is the payoff of owning the question type. If you understand why inference was chosen, you can explain why prediction is different -- and why trying to bolt it onto an inferential analysis would produce weak results.
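The inference/prediction distinction can be made concrete with a small sketch on synthetic data. Everything here is illustrative: the numbers, the variable names, and the seasonal-naive baseline are assumptions for demonstration, not Hassan's actual figures or the project's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly data: 36 months of bookings for a small tour operator.
months = np.arange(36)
peak = np.isin(months % 12, [9, 10, 11, 0, 1, 2, 3]).astype(float)  # Oct-Apr peak
marketing = (months >= 24).astype(float)  # digital-marketing shift in month 24
bookings = 120 + 60 * peak + 45 * marketing + rng.normal(0, 10, size=36)

# Inference: fit a linear model and interpret the coefficient on `marketing`.
# This answers "is the marketing shift associated with growth, controlling
# for seasonality?" -- a coefficient, not a forecast.
X = np.column_stack([np.ones(36), peak, marketing])
coefs, *_ = np.linalg.lstsq(X, bookings, rcond=None)
print(f"Inference: marketing associated with ~{coefs[2]:.0f} extra bookings/month")

# Prediction would instead hold out recent months and score forecast accuracy.
# This answers "how many bookings next quarter?" -- an error metric, not a
# coefficient interpretation.
actual = bookings[30:]       # last 6 months held out
forecast = bookings[18:24]   # seasonal-naive baseline: same 6 months a year earlier
mae = np.abs(actual - forecast).mean()
print(f"Prediction: seasonal-naive holdout MAE = {mae:.1f} bookings")
```

Note how the two halves produce different deliverables from the same data: the first yields an effect size to interpret, the second an accuracy number to compare against other forecasting methods. That is why bolting prediction onto an inferential analysis produces weak results for both questions.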
Step 4: Write the decision record
Create `decision-record.md`. This documents the most consequential analytical decision in the project: the question type determination.
Structure it:
- Decision: Framed the analysis as inference (not prediction)
- Context: Hassan's brief was ambiguous. "Understand our booking patterns" could have been answered with description, inference, prediction, or causation. AI suggested prediction.
- Why inference: Hassan's actual decision was whether to increase his digital marketing budget. That requires knowing whether the marketing shift is associated with growth -- an inference question. A prediction model would tell him how many bookings to expect but not whether his marketing change worked.
- Why not prediction: AI defaulted to the most technically impressive approach. Prediction would have produced a forecast model with accuracy metrics -- useful for hiring decisions but not for the marketing budget question Hassan needed answered.
- What changed because of this decision: The entire analytical approach followed from the question type. Regression instead of forecasting. Coefficients and effect sizes instead of accuracy metrics. Assumption checks instead of holdout evaluation. Translation into "associated with X additional bookings" instead of "expect Y bookings next quarter."
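A minimal sketch of what decision-record.md could look like, assembled from the points above (the headings and wording are illustrative -- adapt them to your own project):

```markdown
# Decision Record: Question Type

## Decision
Framed the analysis as inference (not prediction).

## Context
Hassan's brief ("understand our booking patterns") was ambiguous: it could be
answered with description, inference, prediction, or causation. AI suggested
prediction.

## Why inference
Hassan's actual decision was whether to increase his digital marketing budget,
which requires knowing whether the marketing shift is associated with growth.

## Why not prediction
A forecast model would tell Hassan how many bookings to expect, but not
whether his marketing change worked.

## Consequences
Regression instead of forecasting; coefficients and effect sizes instead of
accuracy metrics; assumption checks instead of holdout evaluation.
```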
This decision record is a permanent artifact. When someone revisits this analysis, they can see why the approach was chosen and what the alternative would have produced.
Step 5: Commit and push to GitHub
Direct AI to commit the project to Git with a meaningful commit message that describes the analysis, not just the files. Something like:
```text
feat: inferential analysis of booking patterns for Nile Compass Tours

Determined question type as inference (not prediction). Marketing shift
associated with ~47 additional bookings/month after controlling for
seasonality and Luxor launch. Effect size moderate. Attribution data
self-reported -- channel-level conclusions limited.
```
Then push to GitHub. Verify the push succeeded.
Your repository should contain: the analysis notebook, findings summary, technical appendix, methodology memo, decision record, and the materials folder.
Check: Hassan has received and responded to the findings. Both scope extensions are addressed (the benchmark comparison needs external data; prediction is a different question type). The decision record documents the question type determination. The Git push succeeded.