Step 1: Complete the methodology memo
Open materials/methodology-memo-template.md and fill in the remaining sections: Methods (which tests and models you used and why), Assumptions Checked (which assumptions you tested, the results, and what action you took), and Limitations (what this analysis cannot support).
This is the first project where the analytical artifacts are yours to create. In the previous two projects, verification targets told you what correct looked like. Here, the methodology memo has no answer key. You decided the methods, checked the assumptions, and now you document what you did and why. The memo is the evidence that the analysis was deliberate.
Step 2: What cross-model review is and why it matters here
You have been working with one AI session through this entire analysis. That session has context from every decision you made -- the join strategy, the assumption checks, the model selection. That shared context is useful, but it also creates a blind spot: the AI that produced the analysis evaluates it through the same lens that created it.
Cross-model review uses a second AI session with fresh context. The second AI sees only the brief, the data dictionaries, and the methodology memo -- not the conversation that produced the analysis. It reviews the work from scratch. Different context surfaces different issues.
This is not about one AI being smarter than another. It is about the structural value of a fresh perspective. The same principle applies when you ask a colleague to review your work -- they catch things you missed because they do not share your assumptions.
Step 3: Run the cross-model review
Open a second Claude session. Give it only three things:
- Somchai's brief (from the client email)
- The data dictionaries
- Your completed methodology memo
Ask it: "Review this methodology memo. Does the methodology match the questions? Are there gaps? Are the assumptions appropriately checked?"
Read the review carefully. The second AI might catch that a test was applied to data that violates its assumptions. It might flag a finding that overreaches from the evidence. It might identify a gap you did not notice.
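As a concrete illustration of the first kind of catch, here is a minimal sketch -- synthetic data, not the project's -- of the check a reviewer would expect before a two-group comparison: test the distributional assumption, and fall back to a rank-based test when it fails:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Illustrative data only: satisfaction scores for two properties,
# one of them skewed enough to strain a t-test's assumptions.
prop_a = rng.normal(loc=4.1, scale=0.5, size=80)
prop_b = rng.lognormal(mean=1.35, sigma=0.2, size=80)

# Check normality first; a reviewer will ask whether you did.
_, p_a = stats.shapiro(prop_a)
_, p_b = stats.shapiro(prop_b)

if min(p_a, p_b) < 0.05:
    # Distributional assumption is doubtful: use a rank-based test.
    stat, p = stats.mannwhitneyu(prop_a, prop_b, alternative="two-sided")
    test_used = "Mann-Whitney U"
else:
    stat, p = stats.ttest_ind(prop_a, prop_b, equal_var=False)
    test_used = "Welch's t-test"

print(f"{test_used}: statistic={stat:.3f}, p={p:.4f}")
```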
Step 4: Address the cross-check findings
Some of the second AI's findings will be legitimate issues. Fix those.
Some will not be actual problems. Note why. This is pattern recognition -- learning to distinguish useful feedback from noise in AI review output. A reviewer with fresh context may, for example, flag a limitation you already documented and accounted for; recording why the flag does not change your conclusions is as valuable as fixing a real gap. Over time, you build a mental model of what AI reviews consistently catch versus what they consistently miss.
Step 5: Synthesize the findings
You have two analyses: an inferential analysis (which differences are real) and a prediction model (what drives satisfaction). Together, they tell one story.
The inferential analysis says property differences exist but are small. The prediction model says season and room type are stronger drivers than the property itself. Together: "The board's instinct that some properties outperform others is not wrong -- the differences are statistically real. But those differences are smaller than they appear in raw numbers, and much of what looks like a property effect is actually a seasonal pattern."
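The confound behind that synthesis is easy to demonstrate. This sketch uses invented numbers, not the project's data, to show how a seasonal booking mix can inflate raw property differences, and how a model that controls for season recovers the small true effect:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 600

# Synthetic illustration: property B gets more high-season stays,
# and high season lifts satisfaction at every property.
prop = rng.choice(["A", "B"], size=n)
p_high = np.where(prop == "B", 0.7, 0.3)           # booking mix differs
season = np.where(rng.random(n) < p_high, "high", "low")
satisfaction = (
    3.8
    + 0.05 * (prop == "B")        # small true property effect
    + 0.40 * (season == "high")   # larger seasonal effect
    + rng.normal(0, 0.3, size=n)
)
df = pd.DataFrame(
    {"prop": prop, "season": season, "satisfaction": satisfaction}
)

# Raw means overstate the property difference...
print(df.groupby("prop")["satisfaction"].mean().round(2))

# ...while a model that controls for season recovers the small true effect.
fit = smf.ols("satisfaction ~ C(prop) + C(season)", data=df).fit()
print(fit.params.round(3))
```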
Direct the AI to draft a synthesis that combines the inferential and predictive findings into one coherent narrative. The synthesis should be in plain language -- it is the foundation for what you will present to Somchai.
Check: Methodology memo complete. Cross-model review performed. Findings synthesized. Key findings in plain language.