Step 1: Send the findings to Eunji
Open the platform and send Eunji the findings summary. Your covering message should lead with what her buying team needs: which SKUs to stock up on, which products are showing upward social media trends, and how far ahead the forecast can see.
Do not lead with methodology. Eunji does not need to know you used gradient boosting or that your temporal split was at July 2025. She needs to know what to order and how much to trust the numbers.
Attach the findings summary. Reference the methodology memo as supporting evidence she can share with her data team if they want the technical details.
Step 2: Read Eunji's response
Eunji responds in her usual fast Slack bursts:
- ok this is really good
- the seasonal stuff -- finally, we won't over-order sunscreen in November
- wait, so same-day mentions don't actually predict anything? that's... kind of obvious when you say it like that
- the trend products are harder, yeah, makes sense
- can we also predict which new products from unknown brands are likely to go viral? not just existing SKU demand but emerging products
Read all of it before responding. The first part is approval. The sunscreen comment confirms she understood the seasonal forecast. The leakage insight landed -- she sees why same-day mentions were misleading once you explained it in her terms.
The last message is different. That is a scope extension.
Step 3: Handle the scope extension
Eunji is asking whether you can predict which new products from unknown brands will go viral. This sounds like a natural extension of the demand forecasting work. It is not.
Demand forecasting predicts how much of a known product will sell based on historical patterns and leading indicators. The model you built does this -- it uses past sales, calendar features, and lagged social media signals for 200 existing SKUs.
Predicting which unknown products will go viral is a fundamentally different problem. There is no historical sales data for products that do not exist in the catalog yet. There are no SKU-level social media lags because nobody is mentioning a product Glow Republic has not listed. The data inputs, the model architecture, and the validation approach would all be different.
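To make the distinction concrete, here is a minimal sketch of the kind of SKU-level feature table the existing demand model depends on. The column names and values are invented for illustration, not the actual Glow Republic schema; the point is that every feature is anchored to a known SKU's history, which is exactly what a not-yet-listed product lacks.

```python
import pandas as pd

# Hypothetical daily SKU-level frame; columns are illustrative only.
df = pd.DataFrame({
    "sku": ["A"] * 10,
    "date": pd.date_range("2025-06-01", periods=10, freq="D"),
    "units_sold": [5, 7, 6, 8, 9, 7, 6, 10, 11, 9],
    "mentions": [2, 3, 1, 4, 6, 5, 2, 8, 9, 7],
}).sort_values(["sku", "date"])

# Lag social mentions by seven days so the model only sees signals
# that would actually be available at forecast time.
df["mentions_lag7"] = df.groupby("sku")["mentions"].shift(7)

# Calendar features are always knowable in advance.
df["dayofweek"] = df["date"].dt.dayofweek
df["month"] = df["date"].dt.month
```

A product with no sales history and no mention stream produces only empty rows here, which is why the viral-prediction question needs different inputs entirely.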
Respond to Eunji professionally. Acknowledge that this is a valuable question -- it is where the real money is, and she is right about that. Then explain the distinction: the demand forecasting model predicts quantities for known products, while the viral prediction question requires a different kind of analysis -- one that would look at brand emergence signals, category-level trends, and influencer behavior patterns rather than SKU-level sales history.
Frame it as a separate engagement, not a rejection. Something like: "This is a great next question. It needs different data and a different approach. I'd recommend scoping it as its own project once the demand forecasting pipeline is running."
Step 4: Write the decision record
The project involved several preparation decisions that shaped the model's behavior. Pick the most consequential one and write a decision record.
Candidates:
- Same-day vs lagged social media features. Using same-day mentions inflated accuracy by 3x but would fail completely in production. Lagging by seven days preserved the genuine predictive signal while removing the leakage.
- Temporal vs random train/test split. A random split leaked temporal patterns into training. The temporal split produced worse metrics on paper but reflected real-world forecasting conditions.
- Stockout period handling. Treating zero-sales periods during stockouts as real zero demand would teach the model that nobody wanted the product when the truth was that nobody could buy it.
- Seasonal vs trend-driven product separation. Treating all SKUs the same averaged out the different forecasting dynamics. Separating them gave seasonal products better forecasts and set honest expectations for trend-driven products.
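The train/test split candidate above can be sketched in a few lines. The data here is synthetic and the July 2025 cutoff mirrors the memo; everything else is invented for illustration.

```python
import pandas as pd

# Synthetic daily series standing in for one SKU's sales history.
df = pd.DataFrame({
    "date": pd.date_range("2025-01-01", periods=300, freq="D"),
    "units_sold": range(300),
})

# Temporal split: train strictly before the cutoff, test strictly after,
# matching real forecasting conditions where the future is unseen.
cutoff = pd.Timestamp("2025-07-01")
train = df[df["date"] < cutoff]
test = df[df["date"] >= cutoff]

# A random split would scatter future days into the training set,
# leaking temporal patterns and flattering the metrics.
assert train["date"].max() < test["date"].min()
```

Whichever decision you document, a small code fragment like this in the decision record makes the stakes reproducible rather than anecdotal.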
The decision record should capture three things: what was decided, why, and what would have changed if the decision went differently. The "what would have changed" part is the most important -- it documents the stakes. If someone revisits this decision later, they need to understand not just what you chose but what they would lose by choosing differently.
Direct AI to create decision-record.md in the project directory. Keep it concise -- one to two pages.
Step 5: Commit and push to GitHub
Direct AI to stage all project files, commit with a meaningful message, and push to GitHub. The commit message should describe the project deliverable, not the process: something like "Add Glow Republic demand forecast with methodology memo and findings summary."
Verify the push succeeded. Check that the repository contains the notebook, the findings summary, the completed methodology memo, the decision record, and any other supporting materials.
Check: Findings delivered. Scope extension handled. Decision record written. Git push succeeded.