Learn by Directing AI
Unit 5

Amina's review and the self-service test

Step 1: Share the exploratory dashboard

Open the chat with Amina. Share the exploratory dashboard. Describe what she can do: click any store to see the category breakdown, filter by time period, compare stores side by side.

Amina clicks Oyster Bay. Drills into Fiction. Sees the subcategory breakdown -- Literary Fiction, Genre Fiction, Poetry, Drama. She likes what she sees. The drill-down answers her natural follow-up question: "I can see Fiction is Oyster Bay's strongest category -- but which types of fiction?"

This is the difference between plotly charts in a notebook and a Metabase dashboard a stakeholder visits daily. The plotly charts were your prototyping workspace -- you explored the data, tested the interactions, verified the metrics. The Metabase dashboard is the delivery tool. Amina does not open Jupyter. She opens Metabase.

Step 2: Share the guided view

Show Amina the guided manager view. She is surprised -- she had not thought about her managers needing a different presentation.

"Yes, this is exactly what Kariakoo needs. She wouldn't know what to click on the other dashboard."

Raj's question landed. The least data-literate person who will use this is the Kariakoo manager. The guided view -- pre-filtered to her store, three questions already answered, "needs attention" flags drawing her eye to what matters -- is designed for her. Amina sees the difference immediately.

Step 3: Handle scope creep

Amina has feedback: "Could we add our author events data? I want to know if event attendance drives sales in the following week."

This is a natural extension. She is thinking about her business, not about scope boundaries. Assess the request: it requires a new data source -- event attendance records -- that is not in the current POS export. Linking events to post-event sales means matching dates, locations, and possibly individual customers.

Respond through the chat. The analysis Amina is describing is valuable, but it is a separate project. The current POS data does not contain event information. Building the link between events and sales requires a different dataset and a different analytical approach.

Defer gracefully. The dashboards are ready with the data you have. The event analysis is a follow-up project, not a feature request on this one.

Step 4: Fix interaction path issues

Review the interaction path testing from Unit 4. Address anything that did not work cleanly.

Filters that produce empty results should show an explanation, not a blank panel. Drill-downs that lead to sparse data should trigger the minimum-data-point warning. Any metric label that still shows a raw column name needs correction.

Review all interaction paths on both dashboards. For each path, verify: (1) the drill-down produces a meaningful result, (2) sparse data triggers a warning, (3) all labels are human-readable, (4) the same metric produces the same number on both dashboards. List any issues found and fix them.
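The per-path checklist above can be sketched as a small helper. This is a minimal illustration, not a Metabase API: the threshold `MIN_DATA_POINTS`, the row shape `(label, value)`, and the `looks_like_raw_column` heuristic are all assumptions for the sketch.

```python
MIN_DATA_POINTS = 5  # assumed threshold for the sparse-data warning


def looks_like_raw_column(label: str) -> bool:
    """Heuristic: raw column names tend to be all-lowercase snake_case."""
    return "_" in label or label.islower()


def check_path(rows: list[tuple[str, float]]) -> list[str]:
    """Return the issues found on one interaction path (empty list = clean)."""
    issues = []
    if not rows:
        # An empty drill-down should show an explanation, not a blank panel.
        issues.append("empty result: show an explanation, not a blank panel")
    elif len(rows) < MIN_DATA_POINTS:
        # Sparse data should trigger the minimum-data-point warning.
        issues.append(f"sparse data ({len(rows)} rows): trigger the warning")
    for label, _value in rows:
        if looks_like_raw_column(label):
            # Any raw column name still showing needs a human-readable label.
            issues.append(f"raw column name in label: {label!r}")
    return issues
```

Running `check_path` over each drill-down result on both dashboards turns the checklist into a repeatable pass rather than a one-time eyeball.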

Step 5: Verify cross-tool consistency

This is the final verification. Pick any store, any category, any month. Check the revenue number in three places: the plotly chart, the exploratory Metabase dashboard, and the guided manager view. All three must match.

For Kariakoo, Children's, November 2025: check the retail revenue number in the plotly chart, the exploratory Metabase dashboard, and the guided manager view. Do all three show the same number? If not, identify which metric definition each tool is using and where they diverge.
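One way to make the three-way check explicit is a small comparison helper. This sketch assumes you have already read the Kariakoo / Children's / November 2025 figure out of each tool (the plotly source DataFrame and the two Metabase cards); the fetch step is tool-specific and omitted here, and the 0.01 tolerance is an assumption to absorb currency rounding.

```python
def revenues_match(plotly_val: float, exploratory_val: float,
                   guided_val: float, tolerance: float = 0.01) -> bool:
    """True when all three tools report the same revenue, within rounding."""
    values = [plotly_val, exploratory_val, guided_val]
    return max(values) - min(values) <= tolerance
```

A mismatch larger than rounding usually means one tool is applying a different metric definition, which is exactly the divergence the next step asks you to trace.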

Metric governance is not a one-time task. It is an ongoing responsibility. Every time a new view is added, a new filter is created, or a new audience accesses the data, the metric definitions must hold. The shift from "define a metric" to "govern a metric" is the shift from setting it up to keeping it honest across every tool that uses it.

If any numbers disagree, trace the cause. It is usually a filter boundary difference (one tool includes bulk orders, the other does not) or an aggregation difference (sum versus average). Fix it by aligning the SQL in both tools to the documented metric definition.
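One way to keep the tools aligned is to hold the documented metric definition as a single canonical query and paste that same text into every tool that reports the metric. The table and column names below (`pos_sales`, `line_total`, `order_type`) are assumptions for the sketch, not the actual POS schema.

```python
# Canonical definition of retail revenue: one source of truth for the
# plotly notebook, the exploratory dashboard, and the guided manager view.
RETAIL_REVENUE_SQL = """
SELECT SUM(line_total) AS retail_revenue
FROM pos_sales
WHERE store = %(store)s
  AND category = %(category)s
  AND sale_date >= %(month_start)s
  AND sale_date <  %(month_end)s      -- half-open month boundary, no overlap
  AND order_type = 'retail'           -- excludes bulk orders by definition
"""
```

Because every tool runs the same text, a filter-boundary or aggregation drift shows up as a visible diff against one definition instead of a silent divergence between tools.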

✓ Check

Check: Pick any store, any category, any month. Does the revenue number match across the plotly chart, the exploratory Metabase dashboard, and the guided manager view? If they differ, which metric definition is each tool using?