Step 1: Plan the dual-audience design
Two dashboards from the same data. Different audiences, different designs.
Amina gets an exploratory dashboard. She controls everything -- store, category, time period. She clicks, drills down, explores. She is comfortable with data and knows what to look for.
Her three store managers get a guided view. Pre-filtered to their store. Three questions already answered: "How is my store doing this month?" "Which categories are growing?" "What needs attention?" The managers do not need to decide what to click. The dashboard tells them what matters.
Self-service analytics succeeds or fails based on whether the dashboard is designed at the stakeholder's literacy level, not the analyst's. Amina's exploratory view works because she knows what questions to ask. The same view would fail for the Kariakoo manager, who would not know what to click first.
Step 2: Raj Patel's question
Open the senior colleague chat. Raj Patel asks a question that reframes the design work:
"Who's the least data-literate person who'll use this? Design for them first."
The guided view is the harder problem. The exploratory dashboard is mostly a matter of wiring up filters and drill-downs. The guided view requires you to decide which questions matter, how to present the answers, and what to leave out. Designing for the Kariakoo manager -- who has never used a dashboard -- forces clarity.
Step 3: Build the exploratory dashboard
Open Metabase. Create Amina's exploratory dashboard. Use SQL mode for the panels -- the metric definitions from Unit 2 are your queries. Every interactive element on this dashboard -- every filter, every clickable bar, every drill-down -- is a promise: "this will help you answer a follow-up question." A drill-down that leads nowhere breaks that promise.
Build an exploratory Metabase dashboard for Soma Books. SQL mode panels. Include:
1. A store comparison panel showing retail revenue (excluding bulk orders) for all three stores
2. A category breakdown panel that updates when a store is selected via cross-filter
3. A monthly trend panel showing revenue over time with school order months annotated
4. Filter widgets at the top: Store, Category, Date Range
Use the metric definitions we wrote in Unit 2. The revenue numbers must match what the Plotly charts show.
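Before wiring the panels in Metabase, it can help to sanity-check the store-comparison query outside the dashboard. This is a minimal sketch, assuming a hypothetical `sales` table with an `is_bulk_order` flag -- the real Soma Books schema and the Unit 2 metric definitions may name things differently:

```python
import sqlite3

# Hypothetical schema and sample rows -- replace with the real tables
# and the exact exclusion rule from Unit 2.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (
    store TEXT, category TEXT, sale_date TEXT,
    amount REAL, is_bulk_order INTEGER
);
INSERT INTO sales VALUES
    ('Kariakoo', 'Fiction',    '2024-05-03', 120.0, 0),
    ('Kariakoo', 'Textbooks',  '2024-05-10', 900.0, 1),  -- school bulk order
    ('Mwenge',   'Fiction',    '2024-05-04',  80.0, 0),
    ('Posta',    'Stationery', '2024-05-05',  45.0, 0);
""")

# Panel 1: retail revenue per store, bulk orders excluded.
rows = conn.execute("""
    SELECT store, SUM(amount) AS retail_revenue
    FROM sales
    WHERE is_bulk_order = 0      -- exclusion rule for school bulk orders
    GROUP BY store
    ORDER BY retail_revenue DESC
""").fetchall()

for store, revenue in rows:
    print(store, revenue)
```

The bulk Textbooks order drops out of Kariakoo's total, which is exactly the behavior the dashboard panel needs to reproduce.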
Notice the context you are giving Claude for this session. You included the metric definitions, the exclusion rules for bulk orders, and the cross-filter requirement. Context quality determines output quality. A prompt that says "build a Metabase dashboard" produces generic output. A prompt that specifies the metric definitions and interaction paths produces what Amina needs.
Step 4: Build the guided manager view
This is the design Raj's question pointed to. A different dashboard, same data.
Build a guided Metabase dashboard for Soma Books store managers. This view is for people who do not explore data -- the questions are pre-set.
Pre-filter to a single store (start with Kariakoo). Three panels:
1. "This Month vs Last Month" -- a comparison metric showing current month revenue vs previous month, with percentage change
2. "Category Trends" -- top 3 growing categories and top 3 declining categories by revenue change
3. "Needs Attention" -- any metric that has moved more than 15% month-over-month, highlighted visually
Use the same metric definitions as the exploratory dashboard. The Kariakoo manager should be able to answer "How is my store doing this month?" within 10 seconds of opening this view.
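The month-over-month comparison and the 15% "needs attention" threshold are simple enough to express as helper functions. A minimal sketch -- the function names are illustrative, not part of Metabase:

```python
def month_over_month(current, previous):
    """Percentage change for the 'This Month vs Last Month' panel.

    Returns None when the previous month is zero, so the panel can show
    'new activity' instead of dividing by zero.
    """
    if previous == 0:
        return None
    return round((current - previous) / previous * 100, 1)


def needs_attention(pct_change):
    """Flag any metric that moved more than 15% month-over-month."""
    return pct_change is not None and abs(pct_change) > 15
```

Note the strict `> 15`: a metric that moved exactly 15% does not get flagged, matching "more than 15%" in the prompt.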
Where you draw the boundary between the exploratory and guided views is a design decision that affects output quality. Too much on the guided view overwhelms the manager. Too little leaves them without the information they need. The three-panel structure is a deliberate constraint.
Step 5: Minimum-data-point awareness
Both dashboards need to handle sparse data honestly. Add minimum-data-point awareness to each view.
For both the exploratory and guided dashboards, add minimum-data-point handling. If a filter combination produces fewer than 20 transactions, show an explanatory message instead of a chart. For the exploratory view, this could be a conditional text panel. For the guided view, add a note below any metric that is backed by sparse data.
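The gating logic itself is small. A sketch of the decision, assuming the panel code receives the matching transactions as a list:

```python
MIN_TRANSACTIONS = 20  # threshold from the prompt above

def panel_payload(transactions):
    """Decide what a panel shows: a chart, or an honest sparse-data message."""
    if len(transactions) < MIN_TRANSACTIONS:
        return {
            "kind": "message",
            "text": (f"Only {len(transactions)} transactions match this "
                     "filter -- too few to chart reliably."),
        }
    return {"kind": "chart", "rows": transactions}
```

In Metabase itself this shape maps to a conditional text card on the exploratory view and a footnote under the sparse metric on the guided view, as the prompt describes.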
Interactivity adds maintenance burden. Every filter combination you support is a combination you need to test. Every drill-down path is a path that might produce sparse data. Designing interactive dashboards means designing for change -- new months of data, new categories, new edge cases in filter combinations.
Step 6: Test every interaction path
Click every store on the exploratory dashboard. Try every filter combination. What happens when you filter to a store, a category, and a month with very few transactions? What happens when school bulk order months are included versus excluded?
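Enumerating the filter combinations first makes "try every filter combination" concrete. With hypothetical filter values (the real dashboard's stores, categories, and months will differ), the path count is just a Cartesian product:

```python
from itertools import product

# Illustrative filter values -- substitute the dashboard's actual widgets.
stores = ["Kariakoo", "Mwenge", "Posta"]
categories = ["Fiction", "Textbooks", "Stationery"]
months = ["2024-03", "2024-04", "2024-05"]

combos = list(product(stores, categories, months))
print(len(combos))  # 27 paths to click through
```

Even three small filters produce 27 paths; every new category or month multiplies the number of combinations you are promising to support.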
Then open the guided view. Pretend you are the Kariakoo manager. Can you answer "How is my store doing this month?" within 10 seconds? Is the "needs attention" flag drawing your eye to the right metric? Does the percentage change make sense?
If a drill-down on the exploratory view leads to a confusing result, fix it. If the guided view requires any data literacy to interpret, simplify it. Starting a fresh session with consolidated context -- the metric definitions, the interaction path map, the test results -- is a valid approach when the current session has accumulated too many threads.
Check: If you were the Kariakoo store manager and opened the guided view, could you answer "How is my store doing this month?" within 10 seconds? Does the same revenue number appear when you check it in the exploratory view?