Interactive Dashboard Design Guide

1. Interaction Paths

Every drill-down answers a follow-up question. Map the path before building:

  • Stakeholder sees the overview (all stores, all categories, key metrics)
  • Clicks a store -- sees that store's category breakdown
  • Clicks a category -- sees subcategory detail and trends for that category at that store
  • Each step answers a natural next question rather than just showing more data
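The path above can be sketched as a breadcrumb stack, which makes "can the stakeholder get back to the overview?" a structural guarantee rather than an afterthought. This is a minimal sketch; the state keys (`view`, `store`, `category`) are illustrative assumptions, not a prescribed schema.

```python
class DrillPath:
    """Breadcrumb stack for the overview -> store -> category path."""

    def __init__(self):
        # The overview is always the root, so "back" can never dead-end.
        self._stack = [{"view": "overview"}]

    def drill(self, **selection):
        """Push a deeper view that inherits the current selection."""
        state = {**self._stack[-1], **selection}
        self._stack.append(state)
        return state

    def back(self):
        """Pop one level; the overview itself is never popped."""
        if len(self._stack) > 1:
            self._stack.pop()
        return self._stack[-1]

    @property
    def current(self):
        return self._stack[-1]


path = DrillPath()
path.drill(view="store", store="Oyster Bay")
path.drill(view="category", category="Beverages")
# The deepest view carries the full selection: store AND category.
assert path.current == {"view": "category", "store": "Oyster Bay",
                        "category": "Beverages"}
path.back()
assert path.current["view"] == "store"
```

Because each deeper state inherits the one above it, every drill-down answers a follow-up question in the context the stakeholder already chose.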

Design questions:

  • What follow-up question does each drill-down answer?
  • Does the drill-down path end somewhere useful, or does it dead-end at raw data?
  • Can the stakeholder get back to the overview easily?
  • What happens if a drill-down leads to an empty view?

2. Self-Service Patterns

Two patterns serve different users:

Exploratory: The stakeholder controls all filters and chooses what to drill into. They know what questions they have and want the tools to answer them. This works for users with data literacy -- they understand what a filter does, what a category breakdown shows, and how to read a trend chart.

Guided: The questions are pre-set, filters are pre-applied, and the view answers specific questions without requiring the user to know what to ask. "How is my store doing this month?" is answered by a panel that shows this month versus last month, not by a set of filters the user must configure.
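The guided pattern can be sketched as a function with the filters pre-applied: the user supplies nothing, and the panel returns this month versus last month. The transaction shape (dicts with `store`, `date`, `amount`) is an assumption for illustration.

```python
from datetime import date


def month_on_month_panel(transactions, store, today=None):
    """Guided view: answers "how is my store doing this month?" with no
    user-configured filters. `transactions` is assumed to be a list of
    dicts with 'store', 'date' (datetime.date), and 'amount' keys."""
    today = today or date.today()
    this_month = (today.year, today.month)
    last_month = ((today.year, today.month - 1) if today.month > 1
                  else (today.year - 1, 12))  # January wraps to December

    def total(period):
        return sum(t["amount"] for t in transactions
                   if t["store"] == store
                   and (t["date"].year, t["date"].month) == period)

    current, previous = total(this_month), total(last_month)
    change = (current - previous) / previous * 100 if previous else None
    return {"this_month": current, "last_month": previous,
            "pct_change": change}
```

The exploratory view would expose the same totals behind user-controlled filters; the guided view simply fixes them in advance.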

Design questions:

  • Could the least data-literate person who will use this dashboard answer their question within 10 seconds?
  • If they need to configure filters to get their answer, is that a design failure?
  • Does the guided view answer the three questions the user actually has, or the three questions the analyst thinks are important?

3. Minimum-Data-Point Thresholds

If a filter combination produces fewer than 20 transactions, display a warning rather than a chart; a trend line backed by twelve transactions is not enough to support a conclusion.

The threshold depends on context:

  • Daily operations checks can tolerate smaller samples (a manager checking today's sales is fine with 5 transactions)
  • Monthly trend analysis needs at least 20-30 data points per period to show a reliable pattern
  • Strategic decisions (which categories to invest in) need enough data to distinguish signal from noise
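The context-dependent thresholds above can be encoded in one place so the rule is consistent and the warning explains itself. The daily and monthly values come from this section; the strategic threshold is an assumption to be tuned against your own data.

```python
# Minimum transactions per view type. daily_ops and monthly_trend values
# are from the guidance above; the strategic value is an assumed placeholder.
MIN_POINTS = {
    "daily_ops": 5,
    "monthly_trend": 20,
    "strategic": 100,
}


def render_or_warn(n_points, view_type):
    """Return a render directive, or a warning that says WHY the data
    is insufficient rather than just "not enough data"."""
    threshold = MIN_POINTS[view_type]
    if n_points >= threshold:
        return {"render": True}
    return {
        "render": False,
        "warning": (f"Only {n_points} transactions match these filters; "
                    f"a {view_type.replace('_', ' ')} view needs at least "
                    f"{threshold} to show a reliable pattern."),
    }
```

Keeping the thresholds in a single table answers the consistency question directly: the rule varies by view type, but every view reads from the same source.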

Design questions:

  • What is the minimum number of transactions needed for each type of view?
  • Does the warning explain why the data is insufficient, or just say "not enough data"?
  • Is the threshold consistent across the dashboard, or does it vary by view type?

4. Hover Labels and Readability

Every hover, tooltip, and axis label should use human-readable names:

  • "Net Revenue (TZS)" not "amt_tzs_net"
  • "Oyster Bay" not "store_1"
  • "1,250,000" not "1250000"

Raw column names in hover text are communication failures -- they expose the data structure to users who do not know or care about column naming conventions.
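One way to enforce this is a small translation layer between the data structure and anything the user sees. The lookup tables here are hypothetical stand-ins for a real data dictionary; the number formatting uses Python's built-in thousands separator.

```python
# Hypothetical lookup tables; in practice these come from a data dictionary.
COLUMN_LABELS = {"amt_tzs_net": "Net Revenue (TZS)"}
STORE_LABELS = {"store_1": "Oyster Bay"}


def hover_text(column, store, value):
    """Build hover text from human-readable names and formatted numbers.
    Unknown keys fall back to the raw name -- which should be treated as
    a bug to fix in the lookup table, not an acceptable default."""
    label = COLUMN_LABELS.get(column, column)
    store_name = STORE_LABELS.get(store, store)
    return f"{store_name} -- {label}: {value:,.0f}"


assert hover_text("amt_tzs_net", "store_1", 1250000) == \
    "Oyster Bay -- Net Revenue (TZS): 1,250,000"
```

Routing every label through one function means a fix to the dictionary fixes every hover, tooltip, and axis at once.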

Design questions:

  • If Amina hovers over a bar, does she see language she understands?
  • Are numbers formatted with thousands separators and currency indicators?
  • Are axis labels readable at the dashboard's actual viewing size?

5. Dual-Audience Checklist

For each dashboard element, ask:

  • Could the least data-literate user understand this without help?
  • If not, should this element be simplified or moved to the exploratory view?
  • Does the guided view use the same metric definitions as the exploratory view?
  • If the manager's guided view and the director's exploratory view show different numbers for the same metric, which one is wrong?
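The last question has a structural answer: if both views compute metrics through a single shared registry, they cannot disagree. This is a sketch under assumed row and metric names, not a prescribed implementation.

```python
# One registry of metric definitions, shared by the guided and exploratory
# views. The "net_revenue" definition here is illustrative.
METRICS = {
    "net_revenue": lambda rows: sum(r["amount"] - r.get("refund", 0)
                                    for r in rows),
}


def compute(metric, rows):
    """Both views call this; neither defines its own arithmetic."""
    return METRICS[metric](rows)


rows = [{"amount": 100, "refund": 10}, {"amount": 50}]
# Manager's guided panel and director's exploratory view: same call,
# same definition, same number.
assert compute("net_revenue", rows) == 140
```

When the two audiences see different numbers, the bug is then a data or filter discrepancy, never a second, divergent formula.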