Step 1: The dashboard design checklist
Open materials/dashboard-design-checklist.md. It covers five areas:
- Information hierarchy -- what the stakeholder sees first
- Chart type selection -- matching chart type to data structure
- Accessibility -- readable without colour-only encoding
- Durability -- surviving new data categories
- Readability -- labels, spacing, contrast
This checklist is your verification tool. You will use it after building the dashboard to check your own work. Read it now so the design principles are in your head before you start building.
Step 2: Plan the layout
Before opening Metabase, plan the dashboard on paper or in a text file. What goes where?
Diego's most important question: retention. That metric goes at the top, large, impossible to miss. A CEO checks this dashboard at 7am on Monday. The first thing he sees should answer his most important question before he scrolls.
Below the primary KPI, a row of secondary metrics: active members, average revenue per member, total monthly revenue. These give context without competing with retention for attention.
Below that, charts: a cohort retention table showing 3/6/12-month retention by signup cohort, a location comparison bar chart, and a class attendance breakdown. Filters at the top: location, time period, membership type.
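One way to capture this plan before opening Metabase is a quick text wireframe. The panel names come from the plan above; the box sizes are illustrative only:

```
+----------------------------------------------------------------+
| Filters: [Location] [Time period] [Membership type]            |
+----------------------------------------------------------------+
|                  RETENTION RATE  (large primary KPI)           |
+----------------------------------------------------------------+
| Active members | Avg revenue per member | Total monthly revenue|
+----------------------------------------------------------------+
| Cohort retention table | Location bar chart | Class breakdown  |
+----------------------------------------------------------------+
```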
This layout is a decision about what matters most. AI will not make this decision for you. It will distribute panels evenly in a grid because that is what AI does -- a democratic layout where everything is the same size. That is not a dashboard. It is a spreadsheet with better fonts.
Step 3: Choose chart types
Each chart type matches a specific kind of question:
- Retention by cohort: a table or heatmap. The stakeholder needs precise values (72% retained at 3 months, 58% at 6 months) -- a line chart would show the trend but lose the precision.
- Revenue by location: a bar chart. Locations are categories, not a time series. A bar chart shows categorical comparison cleanly.
- Revenue over time: a line chart. Time series data flows left to right. A bar chart would work but a line shows the trend more clearly.
- Class attendance by type: a horizontal bar chart sorted by attendance count, not alphabetically. Yoga and HIIT will dominate. Sorting by value puts the answer first.
AI commonly selects chart types based on what it generates first rather than what the data and question demand. A pie chart with five slices for class types is technically valid and practically useless for comparison. Check every chart type choice.
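The sorting point is easy to verify at the query level. A minimal sketch, using a hypothetical `class_attendance` table in an in-memory SQLite database (the table and column names are assumptions, not FitPro's actual schema):

```python
import sqlite3

# Hypothetical attendance data -- names and numbers are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE class_attendance (class_type TEXT, attendees INTEGER)")
conn.executemany(
    "INSERT INTO class_attendance VALUES (?, ?)",
    [("Boxing", 120), ("HIIT", 410), ("Pilates", 95), ("Yoga", 480)],
)

# ORDER BY the value, not the label: the answer comes first,
# not whichever class name happens to start with 'B'.
rows = conn.execute(
    "SELECT class_type, SUM(attendees) AS total "
    "FROM class_attendance GROUP BY class_type ORDER BY total DESC"
).fetchall()
print(rows)  # Yoga and HIIT lead the list
```

Feeding a query sorted this way into a horizontal bar chart puts the dominant classes at the top, which is the whole point of the panel.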
Step 4: Build in Metabase
Open Metabase and create a new dashboard. Build each panel using SQL mode -- the same approach as P3. SQL mode gives you visibility into the queries and lets you verify that each panel uses the governed metric definitions from Unit 3.
Build the primary KPI panel first. Large. Above the fold. The retention rate with a clear label.
Then the secondary metrics row. Then the charts. Then the filters.
For each panel, confirm the SQL uses your governed definitions. A panel that calculates retention differently from your definition creates the exact inconsistency problem the metric governance was designed to prevent.
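What "one governed definition" looks like in practice: the retention SQL lives in one place and every panel reuses it. The sketch below is an assumption-heavy illustration -- the `members` table, its columns, and the 90-day threshold are stand-ins for whatever your Unit 3 definitions actually say:

```python
import sqlite3

# Sketch only: table/column names (members, signup_date, churn_date) and the
# 90-day window are assumptions. The point is a single shared definition.
GOVERNED_RETENTION_SQL = """
SELECT
  100.0 * SUM(CASE WHEN churn_date IS NULL
                   OR julianday(churn_date) - julianday(signup_date) >= 90
              THEN 1 ELSE 0 END) / COUNT(*) AS retention_3m_pct
FROM members
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (signup_date TEXT, churn_date TEXT)")
conn.executemany(
    "INSERT INTO members VALUES (?, ?)",
    [
        ("2024-01-01", None),          # still active -> retained
        ("2024-01-01", "2024-06-01"),  # churned at 152 days -> retained at 3 months
        ("2024-01-01", "2024-02-01"),  # churned at 31 days -> not retained
        ("2024-01-01", None),
    ],
)
(retention,) = conn.execute(GOVERNED_RETENTION_SQL).fetchone()
print(f"{retention:.0f}%")  # 75%
```

A panel that writes its own `CASE WHEN` inline, with a slightly different threshold, is exactly the drift the governance work exists to catch.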
Step 5: Apply accessibility
Two of Diego's investors are colourblind. The red/green distinction used for "up" and "down" trends is indistinguishable to about 8% of men. If your dashboard uses red and green as the only way to distinguish positive and negative trends, those investors cannot read it.
Direct AI to replace colour-only encoding with accessible alternatives:
Review every colour-coded element on the dashboard. For each one, add a non-colour indicator: an arrow icon (up/down), a text label (+5.2%, -2.1%), or a pattern. Colour can stay, but it cannot be the only distinguisher.
AI generates colour-coded dashboards as the default because colour is the simplest visual distinction. Catching this and directing the correction is part of the verification work.
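The non-colour indicator is simple to express. A minimal sketch of a trend formatter (the function name and format are illustrative, not a Metabase feature -- in Metabase you would achieve the same effect with trend arrows and signed value labels):

```python
def format_trend(delta_pct: float) -> str:
    """Render a trend with non-colour indicators: an arrow plus a signed label.

    Colour can still be layered on top, but the arrow and the signed
    percentage carry the meaning on their own.
    """
    arrow = "▲" if delta_pct > 0 else "▼" if delta_pct < 0 else "►"
    return f"{arrow} {delta_pct:+.1f}%"

print(format_trend(5.2))   # ▲ +5.2%
print(format_trend(-2.1))  # ▼ -2.1%
```

A colourblind reader gets the same information from the arrow and the sign that a sighted-for-colour reader gets from red versus green.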
Also check: axis labels readable at normal viewing distance? Legend entries untruncated? Sufficient contrast between text and background? These are not cosmetic concerns. A chart with tiny labels is a chart that does not communicate.
Step 6: Test durability
Diego is opening a fifth location next year. What happens to the dashboard when that location appears in the data?
Direct AI to simulate a new location entry:
What happens to the dashboard if a new location ("Alajuela") appears in the FitPro data? Check: do the filters pick it up automatically? Do the charts handle the additional category? Does the layout break with five bars instead of four?
A dashboard that works today and breaks tomorrow is a maintenance problem you are creating for Diego. Design decisions made now -- dynamic filter values, flexible chart widths, colour assignments that accommodate new categories -- determine whether the dashboard survives the business changing.
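The filter half of this check can be simulated directly. A sketch with a hypothetical `revenue` table (location names other than Alajuela are invented for illustration): the durable design derives filter values from a query each time, rather than hard-coding today's four locations.

```python
import sqlite3

# Hypothetical revenue table -- location names and amounts are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revenue (location TEXT, amount REAL)")
conn.executemany("INSERT INTO revenue VALUES (?, ?)", [
    ("San José", 1000), ("Heredia", 800), ("Cartago", 600), ("Escazú", 900),
])

def filter_values(conn):
    # Dynamic: recomputed from the data on every dashboard load,
    # not a hard-coded list of today's locations.
    return [r[0] for r in conn.execute(
        "SELECT DISTINCT location FROM revenue ORDER BY location")]

before = filter_values(conn)
conn.execute("INSERT INTO revenue VALUES ('Alajuela', 0)")  # fifth location appears
after = filter_values(conn)
print(after)  # Alajuela now shows up without any dashboard edit
```

The same principle applies to the charts: a bar chart bound to a `GROUP BY location` query grows a fifth bar automatically, while one built from four fixed series does not.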
Check: Can you identify the single most important number on the dashboard without scrolling? If someone with red-green colour blindness viewed this dashboard, could they distinguish all status indicators?