Learn by Directing AI
Unit 5

The dashboard and leading indicators

Step 1: Plan the dashboard layout

The dashboard serves Siobhan's operations team. A shift supervisor checking it at the start of a shift needs different information from Siobhan reviewing it in a weekly meeting. The layout should follow the team's decision flow.

Plan three levels:

  1. Top level -- OEE as a single summary number. One card, prominently placed. The number that answers "how is production doing overall?" Drill-down from OEE to its three components (Availability, Performance Rate, Quality Rate) for anyone who needs to investigate why.
  2. Second level -- product-line profitability with cost breakdown. Revenue, material costs, production time, waste costs for each of the three product lines. This answers Siobhan's core question about which products are actually making money.
  3. Third level -- leading indicators alongside lagging indicators. PLA moisture content trends with a threshold line. Maintenance frequency by production line. Batch failure rates and waste percentages. The leading indicators let the team act before problems happen. The lagging indicators confirm what already happened.

The hierarchy from Unit 3 is now visual. OEE displayed as a system, not a single number. The dashboard is the metric hierarchy made operational.
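The "OEE as a system" idea reduces to one composite calculation over its three components. A minimal Python sketch, assuming the standard OEE definition (OEE = Availability × Performance Rate × Quality Rate) with illustrative component values, not real production data:

```python
def oee(availability: float, performance_rate: float, quality_rate: float) -> float:
    """Composite OEE: the single top-level number, with its three drill-down components."""
    return availability * performance_rate * quality_rate

# Illustrative values only (not from the exercise data)
availability = 0.90      # (planned_time - downtime) / planned_time
performance_rate = 0.95  # actual output / theoretical output at rated speed
quality_rate = 0.85      # good units / total units -- the bottleneck component

print(round(oee(availability, performance_rate, quality_rate), 3))  # 0.727
```

The point of showing the components alongside the product is exactly what the dashboard does: a 0.727 OEE alone says nothing, but the 0.85 quality rate next to it says where to look.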

Step 2: Create the DuckDB views

The dashboard panels need data. Create DuckDB views that power each panel, combining data from multiple sources.

Direct AI to create the views:

Create DuckDB views for a Metabase dashboard:
1. An OEE view that calculates Availability, Performance Rate, and Quality Rate per production line per month, plus the composite OEE.
2. A profitability view that combines production, sales, procurement, and quality data to show revenue, material costs, production time costs, and waste costs per product line per month.
3. A leading indicators view that shows PLA moisture content and maintenance frequency trends over time, with the 4% moisture threshold flagged.

Include the metric definitions in your prompt. When directing AI to build multi-source views, specifying "Availability = (planned_time - downtime) / planned_time" prevents the AI from inventing its own definition. The context curation you started in Unit 1 pays off here -- the metric definitions, join logic, and freshness constraints all need to be in the AI's context.
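The definitions worth pinning down before prompting can be written out directly. A minimal sketch: the Availability formula is the one given above; the other two use common textbook definitions, which you should confirm against your own metric dictionary, and the argument names are hypothetical:

```python
def availability(planned_time: float, downtime: float) -> float:
    # Definition given to the AI verbatim, so it cannot invent its own
    return (planned_time - downtime) / planned_time

def performance_rate(actual_output: float, theoretical_output: float) -> float:
    # Assumed textbook definition -- verify against your metric dictionary
    return actual_output / theoretical_output

def quality_rate(good_units: float, total_units: float) -> float:
    # Assumed textbook definition -- verify against your metric dictionary
    return good_units / total_units

print(availability(480, 48))  # 0.9
```

Pasting definitions at this level of precision into the prompt is what keeps the three generated views consistent with each other.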

Step 3: Build the dashboard in Metabase

Open Metabase and create the dashboard. Confirm the Docker and Metabase setup from previous projects is still running. If not, direct Claude to restart it.

Build the panels in order:

First, the OEE summary. A large number card showing the overall OEE value. Below it, three smaller cards for Availability, Performance Rate, and Quality Rate. Quality Rate for the food container line should stand out -- it is the bottleneck component.

Second, the product-line profitability panel. A horizontal bar chart or table showing the three product lines with their full cost breakdown. Food containers: highest revenue but lowest profitability once waste and material costs are included.
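The cost breakdown behind this panel is simple arithmetic per product line. A minimal sketch with entirely hypothetical figures (the line names besides food containers and every number are made up to illustrate the pattern: highest revenue, lowest profit once waste and material costs are included):

```python
def line_profit(revenue: int, material_cost: int, time_cost: int, waste_cost: int) -> int:
    """Profitability per product line: revenue minus the three cost buckets."""
    return revenue - material_cost - time_cost - waste_cost

# Hypothetical monthly figures -- not from the exercise data
lines = {
    "food_containers": line_profit(120_000, 55_000, 30_000, 18_000),
    "line_b":          line_profit(80_000, 30_000, 20_000, 4_000),
    "line_c":          line_profit(95_000, 40_000, 25_000, 6_000),
}
for name, profit in sorted(lines.items(), key=lambda kv: kv[1], reverse=True):
    print(name, profit)
```

Sorting by profit rather than revenue is the whole point of the panel: with these numbers, the food container line lands last despite the biggest top line.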

Third, the leading and lagging indicators. Trend lines showing PLA moisture content over time with a threshold line at 4%. Maintenance frequency by production line. Batch failure rate trends. The leading indicators should be positioned where the shift supervisor looks first -- if moisture is climbing toward 4%, they can check calibration before the next batch runs.

Step 4: Siobhan sees the dashboard and pushes for more

Show Siobhan the dashboard. For the first time she sees the OEE breakdown in visual form, and the leading indicator panel: PLA moisture trending over time against the threshold.

Her reaction: "This is exactly what I needed. Can we also connect it to the maintenance schedule? I have a hunch that our Monday morning quality issues are because the weekend shift skips calibration checks."

This is scope creep. The alert system she is asking for -- automatic notifications when PLA moisture exceeds the threshold or when maintenance is overdue -- is a natural extension of the leading indicators. It is also a separate piece of work with its own complexity: alert infrastructure, notification routing, threshold management.

Both responses are valid:

  • Include it: Build a basic threshold check that flags when PLA moisture exceeds 4% and when maintenance frequency drops below a threshold. Not a full notification system, but enough that someone checking the dashboard sees the warning.
  • Push back: "That's a great idea but it's a separate piece of work. Let me finish the current analysis and we can scope that as a follow-up." Siobhan respects pushback: "Grand, if you can show me the threshold on the dashboard, I'll check it myself until we set up something proper."

Either way, the threshold line on the moisture chart is already there. The question is whether to automate the alert or let the team monitor visually for now.
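The "include it" option above amounts to a comparison against the 4% threshold, not a notification system. A minimal sketch of that flag (the reading format is an assumption; a maintenance-frequency check could mirror the same shape):

```python
MOISTURE_THRESHOLD = 4.0  # percent -- the PLA threshold from the text

def flag_moisture(readings):
    """Return the readings at or above the threshold, for a dashboard warning panel."""
    return [(ts, pct) for ts, pct in readings if pct >= MOISTURE_THRESHOLD]

# Hypothetical readings
readings = [("Mon 06:00", 3.2), ("Mon 14:00", 3.8), ("Tue 06:00", 4.1)]
print(flag_moisture(readings))  # [('Tue 06:00', 4.1)]
```

This is deliberately less than alert infrastructure: no routing, no acknowledgement, no threshold management. It only makes the warning visible to whoever is already looking at the dashboard, which is what Siobhan agreed to check herself.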

Step 5: Evaluate the dashboard

Test the dashboard against the audience's workflow. Two questions:

  1. Can the shift supervisor see today's leading indicators at a glance? If the moisture chart is buried below the profitability panel, the person who needs it most will not find it. Consider the layout order.
  2. Can Siobhan drill from OEE to the quality component without help? If the drill-down requires clicking through three levels, she will ask someone to pull the number instead.

Make adjustments based on these checks. The dashboard is a promise to the operations team that they can check their numbers and make decisions. If the information is there but hard to find, the promise is broken in practice.

✓ Check

OEE with drill-down to three components; profitability for three product lines; leading and lagging indicators visible