Learn by Directing AI
Unit 5

Delivering the System to Priya

Step 1: Review the full system

Before delivering anything, review what you have built. Walk through the components:

  • The Pipeline with ColumnTransformer -- handling numeric, categorical, and text features in a single structure that prevents leakage by design
  • The transfer learning model -- pretrained language understanding adapted for match-quality prediction
  • The fairness audit -- disaggregated evaluation showing the regional disparity, the investigation into why it existed, and the intervention that reduced it
  • The MLflow experiments -- baseline vs transfer learning comparison, plus pre- and post-intervention fairness metrics
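The first component above can be sketched concretely. This is a minimal illustration of a Pipeline wrapping a ColumnTransformer over numeric, categorical, and text columns -- the column names, the toy data, and the logistic-regression head are assumptions for the sketch, not the project's actual schema or model.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Illustrative column names -- the real schema will differ.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["hourly_rate", "years_experience"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["specialty", "region"]),
    ("txt", TfidfVectorizer(), "bio_text"),  # text column is passed as a single name, not a list
])

model = Pipeline([
    ("preprocess", preprocess),
    ("classifier", LogisticRegression(max_iter=1000)),
])

# Fitting inside the Pipeline means every preprocessing statistic
# (scaler means, TF-IDF vocabulary, category levels) is learned from
# training data only -- that is the leakage-by-design protection.
train = pd.DataFrame({
    "hourly_rate": [52.0, 61.5, 48.0, 70.0],
    "years_experience": [3, 8, 2, 12],
    "specialty": ["ICU", "ER", "ICU", "Pediatrics"],
    "region": ["north", "south", "north", "south"],
    "bio_text": ["icu nights", "er trauma lead", "new grad icu", "peds charge nurse"],
})
y = [1, 1, 0, 1]
model.fit(train, y)
preds = model.predict(train)
```

Because every transform lives inside the one object, cross-validation and deployment both call `model.fit` / `model.predict` and nothing can be preprocessed with statistics from the wrong split.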

Everything should be documented and reproducible. If the session has grown long, consolidating the key decisions into a fresh context helps keep the final deliverables clean.

Step 2: Write the client summary

Priya needs a document she can share with her team and reference when questions come up. This is not a technical report -- it is a communication deliverable written in terms her operations team can act on.

Cover:

  • What the model does -- scores nurse-hospital matches and ranks them
  • How to interpret the scores -- what a high score means, what a low score means
  • The fairness measures in place -- regional parity constraints and the monitoring cadence
  • What to watch for going forward

The going-forward section matters. Priya asked a good question: "How will we know if the bias comes back as we collect more data?" Be honest -- automated monitoring is infrastructure that belongs to future work. For now, the minimum is re-running the disaggregated evaluation periodically on new placement data.
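That periodic check is small enough to sketch in plain Python: compute the headline metric per region instead of in aggregate. The record fields and the accuracy metric here are stand-ins for the project's actual data and metric.

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Accuracy per region -- an aggregate number can hide a gap these rows reveal."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["region"]] += 1
        hits[r["region"]] += int(r["predicted"] == r["actual"])
    return {region: hits[region] / totals[region] for region in totals}

# Illustrative outcomes from new placements.
placements = [
    {"region": "north", "predicted": 1, "actual": 1},
    {"region": "north", "predicted": 0, "actual": 0},
    {"region": "south", "predicted": 1, "actual": 0},
    {"region": "south", "predicted": 1, "actual": 1},
]
by_region = disaggregated_accuracy(placements)
# by_region -> {"north": 1.0, "south": 0.5}: the gap the aggregate would hide
```

If the per-region numbers drift apart again, that is the signal that the bias is returning.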

Step 3: Priya's final review

Share the summary with Priya. She reviews it and gives feedback.

She is satisfied -- the fairness measures address the board's equity policy, and the ranked match list is practical for her team. She asks one more question: how does the team use the ranked list in practice? Walk her through the workflow -- her team reviews the top matches for each position, approves or adjusts, and the placements proceed.
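The workflow she is asking about can be mirrored in a few lines: group scored matches by position, sort by score, and surface a short list for human approval. The field names and scores below are invented for illustration.

```python
def top_matches(scored, k=3):
    """Group scored (position, nurse) rows by position; keep the k highest-scoring."""
    by_position = {}
    for row in scored:
        by_position.setdefault(row["position"], []).append(row)
    return {
        pos: sorted(rows, key=lambda r: r["score"], reverse=True)[:k]
        for pos, rows in by_position.items()
    }

# Illustrative model output: one row per candidate match.
scored = [
    {"position": "ICU-12", "nurse": "A", "score": 0.91},
    {"position": "ICU-12", "nurse": "B", "score": 0.78},
    {"position": "ER-04",  "nurse": "C", "score": 0.66},
    {"position": "ICU-12", "nurse": "D", "score": 0.85},
]
shortlist = top_matches(scored, k=2)
# Her team reviews each shortlist, approves or adjusts, and placements proceed.
```

The model ranks; people decide -- the shortlist is a recommendation, not an automated placement.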

Step 4: Push to GitHub

Write a README that communicates the project's scope, approach, and fairness considerations. The README should cover:

  • What the model does and who it is for
  • The data pipeline (Pipeline with ColumnTransformer handling heterogeneous data)
  • The model architecture (transfer learning for text, scikit-learn for structured features)
  • The fairness audit (what was found, what was done, what the trade-offs are)
  • How to run the system

The Pipeline's structure in the README tells another practitioner exactly what happens to the data. That is the Pipeline as a communication artifact -- not just code that runs, but documentation that explains.

Push everything to GitHub: the Pipeline code, the model or training script, the fairness audit report with disaggregated metrics, the README, and the client summary.

Step 5: Final commit

Review the repository. Commit any remaining changes with a descriptive message.

The project is complete. You built a matching model that handles heterogeneous data through a Pipeline, adapted pretrained language understanding through transfer learning, and discovered -- then addressed -- regional bias that aggregate metrics hid.

✓ Check

Check: The GitHub repository contains: the Pipeline code, the trained model or model training script, the fairness audit report with disaggregated metrics, the README, and the client summary document.

Project complete

Nice work. Ready for the next one?