Step 1: Set up the project
Open a terminal, navigate to your dev directory, and start Claude Code.
cd ~/dev
claude
Paste this setup prompt:
Create the folder ~/dev/ml/p7. Download the project materials from https://learnbydirectingai.dev/materials/ml/p7/materials.zip and extract them into that folder. Read CLAUDE.md -- it's the project governance file.
Claude downloads the materials, extracts them, and reads the governance file. Once it finishes, you have a project workspace with the production placement data, a CI/CD workflow template, a drift detection template, and an evaluation suite.
Step 2: Read Priya's message
Priya reached out on Slack. The matching model from P6 is in production and performing well -- placement times are down 40%.
But three hospitals merged their staffing requirements under a new management group, and two hospitals in Coimbatore switched from 8-hour to 12-hour shift patterns. Her operations team says the match scores for those hospitals feel "off" since the changes.
She has two concerns. First: how do they know when the model stops working? Right now the only signal is the team noticing bad matches after placements have already been made. Second: her CTO Ravi wants model updates to go through automated review before going live. No bad model should reach production.
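The automated review Ravi wants is usually implemented as an eval gate: the pipeline runs the evaluation suite against the candidate model and blocks the deploy if any metric falls below a threshold. A minimal sketch, in Python -- the metric names and threshold values here are hypothetical placeholders, not the project's actual eval suite:

```python
# Sketch of an eval gate: check candidate-model metrics against
# minimum thresholds and exit nonzero if any check fails.
# Metric names and values below are hypothetical placeholders.
import sys

def run_eval_gate(metrics: dict, thresholds: dict) -> bool:
    """Return True only if every metric meets its threshold."""
    failures = [
        f"{name}: {metrics.get(name, 0.0):.3f} < {minimum:.3f}"
        for name, minimum in thresholds.items()
        if metrics.get(name, 0.0) < minimum
    ]
    for failure in failures:
        print(f"GATE FAIL {failure}")
    return not failures

if __name__ == "__main__":
    candidate_metrics = {"match_accuracy": 0.91, "top5_recall": 0.88}
    gate_thresholds = {"match_accuracy": 0.85, "top5_recall": 0.80}
    # Nonzero exit makes a CI job fail, which is what keeps a bad model out.
    sys.exit(0 if run_eval_gate(candidate_metrics, gate_thresholds) else 1)
```

Wiring this script into the CI/CD workflow template is what turns "no bad model should reach production" from a policy into an enforced check.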
Step 3: Talk to Priya
Open a chat with Priya. Her message is brief -- she gave you the headline but not the details. Extracting those details is your job.
Ask about the specific hospitals that changed. Which ones merged? What exactly changed about their requirements? What does "off" mean for the match scores -- are they too low, or are the wrong nurses being matched? And what does Ravi mean by "CI/CD" and "eval gates" -- what exactly does he want the system to do?
Priya gives useful detail when you ask specific questions. She defers to Ravi on the CI/CD architecture but is firm about the operational requirement: she does not want to find out about a broken model by seeing bad placements.
Step 4: Plan the work
Before writing any code, plan the work. This project has multiple pieces that depend on each other, and the order matters.
Use plan mode in Claude Code. Ask Claude to plan the full project decomposition: what needs to be built, in what order, and why.
The dependency chain: CI/CD pipeline first, because the deployment infrastructure needs to exist before you can add monitoring to it. Drift detection second, because monitoring checks what the pipeline deploys. Response plan third, because detection without a plan for what to do is noise.
Review the plan. Does the ordering make sense? Are there dependencies Claude missed? Adjust if needed.
Step 5: Review the production data
Open materials/placement-data-production.csv and profile it. Ask Claude to show the shape, column types, and distributions.
This dataset has 600 recent placement records. Compare it to the training data in materials/placement-data-training.csv -- that is the baseline the model was trained on. Look at the hospital IDs, shift patterns, and region distributions. Something has changed.
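The comparison Claude runs might look like the sketch below: line up each categorical column's distribution in training versus production. The column name `shift_pattern` and the tiny inline frames are stand-ins -- in the project you would load the two CSVs named above and use the actual headers:

```python
# Compare a categorical column's distribution between the training
# baseline and recent production data. In the project, replace the
# stand-in frames with:
#   train = pd.read_csv("materials/placement-data-training.csv")
#   prod  = pd.read_csv("materials/placement-data-production.csv")
import pandas as pd

def compare_categorical(train: pd.DataFrame, prod: pd.DataFrame, col: str) -> pd.DataFrame:
    """Side-by-side share of each category in training vs production."""
    return (
        pd.concat(
            [train[col].value_counts(normalize=True).rename("train"),
             prod[col].value_counts(normalize=True).rename("prod")],
            axis=1,
        )
        .fillna(0.0)  # categories present in only one dataset get share 0
        .sort_values("prod", ascending=False)
    )

# Stand-in data: 12-hour shifts rare in training, common in production.
train = pd.DataFrame({"shift_pattern": ["8h"] * 8 + ["12h"] * 2})
prod = pd.DataFrame({"shift_pattern": ["8h"] * 4 + ["12h"] * 6})
print(compare_categorical(train, prod, "shift_pattern"))
```

A category whose production share diverges sharply from its training share is exactly the kind of change the next step asks you to look for.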
The changes are not random. The merged hospitals have standardized their requirements. The Coimbatore hospitals show a different shift pattern. The regional distribution of nurses applying has shifted. These are the signals that a drift detection system would need to catch.
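One common way to turn such a distribution shift into a single number is the Population Stability Index (PSI). The sketch below is one option a drift detection system could use, not necessarily what the project template implements; the sample data and the conventional rule-of-thumb thresholds (roughly 0.1 for moderate shift, 0.25 for significant shift) are assumptions:

```python
# Population Stability Index (PSI) over a categorical feature:
# higher values mean the production distribution has moved further
# from the training baseline. Thresholds ~0.1 / ~0.25 are common
# rules of thumb, not project requirements.
import math
from collections import Counter

def psi(baseline: list, current: list, eps: float = 1e-6) -> float:
    """PSI = sum over categories of (q - p) * ln(q / p)."""
    categories = set(baseline) | set(current)
    base_counts, cur_counts = Counter(baseline), Counter(current)
    score = 0.0
    for cat in categories:
        p = max(base_counts[cat] / len(baseline), eps)  # training share
        q = max(cur_counts[cat] / len(current), eps)    # production share
        score += (q - p) * math.log(q / p)
    return score

# Stand-in data: shift pattern mix moves from 90/10 to 40/60.
training_shifts = ["8h"] * 90 + ["12h"] * 10
production_shifts = ["8h"] * 40 + ["12h"] * 60
print(f"PSI = {psi(training_shifts, production_shifts):.3f}")  # well above 0.25
```

Computed per feature (hospital ID, shift pattern, region), a metric like this gives the drift detector a threshold to alarm on instead of waiting for the operations team to notice bad matches.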
Check: The plan identifies at least three work phases with dependency reasoning, and you can articulate why CI/CD comes before drift detection.