Learn by Directing AI

The Brief

Wanjiku is back. The no-show rate chart from your last project is pinned to the wall behind reception. Grace printed it. The team knows the numbers now.

But knowing that vaccination follow-ups have the worst no-show rate does not help Wanjiku with tomorrow's schedule. She is looking at 30 appointments and wants to know which ones are likely to be empty. If she knew that, Grace could send extra reminders to the risky ones, or Wanjiku could double-book those slots.

Same clinic. Same data, with three more months added. A fundamentally different question.

Your Role

You build a prediction model. The previous project asked "what are the patterns?" This one asks "what will happen next?" That shift changes everything: how you prepare the data, how you evaluate the result, and what you deliver to Wanjiku.

You direct AI through a structured pipeline: prepare the data, build the model, evaluate it honestly, and translate the results into something Grace can use for Monday's schedule. You write your own prompts this time. The materials tell you what to do at each stage, but not exactly how to tell AI to do it.
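The four stages can be sketched in scikit-learn. This is a minimal illustration under assumptions, not the real solution: the column names (`lead_days`, `appointment_type`, `no_show`) and the synthetic data are hypothetical stand-ins for the actual clinic schema.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data; the real columns come from the data dictionary
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "lead_days": rng.integers(0, 30, n),  # days between booking and visit
    "appointment_type": rng.choice(["vaccination", "checkup", "followup"], n),
})
# Synthetic target: longer lead times mean more no-shows
df["no_show"] = (rng.random(n) < 0.1 + 0.01 * df["lead_days"]).astype(int)

# Stage 1: prepare -- encode categoricals, hold out a test set
X = pd.get_dummies(df[["lead_days", "appointment_type"]], drop_first=True)
y = df["no_show"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Stage 2: build the model on training data only
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Stage 3: evaluate honestly -- per-class metrics on held-out data
print(classification_report(y_test, model.predict(X_test)))

# Stage 4: translate -- a per-appointment risk score Grace can sort by
risk = model.predict_proba(X_test)[:, 1]
```

The shape matters more than the details: a strict train/test separation, per-class evaluation rather than a single accuracy number, and an output (a risk score per appointment) that maps directly onto a decision like "send an extra reminder."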

What's New

Last time, everything was provided: an analysis specification, verification targets, suggested prompts, a report template. You checked AI's output against expected values and caught a wrong denominator.

This time, the suggested prompts and the step-by-step specification are gone. You have a project plan that structures the work into stages, and verification targets that tell you what honest results look like. The rest is you directing AI through each stage.

The hard part is not the model. It is catching the moment when the model looks more accurate than it should. AI has a default that produces impressive-looking results on this kind of data. The verification targets will help you spot it.
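One common way a no-show model can look better than it is: when most appointments are kept, a model that mostly predicts "will show up" posts high accuracy while catching almost no no-shows. A minimal sketch of the check, with an assumed 20% no-show rate (illustrative, not the real figure):

```python
import numpy as np

# Assumed class balance: 80 kept appointments (0), 20 no-shows (1)
y = np.array([0] * 80 + [1] * 20)

# A "model" that always predicts "will show up"
always_show = np.zeros_like(y)

accuracy = (always_show == y).mean()                  # looks impressive
recall_no_show = (always_show[y == 1] == 1).mean()    # the class you care about
print(f"accuracy={accuracy:.0%}, no-show recall={recall_no_show:.0%}")
```

Here accuracy is 80% while no-show recall is 0%. Whatever the actual default turns out to be, comparing any result against a trivial baseline like this is a cheap way to tell "impressive-looking" from "useful."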

Tools

  • Python 3.11+ via your conda "ds" environment
  • Jupyter Notebook for the analysis
  • pandas for data handling
  • scikit-learn for modeling and evaluation (new this project)
  • matplotlib / seaborn for visualization
  • scipy for assumption checking
  • Claude Code as the AI you direct
  • Git / GitHub for version control

Materials

You receive:

  • The extended dataset: 21 months of appointment records (~9,500 rows)
  • A data dictionary describing every column
  • A project plan that structures the prediction pipeline
  • Verification targets for the prediction work
  • A project governance file (CLAUDE.md) for Claude Code

That is less than last time: no analysis specification, no suggested prompts, no report template. The project plan tells you the stages. The verification targets tell you what to check. You direct the work.