Learn by Directing AI

Machine Learning: Track Setup

Complete the platform setup first if you haven't already. You should have a terminal, Claude Code, Git, and a GitHub account ready.


1. Create your track folder

mkdir -p ~/dev/ml
cd ~/dev/ml

2. ML tools: let Claude Code do it

Open Claude Code in your track folder:

claude

Paste this prompt:

I'm setting up a machine learning development environment. Please:

1. Install Python 3.11+ via Miniconda, then create a conda environment called "ml"
2. Install core packages in the ml environment: scikit-learn, pandas, jupyter, mlflow, 
   fastapi, uvicorn
3. Check if Docker is installed. If not, tell me how to install it (it needs admin access)

After each step, verify it worked and show me the result.

Note on Docker: it typically needs administrator access to install. If Claude Code can't install it directly, it will tell you what command to run yourself.

Verify

Once Claude Code finishes:

conda activate ml
python --version
python -c "import sklearn; import pandas; import mlflow; import fastapi; print('All packages installed')"
jupyter notebook --version
docker --version

You should see Python 3.11+, "All packages installed", and version numbers for Jupyter and Docker.


3. Your first look

Everything is installed. Before you start Project 1, see what Claude Code can do when you point it at an ML problem.

Stay in your track folder with Claude Code open, and paste this:

Create a small CSV file with 200 rows of synthetic customer data (age, monthly_spend, 
support_tickets, months_active, churned). Then build a simple churn prediction model: 
load the data, split it properly, train a random forest, evaluate it with a classification 
report, and serve it as a FastAPI endpoint I can test with curl.

In a few minutes, Claude Code will generate a dataset, write a training script, build an API, and show you how to test it. A working ML system from a single prompt.
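For reference, the training portion of such a system might look roughly like this. This is a minimal sketch using the scikit-learn and pandas packages from step 2, not what Claude Code will necessarily produce: the synthetic-data rules and model settings are illustrative assumptions, and the FastAPI layer is omitted.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 200

# Synthetic customer data: churn is loosely driven by many support
# tickets and low spend, plus noise (purely illustrative rules).
df = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "monthly_spend": rng.uniform(10, 200, n).round(2),
    "support_tickets": rng.poisson(2, n),
    "months_active": rng.integers(1, 60, n),
})
score = (df["support_tickets"] - df["monthly_spend"] / 50
         + rng.normal(0, 1, n))
df["churned"] = (score > score.median()).astype(int)

# Hold out a test set before doing anything else with the data.
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="churned"), df["churned"],
    test_size=0.25, random_state=0, stratify=df["churned"])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The one habit worth noticing here: the test set is split off before the model ever sees the data, so the evaluation at the end measures performance on rows the model never trained on.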

As you work through the track, you'll learn why a single prompt isn't enough: why that train/test split might be leaking data, why that evaluation might be misleading, why that API might fail in production, and why that model will degrade over time.
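One of those pitfalls can already be made concrete. If a preprocessing step, say a feature scaler, is fit on the full dataset before splitting, statistics from the test rows leak into training and inflate the evaluation. A hedged sketch of the safe pattern, using scikit-learn's Pipeline on throwaway random data (the features and labels here are pure noise, for illustration only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))     # stand-ins for customer features
y = rng.integers(0, 2, size=200)  # random labels, illustration only

# Leaky pattern: fitting the scaler on all of X before splitting lets
# test-fold statistics influence training.
#   X_scaled = StandardScaler().fit_transform(X)   # then split -> leakage

# Safe pattern: the scaler lives inside the pipeline, so cross-validation
# re-fits it on each training fold only.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])
scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")  # near chance on random labels
```

Projects later in the track dig into why this matters; for now the takeaway is that "split it properly" in the prompt above is doing more work than it looks like.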

But for now, look at what just happened. That's the starting point.


Ready

Start Project 1.