The Brief
Pemba Sherpa runs Himalaya Horizon Treks, a trekking outfitter in Kathmandu. His team handles everything from permits to lodging for treks to Annapurna, Langtang, and Everest.
His booking system works most of the time. The problem is "most." Every time his developer Raj pushes changes, the system crashes. Last October -- peak trekking season -- it went down for two days. Pemba was taking bookings on paper.
A friend told him Docker could help. Pemba doesn't know what Docker is. He knows his system breaks when changes happen, and he needs that to stop.
Your Role
You containerise Pemba's existing booking application. The React frontend, Express backend, and PostgreSQL database all go into Docker containers. When you're done, the application runs in an environment you control -- identical on your machine, on a test server, and in production.
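One way to picture the end state: each of the three pieces runs as its own container, wired together with Docker Compose. A minimal sketch -- the service names, ports, directory paths, and credentials here are placeholders, not Pemba's actual configuration:

```yaml
services:
  db:
    image: postgres:16.4           # pinned version, not "latest"
    environment:
      POSTGRES_PASSWORD: example   # placeholder -- use a real secret in practice
    volumes:
      - pgdata:/var/lib/postgresql/data   # data survives container restarts
  api:
    build: ./server                # Express backend, built from its own Dockerfile
    environment:
      DATABASE_URL: postgres://postgres:example@db:5432/postgres
    depends_on:
      - db
    ports:
      - "3000:3000"
  web:
    build: ./client                # React (Vite) frontend
    depends_on:
      - api
    ports:
      - "8080:80"
volumes:
  pgdata:
```

Note that the API reaches the database as `db`, not `localhost` -- Compose puts the services on a shared network where each is addressable by its service name. That networking detail is one of the session-scoping concerns mentioned above.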
The planning pipeline continues from before. You still have templates and guides. What changes: you plan the work sequence explicitly before starting, and you decide what context Claude needs for each session. Docker work involves multiple concerns -- the Dockerfile, the application, the database, the networking -- and each session works better when you choose what to include rather than loading everything.
What's New
Last time you extended Marco's system with relational data -- foreign keys, JOINs, cascade decisions. You connected tables and built API endpoints that navigate relationships. You worked with a returning client on an existing codebase.
Two things change.
You containerise an application. Docker wraps the entire environment -- OS, runtime, dependencies, configuration -- into a single unit that runs the same everywhere. When Raj pushes changes, they run in production in the same environment they were tested in, so the live system doesn't go down. If something breaks anyway, you restart the container in seconds instead of losing days.
You plan before you prompt. The sequence matters: backend container before frontend, because the frontend depends on the API. You decide what Claude needs to know for each task and what to leave out. This is the first project where how you direct AI is as important as what you direct it to build.
The hard part is the Dockerfile. AI generates Dockerfiles that use floating version tags, run as root, copy unnecessary files into the image, and skip basic security practices. Every one of those defaults works today and breaks unpredictably tomorrow. You catch them, and you understand why they matter.
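To make those failure modes concrete, here is the shape of a backend Dockerfile that avoids them -- a sketch under assumptions (Node base image, npm, a `server.js` entry point), not Pemba's actual build:

```dockerfile
# Pin the base image to a specific version -- "node:latest" shifts under you.
FROM node:20.17-alpine

WORKDIR /app

# Copy only the dependency manifests first, so the install layer stays
# cached until package.json actually changes.
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Copy the application source. A .dockerignore file (node_modules, .git,
# .env) keeps secrets and junk out of the build context and the image.
COPY . .

# Run as the unprivileged user the Node image ships with, not as root.
USER node

EXPOSE 3000
CMD ["node", "server.js"]
```

Each line counters one of the defaults above: the pinned tag, the cache-friendly copy order, the `.dockerignore`, and the non-root `USER`.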
Tools
- Claude Code -- AI coding agent, VS Code extension. Primary tool.
- Git and GitHub -- version control, remote repo, issues, project board.
- VS Code -- editor. Continuing.
- Docker -- containerisation. New in this project.
- React (Vite) -- the existing frontend. Continuing.
- Express.js -- the existing backend. Continuing.
- PostgreSQL -- the existing database. Continuing.
- Tailwind CSS -- continuing.
- Chrome DevTools -- continuing.
- Vercel CLI -- deployment comparison. Continuing.
Materials
- Pemba's WhatsApp messages -- the problem, in his words. Enough to start, not enough to build. You ask him the rest.
- Starter application -- Pemba's existing booking system. React frontend, Express backend, PostgreSQL database. It works. Your job is to containerise it.
- Docker guide -- Dockerfile syntax, the layer model, base images, build context. Reference, not a tutorial.
- Planning templates -- for updating the PRD, planning the work sequence, and creating tickets.
- Deployment comparison -- Docker vs Vercel. What each gives you, what each costs.
- CLAUDE.md -- project governance file with the ticket list, tech stack, and verification targets.