Step 1: Add structured logging
Replace any console.log calls with structured log entries. A structured log entry is JSON with contextual fields: timestamp, log level, request ID, route, method, and business context.
A request ID ties together everything that happens during a single request. Without it, your production logs are a wall of interleaved entries from many users -- searching by timestamp and hoping is not a diagnosis strategy. With it, you filter all log entries for one request and see the full story.
Direct Claude to add a middleware that generates a request ID for every incoming request and attaches it to all log entries:
Add a request ID middleware to the Next.js API routes. Generate a UUID for each request. Include the request ID in every log entry for that request. Use structured JSON logging with fields: timestamp, level, requestId, route, method, message, and any relevant business context (farm name, order ID, etc.).
Adding context to every log entry -- request ID, route, method, operation name -- is the difference between "Error: connection refused" (useless) and a log entry that tells you exactly what was happening, for which request, on which route, when the failure occurred. This context must be added at the middleware level so it's present automatically, not manually added to each log call.
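The middleware pattern above can be sketched in a few lines. This is a minimal illustration, not a specific library's API: the `log` function, `RequestContext` type, and the sample values (`ord_123`, `Finca Alta`) are all placeholders.

```typescript
import { randomUUID } from "crypto";

// Minimal structured-logging sketch: one JSON object per line, with the
// per-request context baked in so individual call sites can't forget it.
type Level = "info" | "warn" | "error";

interface RequestContext {
  requestId: string;
  route: string;
  method: string;
}

function log(
  level: Level,
  ctx: RequestContext,
  message: string,
  extra: Record<string, unknown> = {}
): string {
  const entry = { timestamp: new Date().toISOString(), level, ...ctx, message, ...extra };
  const line = JSON.stringify(entry);
  console.log(line); // one JSON object per line, filterable by requestId
  return line;
}

// In middleware, generate the ID once per incoming request and reuse the
// same context object for every log entry produced while handling it:
const ctx: RequestContext = { requestId: randomUUID(), route: "/api/orders", method: "POST" };
log("info", ctx, "order created", { orderId: "ord_123", farmName: "Finca Alta" });
```

Because the context object is created once and threaded through, every entry for a request shares the same `requestId` automatically.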
Step 2: Review log levels
Check the log levels Claude assigned. This is a judgment call about future diagnosis needs.
A failed form validation? That's warn -- it's expected behavior, the user submitted bad data, and someone looking at logs later wants to know it happened but not be woken up about it. A database connection failure? That's error -- something is broken and someone should investigate. A successful order creation? That's info -- normal operation worth recording.
Review each log statement. Is the level appropriate for what you'd want to see at 2am when something goes wrong?
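The rule of thumb above can be written down as a lookup. The event names here are illustrative, not from the codebase:

```typescript
// Sketch of the level judgment call: expected failures warn, breakage
// errors, normal operations info.
type Level = "info" | "warn" | "error";

function levelFor(event: "validation_failed" | "db_connection_failed" | "order_created"): Level {
  switch (event) {
    case "validation_failed":
      return "warn"; // expected: user submitted bad data; record it, don't page anyone
    case "db_connection_failed":
      return "error"; // broken: someone should investigate
    case "order_created":
      return "info"; // normal operation worth recording
  }
}
```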
Step 3: Review AI's logging output
Check what Claude actually generated. AI produces structured logging with correct JSON syntax but consistently misses the contextual fields that make logs useful in production.
Open the API route files and look at the log entries. Do they include request IDs? Do they include the route and method? When the inventory API logs a database query, does it include which farm was being queried? When the order API logs an allocation, does it include the order ID and the product IDs?
If the logs are syntactically correct JSON but contain only a message field, they're not useful. Direct Claude to add the missing context fields.
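One way to make this review mechanical is a small checker (a hypothetical helper, not part of any SDK) that parses a log line and reports which context fields are absent:

```typescript
// Fields every production log entry should carry.
const REQUIRED_FIELDS = ["timestamp", "level", "requestId", "route", "method", "message"];

// Parse one log line and list the required fields it is missing.
function missingContext(line: string): string[] {
  const entry = JSON.parse(line);
  return REQUIRED_FIELDS.filter((field) => !(field in entry));
}

// A message-only entry -- valid JSON, useless in production -- fails the review:
missingContext('{"message":"query failed"}');
// -> ["timestamp", "level", "requestId", "route", "method"]
```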
Step 4: Set up Sentry
Error tracking is not a logging replacement. Sentry groups identical exceptions, counts how often they occur, captures the stack trace with environment context, and alerts on new error types. When an error happens once, it's a data point. When it happens fifty times in an hour, it's a pattern that needs attention.
Direct Claude to integrate Sentry:
Set up Sentry error tracking for the Next.js application. Install the Sentry SDK, configure it with the project DSN, and add the Sentry middleware. Ensure that API route errors are captured with request context (request ID, route, method).
After setup, trigger a test error -- query a non-existent database table or pass invalid data to a function. Check the Sentry dashboard. The error should appear with a stack trace, the environment (development), and the request context.
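The server-side configuration is a small file; this is a sketch based on the Sentry Next.js SDK, and the exact file names and options may differ depending on your SDK version, so defer to what Claude generates and the Sentry docs:

```typescript
// sentry.server.config.ts -- minimal server-side setup sketch.
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.SENTRY_DSN,          // set in Vercel, never hardcoded
  environment: process.env.NODE_ENV,    // separates staging from production errors
  tracesSampleRate: 0.1,                // sample a fraction of transactions
});

// In an API route's error handler, attach the request context as tags so
// Sentry events line up with your structured logs:
//
//   Sentry.captureException(err, {
//     tags: { requestId, route: "/api/orders", method: "POST" },
//   });
```

With the `requestId` tag in place, you can jump from a Sentry event straight to the matching log entries.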
Step 5: Deploy to staging
Deploy to Vercel. Configure the environment variables for production: the database connection string, the Sentry DSN, and any other secrets.
A deployment failure caused by an undefined environment variable is not a code bug -- it's a configuration mismatch between your local environment and production. Locally, your .env file has everything. On Vercel, you must explicitly set each variable. If you forget one, the application starts and then crashes when it tries to use the missing value.
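A fail-fast check at startup turns this failure mode into a clear deploy-time error instead of a crash on first use. This is a hypothetical helper, not a standard Next.js mechanism:

```typescript
// Return the names of required environment variables that are unset or empty.
function requireEnv(
  names: string[],
  env: Record<string, string | undefined> = process.env
): string[] {
  return names.filter((name) => !env[name]);
}

// Example with a deliberately incomplete environment object:
const missing = requireEnv(
  ["DATABASE_URL", "SENTRY_DSN"],
  { DATABASE_URL: "postgres://localhost/dev" }
);
if (missing.length > 0) {
  // In a real app, throw here so the deploy fails loudly.
  console.warn(`Missing environment variables: ${missing.join(", ")}`);
}
```

Run it once at boot with the real `process.env` and the full list of variables your app needs.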
Deploy to a staging environment first. Test the inventory dashboard -- does it load? Create a test order -- does the API respond with the correct status codes? Check Sentry -- are there unexpected errors?
Step 6: Deploy to production
Deploy to the production URL. Visit the site on your phone. The inventory dashboard should render correctly on a small screen -- farm names readable, processing stages clear, available quantities visible. The order form should work with touch interactions -- inputs large enough to tap, the submit button accessible without scrolling past the edge of the screen.
Step 7: Compare lab and field measurements
Run Lighthouse against the production URL. Check all four categories: performance, accessibility, best practices, SEO.
Compare the Lighthouse results with the DevTools Performance profiler results from Unit 6. Lighthouse is a lab measurement -- it simulates a specific device and network on your machine. Field measurement (real user data from Web Vitals or CrUX) is the ground truth. A site scoring 95 in Lighthouse with a p75 INP of 350ms in the field has a real performance problem that lab testing missed.
If field data is available, compare it. If not, note the distinction -- you'll encounter it in production.
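Field metrics like the p75 INP mentioned above are percentile-based. A quick sketch of computing p75 from a batch of INP samples (milliseconds reported by real users) makes the distinction concrete; the helper name and sample values are illustrative:

```typescript
// 75th percentile of a list of samples: the value that 75% of users
// experienced or beat. Uses the nearest-rank method.
function p75(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.max(0, Math.ceil(0.75 * sorted.length) - 1)];
}

// Collecting the samples in the browser could use the web-vitals library:
//   import { onINP } from "web-vitals";
//   onINP((metric) => sendToAnalytics(metric.value));

p75([120, 180, 90, 350, 400, 95, 210, 160]); // most users are fine; the tail is slow
```

A single Lighthouse run gives one simulated number; the field p75 summarizes the distribution across real devices and networks, which is why the two can disagree.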
Step 8: Share with Marco
Share the deployed URL with Marco. He tests it: checks the inventory dashboard, creates an order, updates its status, confirms the flavour profiles display correctly.
His response: "Good. The flavour profiles show. One thing -- when I'm at the workshop and the internet drops, I lose whatever I was entering. Is there a way to save my work even without internet?"
This is a legitimate concern. Marco's workshop in Coban has unreliable internet. But offline support is a significant feature -- service workers, local storage sync, conflict resolution when reconnecting. It's not in scope for this project.
Communicate the boundary clearly: the system saves as soon as the user submits, but if the connection drops mid-entry, the data typed into the form survives only while the page stays open in the browser -- a reload loses it. Offline data entry and sync would be a feature for a future version. Say what it does, what it doesn't do, and what's possible later. Marco respects directness.
Step 9: Write the README
Write the README for the GitHub repository. Document the full-stack architecture: the database schema and what it stores, the API routes and what they return, the frontend pages and what they display, and how the three layers connect. Include the tech stack, local setup instructions (install dependencies, run migrations, seed the database, start the dev server), and the deployed production URL.
The README is the first thing someone opens when they encounter the project. It should explain the system architecture in a few paragraphs, not just list commands.
Step 10: Push to GitHub
Push the complete repository. It should contain:
- Planning artifacts: PRD, design decisions, architecture document, CLAUDE.md
- Database: Prisma schema, migration files, seed script
- API routes: inventory and order endpoints with validation
- Frontend: inventory dashboard, order form, order status page
- Tests: component tests with React Testing Library
- Configuration: CSP, CORS, CSRF setup
- Observability: structured logging middleware, Sentry integration
- README with architecture documentation and deployed URL
The repository is the deliverable. It represents the full pipeline from Marco's email to a production system.
Check: Visit the production URL. Inventory dashboard loads with data. Create a test order. Check Sentry dashboard -- no unhandled errors. Check Vercel logs -- structured JSON with request IDs. GitHub repo contains planning artifacts, migrations, source code, tests, and README.