Step 1: Start a fresh session
You've been working on the backend container. Before switching to the frontend, check: is Claude still tracking the backend constraints correctly? If it contradicts decisions it made earlier -- mixing up port numbers, forgetting the non-root user requirement, or suggesting changes that conflict with the Dockerfile you already built -- that's context degradation. The session has been running long enough that earlier information is getting pushed out.
The fix is a fresh session with consolidated context. Summarise what you've established:
- The backend container is built and running on port 3000
- It uses `node:20-slim`, runs as a non-root user, and installs dependencies with `npm ci`
- The `DATABASE_URL` connects to the host's PostgreSQL via `host.docker.internal`
- The `.dockerignore` excludes `.git`, `node_modules`, and `.env`
Start a new Claude session with this summary plus the frontend code and Docker guide. Don't carry the full conversation history -- carry the decisions.
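To make "carry the decisions" concrete, the opening message of the new session might look something like this — a sketch to adapt, not a required template:

```text
Context from the previous session (decisions, not history):
- Backend container built and running on port 3000
- Base image node:20-slim, non-root user, dependencies installed with npm ci
- DATABASE_URL points at the host's PostgreSQL via host.docker.internal
- .dockerignore excludes .git, node_modules, .env

Task: write a Dockerfile for the React frontend.
Apply the same constraints as the backend. Ask before suggesting
any change to the backend setup.
```

Notice that the message is a few lines of settled facts plus a task, not a transcript. That is the whole point of the fresh session.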
This is the first time you've done this deliberately. In previous projects, context degradation may have happened without you noticing. Now you're diagnosing it and responding to it. The variable that determines output quality is the context you provide, not the model you're using.
Step 2: Write the frontend Dockerfile
Direct Claude to write a Dockerfile for the React frontend. The frontend uses Vite -- it needs `npm run build` to produce static files, then a simple server to serve them.
Curate context for this session: the frontend's `package.json`, `vite.config.js`, `.env.example`, and the Docker guide. Exclude the backend code and the backend Dockerfile -- they're done.
Apply the same constraints: pinned base image, non-root user, `npm ci`, `.dockerignore`. The frontend Dockerfile has an additional concern: the build step. `npm run build` compiles the React code into static files. Those files are what the container serves. The build tools (vite, react, all the dev dependencies) are needed during the build but not at runtime.
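A multi-stage Dockerfile is the standard way to keep build-time dependencies out of the runtime image. Here is a sketch of the shape to expect -- the static server (`serve`) and the output directory (`dist`) are assumptions to check against your project, not prescriptions:

```dockerfile
# Build stage: dev dependencies exist here and are discarded afterwards
FROM node:20-slim AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: only the compiled static files and a minimal server
FROM node:20-slim
WORKDIR /app
RUN npm install -g serve
COPY --from=build /app/dist ./dist
USER node
EXPOSE 5173
CMD ["serve", "-s", "dist", "-l", "5173"]
```

The final image never contains vite, react, or the rest of `node_modules` from the build -- only `dist` and whatever serves it.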
Build and run the frontend container:
```shell
docker build -t himalaya-frontend ./frontend
docker run -d -p 5173:5173 himalaya-frontend
```
Open the browser and verify the frontend loads. You should see the Himalaya Horizon Treks page with trek listings.
Step 3: Connect the containers
The frontend needs to reach the backend API. The backend needs to reach PostgreSQL. Right now, each container runs in isolation.
Create a Docker network so the containers can find each other:
```shell
docker network create himalaya-net
```
Stop both containers and restart them on the network:
```shell
docker run -d --name himalaya-backend --network himalaya-net -p 3000:3000 \
  -e DATABASE_URL=postgres://username:password@host.docker.internal:5432/himalaya_treks \
  himalaya-backend

docker run -d --name himalaya-frontend --network himalaya-net -p 5173:5173 \
  -e VITE_API_URL=http://himalaya-backend:3000 \
  himalaya-frontend
```
On the same Docker network, containers can reach each other by name. The frontend reaches the backend at http://himalaya-backend:3000 -- Docker's DNS resolves the container name to its IP address.
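Once the manual `docker run` commands work, the same setup can be captured in a Compose file so the whole stack starts with one command. This is a sketch using the names and values from the commands above -- verify the details against your own setup:

```yaml
# docker-compose.yml -- sketch; reuses the container names so the
# http://himalaya-backend:3000 URL keeps resolving via Docker DNS
services:
  backend:
    image: himalaya-backend
    container_name: himalaya-backend
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://username:password@host.docker.internal:5432/himalaya_treks
  frontend:
    image: himalaya-frontend
    container_name: himalaya-frontend
    ports:
      - "5173:5173"
    environment:
      VITE_API_URL: http://himalaya-backend:3000
```

Compose creates a network for these services automatically, so the explicit `docker network create` step disappears.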
Step 4: Verify the full stack
Open the booking application in your browser. Navigate through the trek listings. Open a trek detail page. Fill out a booking form and submit it. Check the bookings page.
The application should work end-to-end. If the frontend can't reach the backend, the trek listings won't load. If the backend can't reach the database, the API returns errors. Each connection point is a verification target.
The database schema -- treks, bookings, foreign keys -- works the same inside a container as outside. The relational model doesn't change because the application runs in Docker. That's a pattern: the concepts transfer. What changes is the environment around the application, not the application's logic.
Step 5: Check the logs
Check the backend container's logs:
```shell
docker logs himalaya-backend
```
Look at the log entries. Do they include the structured information from Winston -- request ID, method, route, status code, duration? If the logs show plain `console.log` output instead of JSON-structured entries, the logging middleware isn't working correctly in the containerised environment.
The structured logging you set up in previous projects should work the same inside the container. If it doesn't, something about the container's environment (environment variables, working directory, module resolution) is different from what the application expects. That's a real bug to fix, not a cosmetic issue -- when Pemba's system has problems in production, these logs are how you diagnose them.
Send Pemba a message showing the booking system running in containers. He doesn't need to understand containers. He needs to know: "When Raj pushes changes, the live system won't go down." Show him the result -- the same application, running reliably.
Pemba's response will be practical. He cares about reliability, not technology. He'll ask about recovery: "If it breaks, how fast can you get it back?" That's the next unit.
Check: Can you book a trek through the containerised application? Do the backend logs inside the container show structured log entries with request context? If you stop and restart the backend container, does the application recover?