Learn by Directing AI
Unit 7

Deploy and Close

Step 1: Deploy the application

The patient portal works locally -- auth, RBAC, testing, error tracking, and audit logging are all in place. Now it needs to run in production where Lucia's staff can use it.

Production auth configuration is different from development in ways that have direct security consequences. The environment variables you need to set:

  • Auth secrets. Session signing keys, encryption secrets. These must be unique per environment. A development secret reused in production means anyone who knows the dev secret can forge sessions.
  • Database connection. The production database URL with credentials. Not the local PostgreSQL you've been developing against.
  • Sentry DSN. Points error tracking at the production project, not the development project.
  • Callback URLs. If using OAuth or a managed auth provider, the callback URLs must match the production domain. AI commonly generates auth flows that hardcode http://localhost:3000/api/auth/callback -- this doesn't just break in production; it opens an attack surface if the localhost URL isn't properly restricted.
  • Session configuration. Cookie domain, secure flag (must be true in production with HTTPS), sameSite policy.
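The session item above is the one most often left on development defaults. A minimal sketch of production-safe cookie options, assuming a cookie-session setup -- the option names follow common cookie-library conventions, and COOKIE_DOMAIN is an assumed variable name, so match both to your actual auth library and config:

```typescript
// Production-safe session cookie options (sketch, not the project's
// actual config). COOKIE_DOMAIN is an assumed variable name.
const isProduction = process.env.NODE_ENV === "production";

const sessionCookieOptions = {
  httpOnly: true,                    // not readable by client-side JavaScript
  secure: isProduction,              // cookie sent over HTTPS only, in production
  sameSite: "lax" as const,          // blocks the common cross-site POST cases
  domain: process.env.COOKIE_DOMAIN, // the production domain, never localhost
  maxAge: 8 * 60 * 60 * 1000,        // 8-hour sessions (units vary by library)
};
```

Deriving secure from the environment keeps local development working over plain HTTP while guaranteeing the flag is true in production.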

Deploy the application to your hosting provider. After deployment, verify the basics: the login page loads at the deployed URL, HTTPS is active (check the padlock icon), and the environment variables are correctly set.
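Those basics can be checked from a script as well as by hand. A sketch of a post-deploy smoke check -- the URL and the /api/patients path are assumptions, so substitute your deployed domain and a real protected route:

```typescript
// Post-deploy smoke check (sketch). BASE_URL and /api/patients are
// assumptions -- substitute your deployed URL and a real protected route.
const BASE_URL = process.env.BASE_URL ?? "https://portal.example.com";

// An unauthenticated request to a protected route should be rejected
// (401/403) or redirected to login (3xx) -- a 200 means the route leaked.
function isProtected(status: number): boolean {
  return status === 401 || status === 403 || (status >= 300 && status < 400);
}

async function smokeCheck(): Promise<void> {
  if (!BASE_URL.startsWith("https://")) throw new Error("HTTPS is not active");

  const login = await fetch(`${BASE_URL}/login`);
  if (!login.ok) throw new Error(`Login page failed: ${login.status}`);

  const records = await fetch(`${BASE_URL}/api/patients`, { redirect: "manual" });
  if (!isProtected(records.status)) {
    throw new Error(`Protected route responded ${records.status} without auth`);
  }
  console.log("Smoke check passed");
}
```

The last check is the important one: a protected endpoint that answers 200 to an anonymous request is a production-only bug no local login flow will surface.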

If you chose managed auth, the provider or library (Clerk, NextAuth.js) may need its own production configuration -- API keys, webhook endpoints, allowed origins. Check that the provider's dashboard matches your production domain.

Step 2: Run the security cross-review

Before calling this done, run a cross-model security review. This is the culminating verification act -- directing a second AI to review the first's security-critical output.

Open a new Claude session (or use a different AI model). Give it your access control documentation and your codebase. Ask it to verify:

Review this auth implementation against the access control documentation. Check: (1) Are there any API routes without authorization middleware? (2) Are there client-side role checks without matching server-side enforcement? (3) Are session cookies configured with httpOnly and secure flags? (4) Is rate limiting configured on login and registration endpoints? (5) Does audit logging cover all patient record access?

A fresh context catches issues that established context normalises. You've been working in this codebase for six units. You know the middleware is applied because you wrote it. A second model reading the code for the first time checks whether the middleware is actually applied to every route, not just the ones you remember.

The cross-review may find issues -- a route you added late without the middleware, a client-side check you forgot to mirror server-side, an endpoint where the session is checked but the role is not. Fix anything it finds.
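The second and third failure modes share one fix: the authorization decision lives in exactly one server-side function, and the client merely reflects it. A sketch with illustrative role and action names -- the real names come from your access control documentation, not from this example:

```typescript
// Server-side authorization -- the single source of truth that every
// client-side role check must mirror. Role and action names here are
// illustrative assumptions, not the project's actual model.
type Role = "admin" | "clinician" | "front_desk";
type Action = "read_record" | "edit_record" | "manage_schedule";

const permissions: Record<Role, ReadonlySet<Action>> = {
  admin: new Set<Action>(["read_record", "edit_record", "manage_schedule"]),
  clinician: new Set<Action>(["read_record", "edit_record"]),
  front_desk: new Set<Action>(["manage_schedule"]), // schedules, never clinical data
};

function authorize(role: Role, action: Action): boolean {
  return permissions[role].has(action);
}
```

Checking the session proves who the user is; calling authorize proves what they may do. An endpoint that does the first without the second is exactly the gap the cross-review is hunting for.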

Step 3: Run tests against the deployed environment

Run the full test suite against the deployed application. The E2E tests are most important here -- they test the complete auth flow through the production stack with a real browser, real network requests, and real session management.
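One way to point an existing E2E suite at production is through a configurable base URL. A sketch of the relevant playwright.config.ts fragment, assuming a Playwright suite whose tests use relative paths like page.goto("/login") -- the E2E_BASE_URL variable name is an assumption:

```typescript
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    // E2E_BASE_URL is an assumed variable name -- set it to the deployed
    // URL for the production run; local runs fall back to localhost.
    baseURL: process.env.E2E_BASE_URL ?? "http://localhost:3000",
    ignoreHTTPSErrors: false, // production must present a valid certificate
  },
});
```

The same suite then runs unchanged against either environment, which is the point: the tests that passed locally are the ones that must pass through the production stack.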

While the tests run, check the deployed application's JavaScript bundle. Code splitting should ensure that auth-related JavaScript loads only when needed -- the login page doesn't need the patient records component, and the patient records page doesn't need the registration form. Open the browser's Network tab and verify that the initial page load doesn't download the entire application at once.
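If the Network tab shows a component being pulled into bundles where it isn't needed, a dynamic import moves it into its own chunk. A sketch assuming a Next.js app -- the component path is illustrative:

```typescript
import dynamic from "next/dynamic";

// PatientRecords ships as a separate chunk, downloaded only when a page
// actually renders it -- the login page's bundle never includes it.
// The import path is an assumption; point it at your real component.
const PatientRecords = dynamic(() => import("../components/PatientRecords"));
```

Next.js already splits code per route; dynamic() is for the component-level cases, like a heavy records view that only some roles ever see.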

If any tests fail against the deployed environment but pass locally, the cause is almost always environment configuration -- a callback URL pointing to localhost, a missing environment variable, or a session cookie domain mismatch. These are the production auth failures that development environments hide.
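These mismatches are cheaper to catch at boot than mid-test-run. A sketch of a fail-fast startup check -- the variable names are assumptions, so match them to your actual configuration:

```typescript
// Fail fast if production configuration is incomplete, instead of letting
// auth fail mysteriously at request time. Variable names are assumptions.
const REQUIRED_IN_PRODUCTION = [
  "DATABASE_URL",
  "SESSION_SECRET",
  "SENTRY_DSN",
  "AUTH_CALLBACK_URL",
];

function configErrors(env: Record<string, string | undefined>): string[] {
  const errors = REQUIRED_IN_PRODUCTION
    .filter((name) => !env[name])
    .map((name) => `missing ${name}`);
  // The classic leak: a callback URL that never left development.
  if (env.AUTH_CALLBACK_URL?.includes("localhost")) {
    errors.push("AUTH_CALLBACK_URL still points at localhost");
  }
  return errors;
}

if (process.env.NODE_ENV === "production") {
  const errors = configErrors(process.env);
  if (errors.length > 0) throw new Error(errors.join("; "));
}
```

A crash at startup with a named variable beats a login page that silently redirects to localhost.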

Step 4: Push to GitHub and close

Push the final code to GitHub. Write the README with:

  • What was built -- a patient portal with role-based access control for Lucia's clinic network
  • The access control model -- three roles, what each accesses, where enforcement happens
  • The technology decisions -- managed vs custom auth (and why), sessions vs JWT (and why), the verification approach
  • How to run the project -- environment setup, test commands, deployment

The commit history should tell the story of the project. If each ticket was a commit or a branch, the history shows the progression: schema, auth, RBAC, frontend, testing, observability, deployment.

Send the deployed URL to Lucia. She'll log in, navigate the system, and test it with her staff roles in mind. If it works -- and it should, because you've verified adversarially, tested at three layers, tracked errors, and cross-reviewed the security model -- she expresses genuine relief. "This has been a problem for three years."

She may ask a final question about the community health workers or the board reporting. Manage scope closure: acknowledge the requests, note what's been built, and clarify what's deferred for a future version. The project is complete when the portal is deployed, tested, and documented. Feature requests for version two are a sign of success, not incompleteness.

✓ Check

The deployed application is accessible at a URL. Login works. Role-based access is enforced. The E2E tests pass against the deployed environment. The cross-model security review found no unprotected routes.

Project complete

Nice work. Ready for the next one?