Learn by Directing AI

The Brief

Aminata Kone runs operations for a cashew processing company in Korhogo, northern Côte d'Ivoire. Forty-five people work there -- mostly women on the processing floor, shelling, grading, roasting, and packaging cashew kernels for export to Europe and Asia.

The company has a production tracking system. Two outages this season. March 15 -- three hours down, twelve tonnes of cashews untracked. April 2 -- forty-five minutes, during a shipment inspection.

Both caused by the same thing: developers pushing code directly to production. No tests run first. No staging environment. No checks of any kind.

Aminata has heard that a CI/CD pipeline would prevent this. She's also frustrated that her developers keep asking her what the database looks like every time they start working with their AI tools.

Your Role

You add infrastructure to a working system. The production tracking application already exists -- Express backend, Next.js frontend, PostgreSQL database. Your job is to make it safe to change.

The CI/CD pipeline you build is the first automated quality gate in this codebase. Until now, every verification step required someone to remember to do it. The pipeline enforces quality whether anyone remembers or not.
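To make that concrete, here is a minimal sketch of what such a quality gate might look like in GitHub Actions. The job layout, Node version, and npm script names (`lint`, `test`) are assumptions for illustration -- the actual pipeline template you'll fill in may differ.

```yaml
# .github/workflows/ci.yml -- illustrative sketch, not the project template.
# Runs lint and tests on every pull request and every push to main.
name: CI
on:
  pull_request:
  push:
    branches: [main]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci          # install exact locked dependencies
      - run: npm run lint    # ESLint (assumed script name)
      - run: npm test        # Jest/Vitest suite (assumed script name)
```

Paired with branch protection on `main`, this means code that fails lint or tests physically cannot merge -- the "remember to check" step disappears.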

You also connect Claude to the database directly. Right now, you describe the database to AI in every session. After this project, AI reads it for itself.

What's New

Last time you built a museum website from scratch -- rendering strategy decisions, WCAG AA accessibility, custom observability metrics, the whole architecture. You were the architect.

Two things change.

You build infrastructure around existing code. The tracking system already works. You're not building an application -- you're adding the automation and tooling that protect it. This is the first time you've worked with a codebase you didn't write.

You connect AI to real data. Every project until now, you've described your database to Claude in prompts. This project, you connect a PostgreSQL MCP server. Claude reads the schema directly. The difference in output quality is immediate -- and so are the new failure modes.
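As a rough sketch of what that connection involves: an MCP server is registered in a project-level config file, and Claude launches it to query the database. The server name, connection string, and database name below are assumptions for illustration; the MCP setup reference in the materials has the configuration this project actually uses.

```json
{
  "mcpServers": {
    "tracking-db": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/tracking"
      ]
    }
  }
}
```

Once configured, Claude can inspect tables and columns itself instead of relying on your (possibly stale) description of the schema.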

Tools

  • GitHub Actions -- CI/CD pipeline configuration. New.
  • PostgreSQL MCP server -- first MCP connection, connecting Claude to the database. New.
  • Next.js, Express, TypeScript, PostgreSQL, Prisma -- the existing stack. Continuing.
  • Jest/Vitest -- tests that will run in CI. Continuing.
  • ESLint -- linting that will run in CI. Continuing.
  • VS Code + Claude Code -- continuing.
  • Git + GitHub -- continuing.

Materials

  • Existing production tracking system -- a working Next.js + Express + PostgreSQL application with API routes, database schema, seed data, and existing tests. This is the codebase you're protecting.
  • Pipeline template -- a GitHub Actions YAML skeleton with placeholder steps. Structure only -- you fill it in.
  • MCP setup reference -- a brief guide to the PostgreSQL MCP server and where it's configured.
  • Project governance file -- a pre-built CLAUDE.md that gives Claude context about the codebase.
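For orientation, a governance file like the pre-built CLAUDE.md typically records the stack, the commands Claude should run, and the rules it must follow. This is an illustrative sketch -- the section names, commands, and rules below are assumptions, not the contents of the actual file:

```markdown
# CLAUDE.md (illustrative sketch)

## Stack
Next.js frontend, Express backend, PostgreSQL via Prisma.

## Commands
- `npm run lint` -- ESLint
- `npm test` -- test suite

## Rules
- Never push directly to main; all changes go through a pull request so CI runs.
- Schema changes go through Prisma migrations, never raw SQL against production.
```

The point of the file is the same as the pipeline's: context and rules that hold whether or not anyone remembers to state them in a given session.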