Learn by Directing AI
Unit 2

The Pipeline

Step 1: What CI/CD is

Until now, every time you deployed something, you ran a command. You decided it was ready, you pushed, it went live. If the tests failed, you found out after deployment -- or you forgot to run them entirely.

A CI/CD pipeline replaces that. CI stands for Continuous Integration -- every code change is automatically tested. CD stands for Continuous Deployment -- code that passes all tests is automatically deployed. The pipeline is a sequence of steps: lint the code, run the tests, build the artifact, deploy it. Each step is a gate. If lint fails, tests don't run. If tests fail, the build doesn't happen. If the build fails, nothing deploys.

The key behavior is blocking. A pipeline that runs tests but deploys anyway when they fail is not a CI/CD pipeline. It's scheduled deployment with extra steps.
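In GitHub Actions, that gating comes from job dependencies: a job that lists another under needs is skipped unless that job succeeded. A minimal sketch, with illustrative job names and commands (the real pipeline is built later in this unit):

```yaml
# Sketch of gate chaining: deploy never runs if test fails.
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - run: npm test        # a non-zero exit code fails this job
  deploy:
    needs: test              # skipped entirely if test fails
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh     # placeholder deploy command
```

Because the gate is the job's exit status, there's nothing extra to configure: any step that exits non-zero fails its job, and everything downstream of it is skipped.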

Step 2: Read the pipeline template

Open materials/pipeline-template.yml.

This is a GitHub Actions workflow skeleton. It has four jobs: lint, test, build, deploy. Each job has runs-on: ubuntu-latest and placeholder steps with TODO comments. The needs keyword chains them -- test needs lint to pass, build needs test to pass, deploy needs build to pass.

GitHub Actions runs this file automatically whenever you push code or open a pull request. The configuration lives in .github/workflows/ -- it's code, versioned alongside the application, reviewed in pull requests just like any other file.
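The trigger section of such a workflow might look like this (the branch name main is an assumption, matching the deploy rule later in this unit):

```yaml
# Sketch of the workflow header: run on every push to main and on
# every pull request targeting main.
name: CI
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
```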

Step 3: Plan the pipeline

Before you ask Claude to generate anything, plan what each step should do:

  1. Lint -- run ESLint. The project already has ESLint configured. If the code has lint errors, stop here.
  2. Test -- run the test suite with Vitest. The project has 15 existing tests. If any test fails, stop here.
  3. Build -- run next build. If the build fails, stop here.
  4. Deploy -- only on pushes to main (not on pull requests). Deploy the built application.

The sequence matters. Lint is fast and catches syntax issues. Tests are slower but catch logic issues. Build catches compilation issues. Deploy only happens if everything else passed. Each step narrows the pool of code that reaches production.
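Translated into commands, the plan maps to roughly one run step per job. This is a fragment, not a complete workflow, and the exact invocations are assumptions -- check package.json for the project's actual scripts:

```yaml
# One illustrative command per planned job (invocations assumed).
lint:
  steps:
    - run: npx eslint .       # step 1: fails on any lint error
test:
  steps:
    - run: npx vitest run     # step 2: single non-watch run of the suite
build:
  steps:
    - run: npx next build     # step 3: fails on compilation errors
```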

Step 4: Direct AI to generate the pipeline

Tell Claude to create the complete GitHub Actions workflow. Be specific about the quality gates:

Using the pipeline template at materials/pipeline-template.yml as a starting point, create a complete GitHub Actions CI/CD pipeline at .github/workflows/ci.yml.

Requirements:
- Lint step must run ESLint and fail the pipeline if there are errors
- Test step must run the Vitest test suite and fail the pipeline if any test fails
- Build step must run next build and fail the pipeline if the build fails
- Deploy step must only run on pushes to main, not on pull requests
- Each step must block on failure -- no continue-on-error
- Cache npm dependencies using the package-lock.json hash

AI commonly gets pipeline configuration wrong in a specific way. The generated pipeline will look complete -- it'll have all four steps, the right commands, proper caching. But check whether failures actually block the next step. AI sometimes adds continue-on-error: true or structures the workflow so test failures are reported but don't prevent deployment.

Step 5: Review the generated pipeline

Open the generated .github/workflows/ci.yml. Check three things:

  1. Does continue-on-error appear anywhere? If it does on any job, remove it. That flag turns a quality gate into a suggestion.
  2. Do the needs dependencies chain correctly? Test needs lint, build needs test, deploy needs build.
  3. Does any step echo or print environment variables? AI sometimes adds debug logging that would expose secrets in CI logs. Remove any echo $VARIABLE or env commands.
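For the first check, this is the shape of the problem. The two versions of the test job below differ by one line, and only the second one actually gates (alternative sketches, not meant to appear in the same file):

```yaml
# BROKEN: test failures are reported, but downstream jobs still run.
test:
  continue-on-error: true   # turns the gate into a suggestion -- delete it
  runs-on: ubuntu-latest
  steps:
    - run: npx vitest run

# FIXED: a failing test fails the job, and jobs that need it are skipped.
test:
  runs-on: ubuntu-latest
  steps:
    - run: npx vitest run
```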

Step 6: Push and verify

Initialize a git repository if one doesn't exist, commit the workflow file, and push to GitHub:

Initialize a git repo in the project directory if there isn't one already. Add the GitHub Actions workflow file. Commit with the message "ci: add CI/CD pipeline with quality gates". Push to GitHub.

Go to the GitHub repository's Actions tab. You should see the workflow running. Watch it execute: lint, then test, then build, then deploy (this was a push to main, so the deploy job runs too). Each step should show a green checkmark when it passes.

Now test the quality gate. Open any test file and add a deliberately failing assertion. Commit and push. Watch the pipeline run again. The lint step should pass. The test step should fail with a red X. The build and deploy steps should not execute at all.

If the pipeline passes despite the failing test, the quality gate is broken. Go back to Step 5 and check for continue-on-error: true or a misconfigured test command.

After verifying the gate works, revert the failing test and push again.

Aminata sees the pipeline working. She messages: "Can the pipeline also run a check on the database -- make sure the new code doesn't break any of the existing tables? Last time a developer renamed a column and half the reports stopped working."

That's the next unit.

✓ Check

Check: Push a commit with a deliberately failing test. Verify the pipeline blocks the build. If the pipeline passes despite the failing test, continue-on-error: true is set on the test step, or the test step doesn't actually run the test suite.