Learn by Directing AI
Unit 4

Detail Views and Testing

Step 1: Build the detail view

Yasmine's customers want to examine pieces before buying. "Click on it and see it up close -- the stitching, the clasp, different angles." That's what this component delivers.

The detail view is its own component. When someone clicks a product card, an expanded view opens showing a larger photo, additional close-up shots, leather type and process description, price range, and a "Contact about this piece" link. This is a focused build task. Don't ask Claude to build the detail view and the tests in one request. Build the view first.

Direct Claude:

Build the ProductDetail component for T4. When a user clicks a product card, a detail view opens showing: the main photo at a larger size, 2-3 detail photo thumbnails (close-ups of stitching, clasp, leather texture), the product name, leather type, a short description, the price range in TND, and a "Contact about this piece" link. The detail view should overlay the product grid with the background dimmed. Include a close button with a visible hit target. Use the product data from materials/product-data.json and photos from materials/photos/. Save as src/components/ProductDetail.tsx.

After Claude finishes, open the browser and click a product card. The detail view should appear over the grid with the correct product's information. Click the close button. It should dismiss the detail view and return to the grid. Press Escape. That should also close it. If clicking a card shows the wrong product or nothing at all, the issue is usually in how the selected product's data flows from the grid to the detail component. Check how state is managed before asking Claude to fix it.
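The usual shape of that state flow: the grid owns a "selected product" value, the card click sets it, and the detail view derives everything it renders from it. A minimal sketch of the logic as plain functions -- the names (selectProduct, closeDetail, selectedId) are illustrative, not necessarily what Claude generated:

```typescript
type Product = { id: string; name: string };

type GridState = {
  products: Product[];
  selectedId: string | null; // null means no detail view is open
};

// Clicking a card stores that product's id in the grid's state.
function selectProduct(state: GridState, id: string): GridState {
  return { ...state, selectedId: id };
}

// The close button and the Escape handler both reset the selection.
function closeDetail(state: GridState): GridState {
  return { ...state, selectedId: null };
}

// The detail view looks up its product from the selected id.
function selectedProduct(state: GridState): Product | undefined {
  return state.products.find(p => p.id === state.selectedId);
}
```

If the wrong product appears, trace this chain: is the right id being stored on click, and is the detail view reading from that stored id rather than from some stale value?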

Step 2: Check accessibility

The detail view is an overlay. That means keyboard focus can escape behind it. If someone opens the detail view and then tabs through the page, their focus should stay inside the detail view until they close it. This is called focus trapping.

Test it now. Open a detail view by clicking a product card. Press Tab repeatedly. Focus should cycle through the elements inside the detail view: the close button, the detail photos, the contact link. It should not jump to the product grid behind the overlay. If it does, the detail view needs focus trapping.
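The core of a focus trap is wrap-around Tab logic: given the position of the currently focused element among the overlay's focusable elements, decide which position receives focus next. A sketch of just that decision (a real trap also queries the DOM for focusable elements and calls preventDefault on the Tab keydown -- this function is a hypothetical helper, not code from the project):

```typescript
// Returns the index of the element that should receive focus after a
// Tab (or Shift+Tab) press, wrapping at both ends of the overlay.
function nextFocusIndex(current: number, count: number, shiftKey: boolean): number {
  if (shiftKey) {
    // Shift+Tab from the first element wraps to the last
    return current <= 0 ? count - 1 : current - 1;
  }
  // Tab from the last element wraps back to the first
  return current >= count - 1 ? 0 : current + 1;
}
```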

Check the close button. Does it have an aria-label like "Close detail view"? A screen reader user encountering a button labeled "X" gets no information. An aria-label tells them what the button does.

Check every image in the detail view. The main photo and the close-up thumbnails each need alt text specific to the piece. "Medina Tote -- vegetable-tanned goatskin with hand-stitched seams" tells a screen reader user what they're looking at. "Image" or "product photo" tells them nothing.

If any of these checks fail, direct Claude to fix each issue separately. "Add focus trapping to the detail view" is one request. "Add descriptive aria-label to the close button" is another. Bundling all the fixes into one prompt produces sloppier results than addressing them one at a time.

Step 3: What automated tests are

Up to now, every verification step has been manual. You clicked filter buttons, tabbed through components, read TypeScript interfaces, ran Lighthouse. Those checks work, but they disappear. Every time you change the code, you have to re-perform every check by hand.

A unit test is a written claim about how a function behaves. You write the claim in a test file. A test runner executes the claim and tells you whether it holds. If you change the code and the claim still holds, the test passes. If the change broke the claim, the test fails. The difference: manual checks verify once, automated tests verify every time you run them.

The claim follows a pattern. You call a function with specific inputs and state what the output should be. In Vitest (the test runner already configured in the project scaffold), that looks like this:

import { filterProducts } from './filterProducts';

test('filters products by category', () => {
  const products = [
    { id: '1', name: 'Medina Tote', category: 'bags' },
    { id: '2', name: 'Souk Wallet', category: 'wallets' },
    { id: '3', name: 'Atlas Belt', category: 'belts' },
  ];

  const result = filterProducts(products, 'bags');

  expect(result).toEqual([
    { id: '1', name: 'Medina Tote', category: 'bags' },
  ]);
});

expect(result).toEqual(expected) is the claim. It says: "Given these products and the category 'bags', the result should contain only the bag." If the function returns something different, the test fails and tells you exactly what it expected versus what it received.

Notice the test name: 'filters products by category'. That name is documentation. Someone reading filterProducts.test.ts learns what the function does without opening the implementation file. Well-named tests communicate intent more clearly than comments do.

Step 4: Write tests for the filtering logic

Start with the filtering function. It's pure logic with no UI, which makes it the simplest thing to test.

Direct Claude:

Write tests for the product filtering logic in src/utils/filterProducts.ts. Test these cases: filtering by a specific category returns only matching products, filtering by "all" returns every product, filtering by a category with no matches returns an empty array, and filtering an empty product array returns an empty array. Save as src/utils/filterProducts.test.ts.

Review what Claude produces. The important question is not whether the tests exist, but what they test. AI defaults to testing implementation details. If Claude wrote something like expect(filterFn).toHaveBeenCalled(), that's testing the mechanism, not the outcome. It confirms the function was called, not that it returned the right products. Redirect:

The tests should assert on the returned array contents, not on whether functions were called. Test the output: what should the filtered array contain for each case?

Testing the happy path (expected inputs, expected outputs) is necessary but not enough. Real users provide empty strings, null values, and unexpected types. The edge cases in the prompt above (a category with no matches, an empty product array) exercise the function's boundaries. Thinking about what could go wrong is itself a form of understanding what the function is supposed to do.
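For reference, an implementation that satisfies all four cases could look like this -- a sketch, assuming "all" is the sentinel category; your filterProducts may differ in details:

```typescript
type Product = { id: string; name: string; category: string };

function filterProducts(products: Product[], category: string): Product[] {
  // "all" passes every product through unchanged
  if (category === 'all') return products;
  // A category with no matches falls out naturally as an empty array,
  // as does filtering an empty input array.
  return products.filter(p => p.category === category);
}
```

Notice that two of the four test cases (no matches, empty input) cost the implementation nothing extra -- the tests exist to pin that behavior down so a later rewrite can't silently change it.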

Step 5: Write tests for the product card

Now test the product card component. Component tests check what the component renders, not how it renders internally.

Direct Claude:

Write tests for the ProductCard component. Test that it renders the correct product name, displays the right category, and that the image has a descriptive alt attribute. Save as src/components/ProductCard.test.tsx.

Review the tests. Each assertion should check something visible or meaningful to a user. "Renders the product name" means checking that the text "Medina Tote" appears in the rendered output. "Image has alt attribute" means checking that the alt text is specific to the product, not a generic placeholder.

If Claude wrote tests that check internal state values or assert that specific internal methods were called, those are implementation-detail tests. They break whenever you refactor the component, even if the component still works correctly. Behavioral tests break only when the behavior changes. That distinction matters for every test you write going forward.
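You can see the distinction without any component at all. Below are two implementations of the same filtering behavior -- imagine the second is a refactor of the first. An assertion on the returned array passes for both; an assertion on how the work happened (which internal calls ran, in what order) would distinguish them even though users see no difference. The names here are illustrative:

```typescript
type Product = { id: string; category: string };

// Original implementation: Array.prototype.filter.
function filterA(products: Product[], category: string): Product[] {
  return products.filter(p => p.category === category);
}

// Refactored implementation: an explicit loop. Same behavior.
function filterB(products: Product[], category: string): Product[] {
  const out: Product[] = [];
  for (const p of products) {
    if (p.category === category) out.push(p);
  }
  return out;
}

const products: Product[] = [
  { id: '1', category: 'bags' },
  { id: '2', category: 'belts' },
];

// A behavioral assertion survives the refactor: both produce the same output.
const sameOutput =
  JSON.stringify(filterA(products, 'bags')) ===
  JSON.stringify(filterB(products, 'bags'));
```

A test asserting sameOutput keeps passing across the refactor; a test spying on filter calls would fail against filterB despite identical behavior.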

Step 6: Run the tests and read the output

Run the test suite:

npx vitest run

Read the output. Vitest lists each test file, each test within it, and whether it passed or failed. The test names you reviewed in the previous steps are now visible in the terminal output as a list of claims about your code, each with a green checkmark or a red cross.

If every test passes, your verification artifacts work. The filtering logic does what the tests claim. The product card renders what the tests expect.

If a test fails, read the failure message before doing anything else. Vitest tells you what was expected and what was received. That information is diagnostic. A failure message like "Expected array of length 1, received array of length 3" tells you the filter returned too many products. A message like "Unable to find text 'Medina Tote'" tells you the component didn't render the product name. Read the message, understand what it's saying, then decide whether the test is wrong or the code is wrong. Do not immediately ask Claude to fix it. The failure message is giving you the answer.

✓ Check

Check: Detail view opens on click, closes on Escape. npx vitest run -- all pass. Assertions test behavior, not implementation.