Learn by Directing AI
Unit 3

Verification

Step 1: The site looks right — but is it?

The site is built. T0 through T5 are done, all three pages working in the browser. The gallery shows Yasmine's leather goods at large size, the process page tells her story, the contact form has three labeled fields. It looks right.

This unit is T6 — Final Verification. The ticket in CLAUDE.md specifies the targets: html-validate must report 0 errors, and Lighthouse must score 90 or above in all four categories.

But "looks right" is a feeling, not evidence. A page can render perfectly in your browser while hiding structural problems that affect search engines, screen readers, and performance on slower devices. A gallery owner in Paris using a screen reader might hear "image, image, image" instead of descriptions of Yasmine's work. A buyer in Milan on a 3G connection might wait ten seconds for the page to load because the photos were never optimized.

You need tools that check what the eye can't. Two tools handle this: html-validate checks the HTML structure against standards, and Lighthouse measures performance, accessibility, best practices, and SEO. Both produce specific, readable findings — not vague impressions.

The targets live in materials/CLAUDE.md, and they aren't suggestions. They're the bar the work has to clear.

Step 2: Run html-validate

html-validate is a command-line tool that checks HTML files against structural rules. It catches things like missing alt attributes on images, broken heading hierarchy (jumping from <h1> to <h3>), and elements used in the wrong context. These are errors that don't break the page visually — the browser renders them fine — but they mean the HTML is saying something different from what it should.
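As a small illustration (the tags and file names here are hypothetical, not taken from Yasmine's actual pages), markup like this renders without complaint but fails validation:

```html
<!-- Renders fine in the browser, but html-validate flags both problems -->
<h1>Gallery</h1>
<h3>Tote bags</h3>          <!-- heading level skips from h1 to h3 -->
<img src="tote.jpg">        <!-- missing the required alt attribute -->
```

Both mistakes are invisible on screen; the validator reports each one with a file, a line number, and a rule name.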

npx is a tool that runs packages without installing them permanently: it downloads html-validate, runs it, and leaves nothing for you to manage afterward. Run it against all three pages:

npx html-validate index.html about.html contact.html

Read the output. Each error line shows a file name, a line number, a description of what's wrong, and a rule name in parentheses. The rule name tells you exactly what standard was violated.

AI commonly generates HTML that looks correct but fails structural validation. Missing alt attributes, <div> elements where semantic elements belong, heading levels that skip from <h1> to <h3> — these are patterns html-validate is built to catch. The browser doesn't complain about them. The validator does.

Step 3: Fix the HTML errors

Now direct Claude to fix what html-validate found. The key: be specific. Don't say "fix the HTML errors." Say what the tool told you.

Look at each finding from the html-validate output. If the tool reported a heading hierarchy violation, tell Claude: "The heading hierarchy skips from h1 to h3 on about.html — fix the heading levels." If it found missing alt attributes, tell Claude: "Add descriptive alt text to every image in the gallery — each alt should describe the specific piece, not just say 'leather bag.'"
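As a concrete example (the file path and product description are hypothetical), here is the difference between a failing and a passing image tag:

```html
<!-- Before: fails validation, and tells a screen reader nothing -->
<img src="images/bag-01.jpg">

<!-- After: valid, and describes the specific piece -->
<img src="images/bag-01.jpg"
     alt="Hand-stitched tan leather crossbody bag with brass buckle">
```

The fix is one attribute, but only a specific instruction gets you the descriptive version rather than a generic `alt="leather bag"` on every image.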

Specific instructions produce specific fixes. Vague instructions produce whatever AI thinks you might mean — which is often a generic cleanup that misses the actual problem.

After Claude makes the changes, run html-validate again:

npx html-validate index.html about.html contact.html

Keep going until the output reports 0 errors. Each cycle tightens the loop: run the tool, read the findings, give specific instructions, verify the fix.

Step 4: Run Lighthouse

Open index.html in Chrome. Open DevTools (right-click anywhere, select "Inspect," or press F12). Find the Lighthouse tab in the DevTools panel — it might be under the >> overflow menu if your DevTools window is narrow.

Lighthouse measures four categories: Performance, Accessibility, Best Practices, and SEO. Each produces a score from 0 to 100. Run a Lighthouse audit on the page.
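If you prefer the terminal, Lighthouse also runs as a command-line tool. A sketch, assuming the site is served locally on port 8080 (the port and output path here are illustrative, not required values):

```shell
# Serve the static files locally (any static server works)
npx http-server . -p 8080 &

# Audit all four categories and write an HTML report
npx lighthouse http://localhost:8080/index.html \
  --only-categories=performance,accessibility,best-practices,seo \
  --output=html --output-path=./lighthouse-report.html
```

The DevTools panel and the CLI run the same audits; the CLI is handy when you want a saved report to compare before and after a fix.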

The four scores at the top are the summary. Below them, individual audit findings explain what each score is made of. A site can score 75 in Performance and 75 in Accessibility for completely different reasons — the number alone doesn't tell you what to fix. The findings do.

Under Performance, you'll see metrics with names like LCP, CLS, and INP. These correspond to what real users experience. LCP (Largest Contentful Paint) measures how long until the biggest visible element renders — for Yasmine's site, that's probably the hero image in the gallery. CLS (Cumulative Layout Shift) measures how much the page layout jumps around during loading. INP (Interaction to Next Paint) measures how fast the page responds when someone clicks or taps. Lighthouse grades these against thresholds: roughly, an LCP of 2.5 seconds or less, a CLS of 0.1 or less, and an INP of 200 milliseconds or less count as good.

These specific metrics change over time as the industry learns more about what matters. INP replaced an older metric called FID (First Input Delay) in March 2024 because FID only measured the first interaction, while INP measures the worst interaction throughout the entire page session. The tools evolve. The principle stays the same: measure what users actually experience.

One thing to keep in mind: Lighthouse runs in a simulated environment with throttled CPU and network. These are "lab data" — controlled conditions. Real users on mid-range phones with slow connections might see very different numbers. A site that scores 95 in Lighthouse can score 60 in the field. Lab data is a starting point, not the full picture.

Look at the Accessibility findings. If any image is missing a descriptive alt attribute, a screen reader user visiting Yasmine's gallery hears "image" instead of a description of the piece. If form fields are missing labels, assistive technology can't tell the user what to type where. These findings connect directly to real people trying to use the site.

Check the Best Practices findings too. AI sometimes generates external resource links — font imports, CDN references — using http:// instead of https://. On a deployed HTTPS site, the browser blocks HTTP resources or shows security warnings. Mixed content breaks the security guarantee.
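A hedged example of the kind of reference Lighthouse flags and its fix (the font family here is hypothetical):

```html
<!-- Flagged: an http:// resource on an https page is mixed content -->
<link rel="stylesheet" href="http://fonts.googleapis.com/css2?family=Lora">

<!-- Fixed: the same resource requested over https -->
<link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Lora">
```

One character sequence — the missing `s` — is the entire difference between a secure page and a browser warning.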

Step 5: Fix the Lighthouse findings

Direct Claude to address the specific findings from the Lighthouse audit. The same rule applies: be specific. AI commonly suggests generic performance fixes like "optimize images" or "reduce JavaScript." Those aren't actionable. The audit findings are.

If Lighthouse flags a large image slowing down LCP, tell Claude exactly what to do: "The gallery hero image is too large — compress it to under 200KB without visible quality loss." If Accessibility is below 90 because of missing form labels, say: "Add visible labels to all form fields on contact.html." If Best Practices flags an HTTP resource, say: "Change the Google Fonts import to use HTTPS."
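If you want to handle the image compression yourself rather than delegating it, one way is ImageMagick — a sketch assuming it's installed, with a hypothetical filename and settings you'd tune by eye:

```shell
# Resize to a sensible display width and recompress
# (1600px and quality 75 are starting points, not fixed values)
magick hero.jpg -resize 1600x -quality 75 hero-optimized.jpg
```

Either way, re-run Lighthouse afterward — the LCP number, not the file size alone, tells you whether the fix worked.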

Every fix should trace back to a specific audit finding. After Claude makes the changes, run Lighthouse again. Check all four categories. The target is 90 or above across the board — Performance, Accessibility, Best Practices, SEO.

This is the loop: run the tool, read the findings, direct AI with specific instructions, verify the fix. The tools tell you what's wrong. You decide what to tell AI. AI makes the change. The tools confirm whether the change worked. That loop is the core of verification — and it works the same way whether you're checking a three-page portfolio site or a production application with a million users.

✓ Check

Run npx html-validate index.html about.html contact.html — 0 errors. Open Lighthouse in Chrome DevTools — Performance >= 90, Accessibility >= 90, Best Practices >= 90, SEO >= 90.