Step 1: Build the custom work section
Yasmine gets emails every week asking the same questions: What kinds of custom work do you do? How long does it take? What materials do you use? How do I start? The custom work section answers all of them in one place.
Open materials/custom-work-brief.md. This document has the raw details: types of custom work, the process from consultation to delivery, materials she uses, and how to get in touch. Your job is translating this reference material into site content that sounds like Yasmine's workshop, not like a generic artisan template.
This is a content-heavy component with no complex interactivity: a focused build task.
Direct Claude:
Build the CustomWork component for T5 using the information in materials/custom-work-brief.md. The section should cover: types of custom work (bags, wallets, accessories, restoration), Yasmine's process (consultation, material selection, 4-8 weeks production), materials she uses (vegetable-tanned goatskin and cowhide, solid brass hardware, waxed linen thread), and how to start (contact form or email). Include a simple contact form with name, email, and message fields. Style consistently with the rest of the site using Tailwind. Save as src/components/CustomWork.tsx.
After Claude finishes, open the browser and scroll to the custom work section. Read through it. The content should feel specific to Yasmine's workshop. If it reads like it could describe any leather artisan anywhere, the content needs work. Details matter: "vegetable-tanned goatskin" is specific. "High-quality leather" is not.
Step 2: Review against what Yasmine would expect
Would Yasmine recognize her own process in this description? Read the section as if you were her. A few things to check:
Does the process section mention a consultation step? Yasmine always meets with custom clients before quoting a timeline or price. If the section jumps straight from "contact us" to "4-8 weeks production," it skipped the step where she figures out what the client actually wants.
Does the pricing section handle her approach correctly? Yasmine quotes per piece after the consultation. She does not publish fixed prices. If the section shows a price list or a "starting from" number, that misrepresents how she works.
Does the materials description name specific things? Waxed linen thread for hand-stitching. Solid brass hardware. Vegetable-tanned goatskin and cowhide. These details are what separate her from mass production. If Claude generalized them into "premium materials," the section lost its identity.
Share a preview with Yasmine. She reviews it and responds: "The timeline is right but you should mention that I do a consultation first. I need to see what they want before I give a timeline." If the consultation step was already there, good. If not, update the section. This kind of correction is normal. The brief had the information, but the PRD might not have captured it fully. Receiving feedback and acting on it is part of the work.
Step 3: What XSS is and why it matters here
The contact form takes user input: name, email, message. That input is untrusted. Not because Yasmine's customers are malicious, but because "untrusted" means any value from outside the system. Every form field, every URL parameter, every piece of data that a user provides must be validated and sanitized before use. That is the baseline.
Cross-Site Scripting (XSS) is what happens when untrusted input gets rendered as executable code in another user's browser. Someone types a script tag into the message field. If the code renders that input directly into the page without sanitization, the script runs in the next person's browser. That person's session, their data, and their trust in the site are compromised.
AI-generated frontend code often interpolates user input directly into the DOM. Patterns like innerHTML and dangerouslySetInnerHTML are common in training data, so they show up in output. The code works perfectly and is insecure at the same time. There is no error, no warning, no red underline. The vulnerability is invisible until someone exploits it.
React helps here. Its default rendering behavior escapes content automatically. But the moment someone uses dangerouslySetInnerHTML to bypass that protection, the safety net disappears. The name includes "dangerously" for a reason.
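To see what that automatic escaping does conceptually, here is a simplified sketch of HTML entity escaping. This is illustrative only; React's actual implementation differs, and you should not hand-roll escaping when the framework already provides it:

```typescript
// Simplified sketch of HTML escaping, the kind React performs by default
// when it renders text content. Illustrative only.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")   // must run first, or later entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// A script tag in a message field becomes inert text, not executable markup
const safe = escapeHtml("<script>alert(1)</script>");
// safe is "&lt;script&gt;alert(1)&lt;/script&gt;"
```

This is why the default path is safe: the browser displays the characters instead of parsing them as markup. dangerouslySetInnerHTML skips this step entirely.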
Step 4: Add security headers with Helmet.js
Sanitizing input is one defense. Security headers are another. They work at a different level: instead of cleaning individual inputs, they instruct the browser to prevent entire categories of attacks. Input sanitization and security headers are complementary but distinct strategies. Relying on only one creates gaps.
Security headers are server-level instructions. X-Content-Type-Options: nosniff prevents the browser from guessing file types (which can turn an uploaded text file into executable code). X-Frame-Options: SAMEORIGIN prevents other sites from embedding yours in an iframe (clickjacking). Referrer-Policy controls what URL information gets shared when users click links to external sites.
Helmet.js is Express middleware that sets these headers for you. Direct Claude:
Add Helmet.js to the project for T5. Configure it as middleware so that every response includes security headers. After setup, verify that the following headers appear in the response: X-Content-Type-Options, X-Frame-Options, and Referrer-Policy.
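The resulting setup is typically a few lines of middleware configuration. A minimal sketch, assuming an Express server (the filename and port are assumptions; your project layout may differ):

```typescript
// server.ts (hypothetical filename) -- minimal sketch assuming Express
import express from "express";
import helmet from "helmet";

const app = express();

// helmet() applies a default set of security headers to every response,
// including X-Content-Type-Options, X-Frame-Options, and Referrer-Policy.
// It must be registered before the routes it should protect.
app.use(helmet());

app.use(express.static("dist"));
app.listen(3000);
```

The important detail is ordering: middleware applies only to responses generated after it is registered, so `app.use(helmet())` belongs near the top.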
After Claude finishes, open Chrome DevTools (F12), go to the Network tab, and reload the page. Click the main document request in the request list. Under Response Headers, look for the security headers. You should see at least X-Content-Type-Options: nosniff, X-Frame-Options: SAMEORIGIN, and a Referrer-Policy value. Each one is a server instruction that prevents a specific attack category without changing any application code.
These headers do not matter locally the same way they matter in production. On localhost, nobody is trying to iframe your site or sniff your MIME types. But when Yasmine's site is live on the open internet, these headers protect real users from attacks they will never know about.
Step 5: Check input handling and image optimization
Check the contact form code. Open the CustomWork component and look at how the form handles user input. Is there any innerHTML or dangerouslySetInnerHTML? These patterns render raw HTML strings directly into the DOM, bypassing React's built-in escaping. If Claude used either one, that is a security vulnerability, even if the form appears to work correctly.
Look at the form submission handler too. Does it validate that the email field contains something that looks like an email? Does it check that required fields are not empty? Input validation is not the same as sanitization, but both are necessary. Validation catches obviously wrong data early. Sanitization prevents malicious data from doing damage.
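A validation check before submission might look like this sketch. The shape of the form data is an assumption; adapt it to the actual component. Note that client-side validation improves the user experience but is not a security boundary: the server must validate independently.

```typescript
// Sketch of client-side validation for the contact form fields.
// The ContactForm shape is an assumption; match it to the real component.
interface ContactForm {
  name: string;
  email: string;
  message: string;
}

function validateContactForm(form: ContactForm): string[] {
  const errors: string[] = [];
  if (form.name.trim() === "") errors.push("Name is required.");
  if (form.message.trim() === "") errors.push("Message is required.");
  // A loose shape check, not full RFC 5322 validation: it catches
  // obviously wrong input like a missing @ or domain.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(form.email)) {
    errors.push("Email does not look valid.");
  }
  return errors;
}
```

A handler would call `validateContactForm(formState)` and block submission while the returned array is non-empty.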
Now check the images across the site. Images are typically the largest assets on a web page. Serving a 3MB photo when a 150KB version at the correct display dimensions would look identical is a waste of bandwidth that Yasmine's customers pay for on mobile data.
Check the product images for srcset and sizes attributes. Without them, every device downloads the largest version. A phone with a 375px screen downloads the same image as a 27-inch monitor. srcset lets the browser choose the right size for the device.
Check the hero image or any large image above the fold. Does it have loading="lazy"? That is an anti-pattern for above-the-fold images. Lazy loading delays the largest element on the page, which is the LCP (Largest Contentful Paint) element. The hero image should load immediately.
AI generates image markup with optimization attributes when asked, but applies them as patterns without analyzing the specific page. It adds loading="lazy" to every image, including the one that should load first. It adds srcset breakpoints that may not match the layout widths. Check each image against what the layout actually needs.
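Correct markup for a product image below the fold might look like this sketch (file names, widths, and breakpoints are assumptions; match them to the actual layout):

```html
<!-- The browser picks the smallest candidate that satisfies the display size -->
<img
  src="/images/tote-800.jpg"
  srcset="/images/tote-400.jpg 400w,
          /images/tote-800.jpg 800w,
          /images/tote-1600.jpg 1600w"
  sizes="(max-width: 640px) 100vw, 50vw"
  alt="Hand-stitched leather tote in vegetable-tanned cowhide"
  loading="lazy"
/>
<!-- The above-the-fold hero image should NOT use loading="lazy".
     Leave loading at its eager default and consider fetchpriority="high". -->
```

The `sizes` attribute must describe the layout, not the image: here it claims the image spans the full viewport on small screens and half of it otherwise. If that does not match the actual CSS, the browser picks the wrong candidate.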
Check any logging in the codebase. If Claude added console.log statements, those are unstructured noise in production. A console.log("form submitted") tells no one anything useful. If logging is needed, it should use structured entries with timestamps, severity levels, and context. Log levels exist so you can filter by severity. A system that logs everything at the same level is a system where finding real problems means reading everything.
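If structured logging is warranted, a minimal sketch looks like this (the function name and fields are assumptions, not something Claude necessarily generated; shown to contrast with bare console.log):

```typescript
// Sketch of a structured logger with severity levels.
// Each entry is one JSON line: machine-filterable, with timestamp and context.
type Level = "debug" | "info" | "warn" | "error";

function log(
  level: Level,
  message: string,
  context: Record<string, unknown> = {}
): string {
  const entry = JSON.stringify({
    timestamp: new Date().toISOString(),
    level,
    message,
    ...context,
  });
  console.log(entry);
  return entry;
}

// Compare: console.log("form submitted") vs.
log("info", "contact form submitted", { formId: "custom-work" });
```

Because every entry carries a `level`, production tooling can filter to warnings and errors instead of reading everything.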
Run html-validate:
npx html-validate dist/index.html
Fix any structural issues. Then run Lighthouse in Chrome DevTools and check all four categories. Fix findings until all categories reach 90 or higher.
Step 6: Self-review for security
In Unit 2, you directed Claude to self-review the planning documents against the PRD. Now extend that technique to code, specifically to security patterns.
Direct Claude:
Review the codebase for any places where user input is rendered without sanitization. Check for innerHTML, dangerouslySetInnerHTML, and any direct DOM manipulation that bypasses React's default escaping. Also check that form inputs are validated before use. List every finding.
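Alongside Claude's self-review, a quick mechanical search catches the obvious sinks. This does not replace the judgment-based review, but it verifies the simple cases independently (the `src/` path is an assumption):

```shell
# List every .tsx file that renders raw HTML strings into the DOM.
# Prints a note instead of failing when nothing is found.
grep -rn --include='*.tsx' -E 'dangerouslySetInnerHTML|innerHTML' src/ \
  || echo "no raw HTML sinks found"
```

If Claude's review and the grep disagree, trust neither until you have read the code in question yourself.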
Read the findings. AI self-review with a specific prompt produces useful results. "Review what you built against the acceptance criteria and list every gap" works. "Does this look right?" produces confident reassurance with no substance. The specificity of the prompt determines the quality of the review.
If Claude finds any issues, fix them one at a time. Each fix is its own prompt. Bundling multiple security fixes into one request produces sloppier results than addressing them individually.
This self-review is different from the verification you have done so far. Lighthouse checks performance and accessibility against a known rubric. html-validate checks structure against a spec. But "are there security vulnerabilities in this code?" has no automated tool at this level. You are directing Claude to check something that requires judgment, not just comparison. That is a harder form of verification, and it depends entirely on asking the right question.
Check: Custom work section accurate. Helmet.js configured. No innerHTML/dangerouslySetInnerHTML. html-validate: 0 errors. Lighthouse: all >= 90.