Learn by Directing AI
Unit 6

The Report

Step 1: Read the Report Template

Open materials/report-template.md. This is an expanded structure compared to P1's single-finding format. The template has sections for an executive summary, a findings summary table with priority rankings, detailed per-finding sections with evidence and verification, hardening actions, and forward-looking recommendations.

The structure reflects the scope of the assessment. P1 had one finding and one fix. This report covers multiple findings across different vulnerability types, each with distinct remediation and verification evidence. The template handles that complexity, but you fill it with the specific details from Ruta's assessment.

Step 2: Draft the Executive Summary

The executive summary is the first thing Ruta reads. It answers her original question: did someone actually get into her system, or was the phishing email just a copy of her branding?

Direct Claude to draft it:

Draft the executive summary for the assessment report. Address Ruta's core question from her email -- whether someone accessed her customer data or just copied her branding. Summarize the assessment scope, the findings at a high level, and the current state of the system after remediation. Write in language Ruta would understand -- she knows amber and customer service, not cybersecurity.

Review what Claude produces. AI report drafts commonly default to technical language -- "reflected XSS vulnerability in the search endpoint with insufficient output encoding." Ruta needs to understand what this means for her customers. The executive summary should tell her: what was at risk, what was found, what was fixed, and what she should do next.

Step 3: Populate the Findings Section

Each finding needs its own section in the report. The findings table at the top should list them in priority order -- the same ordering you used for remediation in Unit 5, not the discovery sequence from Unit 3.

Populate the findings section of the report. For each confirmed finding, include: what was found, why it matters for Ruta's customers, what was done to fix it, and evidence that the fix works. Order findings by priority -- highest risk first. Include the ATT&CK technique mapping for each.

Check the ordering. AI commonly lists findings in the order the tools reported them -- SQL injection first because sqlmap ran first. The report should lead with whatever affects Ruta's customers most directly. A stored XSS that executes for every visitor to a product page is more urgent than a missing security header, even if the header finding was discovered first.
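The priority reordering can be sketched as a simple sort over the findings data. The finding names, severity scores, and ATT&CK technique IDs below are illustrative placeholders, not values from Ruta's actual assessment -- verify any technique mapping against the MITRE ATT&CK site before it goes in the report.

```python
# Illustrative only: placeholder findings, severity scores, and ATT&CK
# technique IDs -- not values from Ruta's actual assessment.
findings = [
    {"name": "Missing security headers", "severity": 4.3, "attack": None},
    {"name": "SQL injection in product search", "severity": 9.1, "attack": "T1190"},
    {"name": "Stored XSS on product pages", "severity": 8.0, "attack": "T1059.007"},
]

# Lead with the highest-risk finding, not the first one a tool flagged.
by_priority = sorted(findings, key=lambda f: f["severity"], reverse=True)

for rank, f in enumerate(by_priority, start=1):
    technique = f["attack"] or "n/a"
    print(f"{rank}. {f['name']} (severity {f['severity']}, {technique})")
```

The same principle applies however severity is scored: the table's order is a prioritization decision, not a side effect of which tool ran first.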

If the session has been long, check whether Claude's findings section is consistent with the evidence collected in earlier units. Context degradation can cause AI to describe findings differently than they were originally documented, or to omit details that were recorded hours ago.

Step 4: Check the Hardening Section

The hardening actions -- security headers and information disclosure removal -- go in their own section. Each entry should explain what the hardening prevents in terms Ruta would understand.

Write the hardening section. For each security header and information disclosure fix, explain: what was added or removed, what class of attack it prevents, and how it was verified. Write it so Ruta understands why each change matters for her customers' safety.

Ruta does not need to know the HTTP specification. She needs to know that the server was announcing its exact software version to anyone who asked, and now it is not. She needs to know that the shop was vulnerable to being embedded in a fake page on another website, and now it is not.
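One way to back the verification claims in this section is a mechanical diff of the response headers against a checklist. The specific header sets below are an assumption for illustration -- adjust them to match what the hardening actually deployed.

```python
# Headers the hardened server should send, and headers it should not.
# (Real HTTP header names are case-insensitive, so normalize first.)
REQUIRED = {"x-frame-options", "x-content-type-options", "content-security-policy"}
DISCLOSING = {"server", "x-powered-by"}

def check_hardening(headers):
    """Given a dict of response headers, report required headers that
    are missing and disclosing headers that are still present."""
    names = {name.lower() for name in headers}
    return {
        "missing": sorted(REQUIRED - names),
        "disclosing": sorted(DISCLOSING & names),
    }

# A response captured before hardening: version banner, no protections.
before = {"Server": "Apache/2.4.41", "Content-Type": "text/html"}
print(check_hardening(before))
```

Running the same check against before-and-after captures gives the report its verification evidence in one line per fix.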

Step 5: Write Recommendations

Recommendations look forward. The assessment found and fixed specific problems, but Ruta's shop needs ongoing maintenance.

Write the recommendations section. Include: updating the WordPress plugins Tomas has not touched in six months, establishing a regular update schedule, reviewing the staging site's access controls, and considering a follow-up assessment for any new integrations. Keep the language actionable -- tell Ruta what to do, not what concepts to understand.

Two audiences emerge in this section. Ruta reads the business-level recommendations -- what to prioritize, what to budget for, what questions to ask. Tomas reads the technical follow-up -- which plugins to update, which configurations to review, which monitoring to maintain. A good report serves both without making either feel lost.

Step 6: Self-Review the Report

Before delivering the report, direct Claude to review it against Ruta's original requirements:

Review the complete report against Ruta's original email. Does it answer her question about whether someone accessed her system? Is it written in language she can understand? Does it tell her what to fix first? Are findings in priority order? Does every finding include verification evidence?

This is the same self-review technique from Unit 4. A specific prompt produces useful findings. A vague "does this look good?" produces reassurance. The report is the deliverable Ruta acts on -- it needs to be checked with the same rigor you applied to the detection rules.
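The self-review prompt can be paired with a mechanical completeness check on the draft. The section titles below are assumptions based on the template described in Step 1 -- match them to the actual headings in materials/report-template.md.

```python
# Assumed section headings; adjust to match materials/report-template.md.
REQUIRED_SECTIONS = [
    "Executive Summary",
    "Findings",
    "Hardening",
    "Recommendations",
]

def missing_sections(report_text):
    """Return the required sections that never appear in the draft."""
    lowered = report_text.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]

draft = "# Executive Summary\n...\n# Findings\n..."
print(missing_sections(draft))  # sections still to be written
```

A check like this catches omissions; it cannot catch a section that exists but fails to answer Ruta's question -- that is what the directed self-review is for.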

Step 7: Send the Report to Ruta

This is the client touchpoint. Ruta receives the report and reads it carefully. She wants to know: did someone actually get into the system, or was the phishing email just a copy of her branding? She is relieved that the customer data vulnerabilities have been fixed. She appreciates that the report tells her what was most urgent.

She asks a follow-up question: "My nephew also set up a Facebook shop that connects to our inventory -- should we check that too?"

This is a reasonable question. It is also out of scope. The Facebook shop integration is a separate system with its own attack surface, its own data flows, and its own authorization boundaries. The right response acknowledges the concern, recommends a separate assessment, and does not start testing something that was never in the scope document.

Step 8: Push to GitHub

Commit the project and push to GitHub:

Commit all project files with the message "p2-t6: complete assessment report" and push to GitHub.

The project is complete. You directed Claude through the full assessment pipeline -- reconnaissance with version detection, exploitation of multiple vulnerability types, Sigma rule authoring for each finding, priority-based remediation, web application hardening, and a multi-finding report for a non-technical client. The purple team loop is the same pattern you practiced in P1; what changed is the number of vulnerability types, the detection complexity, and the professional judgment required to prioritize and communicate the findings.

✓ Check

Report contains: executive summary addressing the phishing question, at least three findings with priority rankings, remediation verification evidence, hardening actions, and forward-looking recommendations. Project pushed to GitHub.

Project complete

Nice work. Ready for the next one?