Learn by Directing AI
Unit 7

Fixing Across Domains

Step 1: Prioritize with CVSS and EPSS

You have exploitation findings from two domains -- the web platform and the restaurant API. AI will present them in discovery order. Discovery order is not priority order.

CVSS scores rate severity: how bad is it if this vulnerability is exploited? EPSS scores rate likelihood: how likely is it that this vulnerability will be exploited in the wild? The combination tells you where to focus remediation effort first.

List all confirmed exploitation findings from Units 5 and 6. For each finding, look up the CVSS score and, where a published CVE applies, the EPSS probability. Rank them by combined risk: most exploitable AND most severe AND most exposed in Dimitar's environment.

The winery's context matters for prioritization. The API handles 40% of revenue through 12 restaurant partners. A BOLA vulnerability on the API is higher priority than a reflected XSS on DVWA's lab pages, even if CVSS rates them similarly. EPSS captures real-world exploitation likelihood. CVSS captures theoretical severity. The environmental context -- what matters to Dimitar's business -- is yours to add.
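One way to make the ranking mechanical is to combine the three factors into a single score. The sketch below is illustrative only: the finding names, scores, and environmental weights are hypothetical stand-ins, and the multiplicative combination is one reasonable choice, not a standard formula.

```python
# Hypothetical findings with illustrative scores. CVSS (0-10) rates severity,
# EPSS (0-1) rates real-world exploitation likelihood, and "env" is a made-up
# environmental weight encoding exposure in Dimitar's business.
findings = [
    {"name": "BOLA on partner order endpoint", "cvss": 7.5, "epss": 0.42, "env": 1.0},
    {"name": "Reflected XSS on DVWA lab page", "cvss": 6.1, "epss": 0.18, "env": 0.3},
    {"name": "SQL injection on web platform", "cvss": 8.8, "epss": 0.55, "env": 0.8},
]

def combined_risk(f):
    # Normalize CVSS to 0-1 and multiply: a finding ranks high only when
    # it is severe AND likely to be exploited AND exposed in this environment.
    return (f["cvss"] / 10) * f["epss"] * f["env"]

for f in sorted(findings, key=combined_risk, reverse=True):
    print(f"{combined_risk(f):.3f}  {f['name']}")
```

With these sample numbers the SQL injection outranks the BOLA, and the lab-page XSS drops to the bottom -- the remediation order you would hand to a developer, not the order the findings were discovered in.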

Step 2: Remediate web findings

For each web finding, apply the fix and then verify two things: the exploit no longer works AND a detection rule fires on future attempts. Prevention plus detection -- both are required. The fix stops the current attack. The detection rule watches for the next one.

Remediate the [highest priority web finding]. Apply the fix, then re-run the original exploit to verify it fails. Check the detection rule from Unit 5 -- does it still fire on the remediation-test traffic?

Work through the web findings in priority order. For each fix, document: what was found, what was fixed, how the fix works, and why this approach. The documentation serves whoever maintains this platform next -- Dimitar's nephew, an external developer, or Dimitar himself hiring someone new.

AI may apply generic remediation patterns without checking whether they fit this application's specific architecture. A fix that works in theory but breaks the shopping cart is not a fix. Verify the application still functions after each remediation.
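For a reflected XSS finding, the prevention half of the pairing is output encoding at the point where user input enters the response. A minimal sketch, assuming a hypothetical search-results handler (the function and parameter names are invented for illustration):

```python
import html

def render_search_results(query: str) -> str:
    # Encode user-supplied input before it reaches the response body,
    # so a payload like <script>alert(1)</script> renders as inert text
    # instead of executing in the browser.
    return f"<p>Results for: {html.escape(query)}</p>"

# Re-running the original exploit payload should now return encoded output.
page = render_search_results("<script>alert(1)</script>")
assert "<script>" not in page
print(page)
```

Note the verification step baked in: the fix is confirmed by replaying the original payload, not by inspection. The functional check -- does search still work for legitimate queries -- still has to happen separately.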

Step 3: Remediate API findings

API remediation follows different patterns than web remediation. The web platform needed input validation and output encoding. The API needs authorization enforcement, input schema validation, and rate limiting. These are structurally different fixes.

Remediate the [highest priority API finding]. If this is a BOLA finding, the fix is authorization logic -- verifying that the requesting partner's key only grants access to their own data. Apply the fix and re-test.

AI may apply web remediation patterns to API endpoints -- output encoding for an authorization flaw, for example. The domain mismatch matters: encoding user output does not fix an authorization architecture problem. Catch the mismatch if it occurs.

For each API fix, apply the same prevention-plus-detection standard: the exploit fails AND the detection rule fires on the attempt.
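The BOLA fix above is an ownership check, not an encoding change. A minimal sketch of that authorization logic, with hypothetical key and order tables standing in for the real partner data:

```python
# Hypothetical lookup tables: which partner each API key belongs to,
# and which partner owns each order.
PARTNER_KEYS = {"key-abc": "partner-7", "key-def": "partner-9"}
ORDERS = {"order-101": "partner-7", "order-202": "partner-9"}

def get_order(api_key: str, order_id: str) -> dict:
    partner = PARTNER_KEYS.get(api_key)
    if partner is None:
        return {"status": 401, "error": "invalid key"}
    # The check that was missing: authorization (ownership), not just
    # authentication. A valid key must not grant access to another
    # partner's data.
    if ORDERS.get(order_id) != partner:
        return {"status": 403, "error": "not your resource"}
    return {"status": 200, "order": order_id}

# The original exploit -- a valid key requesting another partner's order --
# now fails, while legitimate access still works.
assert get_order("key-abc", "order-202")["status"] == 403
assert get_order("key-abc", "order-101")["status"] == 200
```

This is the structural difference Step 3 describes: no amount of output encoding would have changed the 403 above, because the flaw was in who is allowed to ask, not in how the answer is rendered.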

Step 4: Document the remediations

Each remediation gets a record: the finding, the fix, the verification result, and the reasoning. This documentation is a professional obligation -- it tells the next person what was wrong, what was done about it, and why this particular approach was chosen over alternatives.

Create a remediation log documenting each fix applied across both domains. Include: finding description, CVSS/EPSS scores, fix applied, verification result (exploit pass/fail), detection rule status, and rationale for the remediation approach.

The remediation log feeds directly into the final report. Organize it by the CVSS/EPSS priority you established, not by discovery order. Dimitar's developer needs specific remediation steps. Dimitar needs to know what was fixed and what risk remains.
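One possible shape for a log entry, so every fix carries the same fields. The field names here are a suggestion, not a required schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class RemediationRecord:
    finding: str
    cvss: float
    epss: float
    fix_applied: str
    exploit_retest: str        # "fail" means the original exploit no longer works
    detection_rule_fires: bool # True when the rule fires on the re-test attempt
    rationale: str             # why this approach over alternatives

# Illustrative entry; the scores and wording are hypothetical.
record = RemediationRecord(
    finding="BOLA on partner order endpoint",
    cvss=7.5,
    epss=0.42,
    fix_applied="Ownership check tying each API key to partner-scoped data",
    exploit_retest="fail",
    detection_rule_fires=True,
    rationale="Authorization enforced server-side; encoding cannot fix BOLA",
)
print(asdict(record))
```

Sorting a list of these records by the Step 1 priority score, rather than appending them in discovery order, produces the structure the final report needs.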

Step 5: Re-test all fixes

Run every original exploit against every remediated endpoint. No exceptions. A fix that "should work" but hasn't been re-tested is an assumption, not a verified remediation.

Re-run all exploitation payloads from Units 5 and 6 against the remediated endpoints. For each: confirm the exploit fails, confirm the detection rule fires on the attempt, and document the result.

Open Grafana and watch the detection rules fire on your re-test traffic. The before/after view makes the prevention-plus-detection pairing concrete: the exploit that worked before now fails, and the detection rule that watches for it fires on the attempt. Both verified. Both documented.

If any fix fails re-testing -- the exploit still works, or the detection rule doesn't fire -- go back and fix it. The re-test is not optional.
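The pass/fail logic for each re-test is small enough to state as code. A sketch of the verdict function, assuming you record two booleans per replayed exploit (the function name is invented for illustration):

```python
def verify_fix(exploit_succeeded: bool, detection_fired: bool) -> str:
    # Prevention plus detection: BOTH must hold for a verified remediation.
    if not exploit_succeeded and detection_fired:
        return "verified"
    if exploit_succeeded:
        return "fix failed: exploit still works"
    return "fix incomplete: detection rule did not fire"

assert verify_fix(False, True) == "verified"
assert verify_fix(True, True).startswith("fix failed")
assert verify_fix(False, False).startswith("fix incomplete")
```

Note the third branch: an exploit that fails while the detection rule stays silent is still not done. Blocking the attack without watching for the next attempt only satisfies half the standard.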

✓ Check

Check: At least one web fix and one API fix verified by re-exploitation. Each fix has a corresponding detection rule that fires on the attempt. Findings are ordered by combined CVSS/EPSS/environmental priority, not discovery order.