Step 1: Test DVWA at Medium difficulty
Open DVWA at localhost:8080 and navigate to the SQL Injection page. In previous projects, you exploited this at Low difficulty. The payload that extracted the database worked because there was no input filtering at all.
Medium difficulty changes the rules. The same payload that worked before will fail. Something is blocking it.
Direct Claude to test the SQL injection:
Test the DVWA SQL Injection page at Medium difficulty. Try the payloads that worked at Low difficulty. When they fail, explain what the filter is blocking and why.
AI's first attempt will likely use the same payloads from Low difficulty -- or payloads from its training data that target Low difficulty configurations. Watch what happens. The payload fails, but AI may not immediately understand why. Its knowledge of DVWA filter behavior is anchored to whichever difficulty level appeared most in its training data.
The reasoning exercise is yours: what does the filter block? What does it allow through? Understanding the filter's logic -- not just finding a payload that works -- is what separates exploitation from guessing.
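That reasoning can be made concrete with a small simulation. The sketch below is a hedged assumption, not DVWA's actual Medium source: suppose the filter backslash-escapes quote characters (in the spirit of mysqli_real_escape_string) and the id is interpolated into the query without surrounding quotes. Under those assumptions, quoted payloads break while quoteless numeric payloads pass untouched:

```python
# Sketch of a quote-escaping filter (assumption -- not DVWA's exact source).
def escape_quotes(user_input: str) -> str:
    # Backslash-escape the characters a quote-based injection relies on.
    for ch in ("\\", "'", '"'):
        user_input = user_input.replace(ch, "\\" + ch)
    return user_input

def build_query(user_id: str) -> str:
    # The id is interpolated WITHOUT surrounding quotes -- a numeric
    # context, which quote-escaping does nothing to protect.
    return f"SELECT first_name FROM users WHERE user_id = {escape_quotes(user_id)}"

quoted = "1' OR '1'='1"    # relies on quotes -> gets escaped, injection breaks
unquoted = "1 OR 1=1"      # needs no quotes -> passes through untouched

print(build_query(quoted))    # ... user_id = 1\' OR \'1\'=\'1
print(build_query(unquoted))  # ... user_id = 1 OR 1=1
```

If your observed behavior differs -- say, quoted payloads survive but keywords disappear -- the filter logic is different, and the same classify-then-bypass reasoning still applies.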
Step 2: Bypass the filter
Once you understand what the Medium filter blocks, craft a bypass. Direct Claude:
The Medium difficulty filter blocks [what you identified]. Craft a SQL injection payload that bypasses this specific filter. Explain WHY the bypass works -- what the filter misses.
AI will generate payloads. Some will work. The question is whether you can explain why. "It worked" is not analysis. "The filter strips single quotes but not double quotes, so the payload uses double-quoted string delimiters" is analysis. The explanation matters more than the payload because the next filter will be different.
Step 3: Test DVWA XSS at Medium difficulty
Navigate to the XSS (Reflected) page in DVWA. The same pattern applies -- the simple <script>alert(1)</script> that worked at Low difficulty fails at Medium.
Test the DVWA Reflected XSS page at Medium difficulty. The basic script tag payload fails. Analyze what the filter blocks and craft a bypass with explanation.
AI commonly gets filter evasion wrong at higher difficulty levels -- it generates payloads that worked historically but may not account for the specific sanitization logic at this difficulty. Test each payload AI suggests. When one fails, that failure tells you something about the filter.
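One hypothesis worth testing when the literal tag fails is a case-sensitive, literal string removal. This is an assumption about the filter, not DVWA's verified source; the sketch below shows that hypothesis and the payload classes it would miss:

```python
def medium_style_filter(user_input: str) -> str:
    # Hypothetical filter: strip only the literal lowercase opening tag.
    return user_input.replace("<script>", "")

blocked = "<script>alert(1)</script>"       # literal tag -> removed, payload breaks
case_mix = "<ScRiPt>alert(1)</script>"      # different case -> filter never matches
no_script = "<img src=x onerror=alert(1)>"  # different vector -> nothing to strip

print(medium_style_filter(blocked))    # alert(1)</script>
print(medium_style_filter(case_mix))   # unchanged
print(medium_style_filter(no_script))  # unchanged
```

If case variation fails too, the filter is likely case-insensitive or regex-based, and the next hypothesis is an event-handler or encoding bypass. Each failed payload narrows the filter's logic.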
Two filter bypasses across two vulnerability types build the pattern: understand the defense before crafting the offense. This reasoning transfers to every filter you encounter in the field.

Step 4: Exploit Juice Shop's web interface
Move to Juice Shop at localhost:3000. The scanner findings from Unit 4 identified probable vulnerabilities. Now test them.
Start with the findings you triaged as "confirmed" or "probable." Direct Claude to test each one against the actual application:
Test the [specific finding] from the scanner results against Juice Shop. Confirm whether it's exploitable. Document what data is accessible if the exploit succeeds.
Some scanner "High" findings will confirm as real vulnerabilities. Others will turn out to be false positives -- the scanner flagged a pattern that looked suspicious but isn't actually exploitable in this application. Record both results. The false positives matter for the report: they demonstrate triage quality.
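Recording both outcomes in a consistent structure makes the report section easier to write later. A minimal sketch; the field names and example findings are illustrative, not a required schema or real scanner output:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    scanner_id: str   # identifier from the Unit 4 scan (illustrative)
    description: str
    status: str       # "confirmed" or "false_positive"
    evidence: str     # request/response excerpt, screenshot path, etc.

findings = [
    Finding("SCAN-01", "SQL injection in product search", "confirmed",
            "UNION-based payload returned rows from another table"),
    Finding("SCAN-02", "Suspicious parameter reflection", "false_positive",
            "Output is HTML-encoded; payload never executes"),
]

confirmed = [f for f in findings if f.status == "confirmed"]
false_pos = [f for f in findings if f.status == "false_positive"]
print(f"{len(confirmed)} confirmed, {len(false_pos)} false positives")
```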
Step 5: Build an exploit chain
Individual findings are data points. An exploit chain tells the business impact story. Chain multiple vulnerabilities together: authenticate as a user, escalate access, and exfiltrate data.
Build an exploit chain using the confirmed vulnerabilities in Juice Shop. Start with [initial access finding], escalate to [privilege escalation finding], and demonstrate what data an attacker could access. Document each step.
The chain changes the narrative. "An attacker could bypass login" is one finding. "An attacker could bypass login, access all customer records, view order histories, and retrieve stored payment information" is a business impact story. When you explain this to Dimitar later, the chain is what makes him understand the risk -- not the individual CVE numbers.
Document the chain with evidence at each step. The report needs this.
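A lightweight structure keeps the per-step evidence organized; the step names and evidence strings below are placeholders for your actual findings, not results from this lab:

```python
from dataclasses import dataclass

@dataclass
class ChainStep:
    action: str    # what the attacker does at this step
    finding: str   # the confirmed vulnerability used (placeholder)
    evidence: str  # proof captured at this step

chain = [
    ChainStep("Initial access", "login bypass via SQL injection",
              "authenticated without valid credentials"),
    ChainStep("Escalation", "broken access control",
              "retrieved records restricted to admin users"),
    ChainStep("Exfiltration", "unrestricted data access",
              "enumerated customer records and order histories"),
]

def impact_summary(chain: list[ChainStep]) -> str:
    # Collapse the steps into the single narrative the report needs.
    return " -> ".join(f"{s.action} ({s.finding})" for s in chain)

print(impact_summary(chain))
```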
Step 6: Write detection rules
Switch perspective. You just exploited the web platform -- now write detection rules that would catch those exploitation patterns in the logs.
Open Grafana at localhost:3001 and find the log entries from your exploitation. The chained attack produced a sequence of events: the initial injection, the privilege escalation, the data access. A single Sigma rule is insufficient for the chain -- it catches individual events but misses the pattern.
Write Sigma rules to detect the exploitation patterns from this unit. For the exploit chain, consider how to detect the sequence of events, not just individual steps. Convert the rules and verify they fire on attack replay.
AI commonly generates detection rules that are syntactically valid but either too broad (firing on legitimate traffic) or too narrow (matching only the exact payload used). Run the rules against both the attack traffic and normal application use. A rule that fires on every search query is useless in production.
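The broad-versus-narrow failure mode can be checked mechanically before a rule ever reaches the SIEM. The sketch below uses a regex as a stand-in for the compiled rule; the pattern and sample log lines are illustrative, not output from this lab:

```python
import re

# Stand-in for a compiled detection rule: UNION-based SQLi in a query
# string, tolerating case and extra whitespace -- broader than the exact
# payload used, narrower than "every search query".
rule = re.compile(r"union\s+select", re.IGNORECASE)

attack_logs = [
    "GET /search?q=apple' UNION SELECT user,password FROM users",
    "GET /search?q=1 uNiOn   sElEcT 1,2",   # case/spacing variant
]
normal_logs = [
    "GET /search?q=apple juice",
    "GET /search?q=union jack mug",          # contains 'union' alone
]

def fires(log_line: str) -> bool:
    return rule.search(log_line) is not None

# A usable rule fires on every attack line and no normal line.
assert all(fires(line) for line in attack_logs)
assert not any(fires(line) for line in normal_logs)
print("rule fires on attack replay only")
```

The same harness exposes the opposite failure: obfuscated variants like inline-comment spacing will slip past this particular pattern, which tells you where the rule is still too narrow.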
Check:
- At least one payload that bypasses Medium-difficulty filtering, with an explanation of WHY it bypasses (not just that it works).
- At least one exploit chain documented with business impact.
- Detection rules fire on attack replay and do not fire on normal application use.