Step 1: Open the Defender's View
You just exploited Jean-Marc's booking page. sqlmap confirmed the injection, extracted guest records, and you documented the finding. Everything so far has been from the attacker's perspective — crafting queries, reading tool output, watching data come back.
Now switch sides.
Open Grafana in your browser. Navigate to the Explore view and make sure Loki is selected as the data source. Direct Claude to write a LogQL query that shows access logs from the DVWA container during the time window when your sqlmap scan ran:
Write a LogQL query for Grafana that shows access log entries from the DVWA container during the last 2 hours. I want to see the HTTP request logs so I can find the sqlmap traffic from my earlier scan.
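A plausible shape for the query Claude should land on is below. The label name `container` and the value `dvwa` are assumptions about how Alloy tagged the streams, so verify them in Grafana's label browser before trusting an empty result:

```logql
{container="dvwa"} |= "GET"
```

The `|=` operator is a plain substring line filter; later you can swap it for `|~` with a regex to match SQL keywords case-insensitively.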
The logging pipeline — Alloy collecting logs from the Docker containers, sending them to Loki for storage, Grafana querying Loki for display — is the foundation of the defense side. Without this pipeline, there is nothing to detect against. The attack happened whether or not anyone was watching. The question is whether the infrastructure exists to see it.
If Claude's query returns no results, do not accept "no logs available" as an answer. A query returning nothing could mean the query syntax is wrong, the label selectors don't match what Alloy configured, or the time window is off. Direct Claude to check the available labels and adjust. The Alloy-Loki-Grafana pipeline only exposes what it's configured to collect — what labels exist determines what queries are possible.
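Before accepting an empty result, broaden the query until something returns. These selectors assume nothing about Alloy's label scheme; Loki requires at least one non-empty matcher, and a regex match against any value is about as wide as a query gets:

```logql
{job=~".+"}
{container=~".+"}
```

Run one at a time in Explore. Whichever returns streams tells you which label Alloy actually attached, and the label browser lists the values you can narrow on.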
Step 2: Find Your Attack in the Logs
Once the query returns access log entries, look through them. Somewhere in those logs are the requests sqlmap sent — the crafted URLs with SQL syntax embedded in the parameter values. Single quotes, UNION SELECT statements, encoded special characters. Your attack is in there.
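A hand-written illustration of what one of those entries might look like in a combined-format access log (not real output from your lab; the path matches DVWA's SQLi page, everything else is invented):

```
172.18.0.5 - - [12/Mar/2025:14:03:27 +0000] "GET /vulnerabilities/sqli/?id=1%27%20UNION%20SELECT%20user%2Cpassword%20FROM%20users--+-&Submit=Submit HTTP/1.1" 200 5113 "-" "sqlmap/1.7.2#stable"
```

The tells are all in one line: SQL keywords in a parameter value, URL-encoded quote characters, and a user-agent string that names the tool.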
This is the moment. The same SQL injection you directed from the attacker's side is now visible from the defender's side. The sqlmap output showed you what data came back. The Grafana logs show you what the request looked like arriving at the server. Same event, two completely different views. From the attacker's side, it was a successful exploit. From the defender's side, it's a suspicious HTTP request with SQL keywords where a booking reference should be.
That dual perspective — seeing the same event from both sides — is the core insight of the purple team approach. The attacker knows what the exploit does. The defender knows what the exploit looks like. You now know both.
Step 3: Read the Noise
Look at the other entries in the logs. Not just the sqlmap traffic — everything else. Health checks from Docker. Legitimate GET requests. Automated probes. Grafana's own queries.
The SQL injection payloads stand out because you know what you sent. But imagine you're seeing these logs for the first time without knowing an attack happened. The injection requests are mixed in with normal traffic. Some of that normal traffic looks suspicious if you don't know the context — a health check hitting the same endpoint repeatedly, a monitoring tool making requests with unusual user-agent strings. Not everything that looks unusual is a threat. Not everything that looks normal is safe.
This is the signal-versus-noise problem, and it starts here. A defender reviewing these logs needs a way to separate the SQL injection payloads from the routine traffic without reading every line. That's what detection rules are for.
Step 4: Read the Sigma Rule
Open materials/sigma-rule-template.yml. This is a Sigma rule — a detection rule written in a vendor-neutral YAML format. It describes what attack pattern to look for in logs, and tools convert it into the specific query language your SIEM speaks.
Read the structure. A Sigma rule is organized into sections, each handling a different part of the detection logic.
The logsource section specifies what kind of log this rule applies to: web server access logs, application logs, or authentication logs. It tells the conversion tool where to look.
The detection section contains the patterns to match: SQL injection keywords like UNION SELECT, single-quote sequences, references to information_schema. These are the strings the rule searches for in each log entry.
The condition field ties the detection logic together. It defines how the patterns in the detection section combine: whether all patterns must match, or any one of them is enough to trigger the rule.
The level field classifies the rule's severity, meaning how urgent an alert it should produce. The tags field maps the rule to ATT&CK technique IDs, connecting the detection to the same taxonomy you used when mapping the SQL injection finding in Unit 3.
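A minimal rule with this structure might look like the sketch below. This is not the contents of materials/sigma-rule-template.yml, just an illustration of the sections described above, with invented keyword patterns:

```yaml
title: SQL Injection Keywords in Web Requests
status: experimental
logsource:
  category: webserver
detection:
  keywords:
    - "UNION SELECT"
    - "information_schema"
    - "%27"
  condition: keywords
level: high
tags:
  - attack.initial_access
  - attack.t1190
```

A `keywords` list with `condition: keywords` means any single pattern matching a log line is enough to fire; an `and` condition over multiple named blocks would require all of them.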
A detection rule is a pattern-matching instruction. The title might say "SQL Injection Detection," but the title does not determine whether the rule catches injection in this system's logs. What matters is whether the patterns in the detection block match what actually appears in the log entries you just saw in Grafana. If the rule looks for patterns in a field called request_uri but your logs store the URI in a field called request, the rule will never fire — regardless of what the title says.
Step 5: Test the Sigma Rule
LogQL is the query language Grafana uses to search logs stored in Loki; it's what you've been writing queries in since Step 1 of this unit. Converting a Sigma rule to LogQL translates the vendor-neutral detection logic into a query this specific monitoring stack can run.
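Conceptually, a keyword-style Sigma detection folds into a stream selector plus one regex line filter, with the keywords ORed together. A hand-converted sketch, where the label and patterns are illustrative assumptions rather than the template's actual contents (converters such as the pySigma Loki backend produce equivalent queries automatically):

```logql
{container="dvwa"} |~ `(?i)(UNION SELECT|information_schema|%27)`
```

Note what the conversion quietly decides for you: which stream selector to use, and whether matching is case-sensitive. Both are places where a correct rule can become a non-firing query.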
Direct Claude to test the rule against your attack traffic. The goal is to check whether the Sigma rule's detection patterns match the actual log entries from the sqlmap scan:
Using the Sigma rule in materials/sigma-rule-template.yml, convert it to a LogQL query and run it against the logs in Grafana. I want to see if the rule detects the SQL injection traffic from my earlier sqlmap scan. Show me what it matches.
If Claude reports that the rule fires and matches the injection traffic, verify it yourself. Look at the matched log entries. Do they contain actual SQL injection payloads — the UNION SELECT statements, the encoded characters? Or did the rule match something else entirely?
If Claude reports "no alerts fired," that answer needs investigation. The attack is in the logs — you saw it in Step 2. So either the Sigma rule's patterns don't match the log format, the conversion to LogQL introduced errors, or the field names don't align. Direct Claude to compare the rule's expected fields against the actual log structure. "No alerts" is not a conclusion. It's the start of a troubleshooting process.
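One way to take the troubleshooting out of Grafana entirely is to replay the rule's patterns against a few raw log lines in Python. Everything here is illustrative: the sample lines are hand-written stand-ins for your real logs, and the patterns stand in for whatever the template actually contains. The sketch also demonstrates one common failure mode: a `\s+` whitespace pattern never matches a URL-encoded space (`%20`), so a rule written for decoded traffic silently misses encoded traffic.

```python
import re

# Hypothetical patterns lifted from a Sigma rule's detection block.
patterns = [r"UNION\s+SELECT", r"information_schema", r"%27"]

# Hand-written stand-ins for log entries pulled from Grafana.
log_lines = [
    '"GET /vulnerabilities/sqli/?id=1%27%20UNION%20SELECT%20null--+ HTTP/1.1" 200',
    '"GET /health HTTP/1.1" 200',
    '"GET /vulnerabilities/sqli/?id=2&Submit=Submit HTTP/1.1" 200',
]

def matches(line: str) -> list[str]:
    """Return the patterns that fire on this line (case-insensitive)."""
    return [p for p in patterns if re.search(p, line, re.IGNORECASE)]

for line in log_lines:
    hits = matches(line)
    # The first line only trips the %27 pattern: UNION\s+SELECT cannot
    # see the URL-encoded space in UNION%20SELECT.
    print("ALERT" if hits else "ok   ", hits, line[:60])
```

If a pattern you expect to fire comes back empty here, the problem is the pattern or the log format, not Grafana, and you know exactly what to tell Claude to fix.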
Step 6: Evaluate the Rule
The Sigma rule fired on your attack traffic. That's necessary but not sufficient. The question that matters for Jean-Marc's actual booking page is whether this rule would be useful in production.
Direct Claude to evaluate the rule's specificity:
Look at the Sigma rule's detection patterns and the logs from the DVWA container. If this rule ran continuously on Jean-Marc's real booking page, would it fire only on SQL injection attempts, or would it also fire on legitimate guest searches? What would the false positive rate look like?
Think about what the rule matches. If the detection pattern includes the keyword SELECT and Jean-Marc's booking system logs contain legitimate database queries with SELECT in them, the rule fires on every normal search. A rule that fires hundreds of times a day on legitimate traffic trains the person watching to ignore alerts — and the real attack disappears into the noise.
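The difference is easy to demonstrate: a bare SELECT pattern fires on ordinary application logging, while a pattern anchored to injection syntax does not. Both log lines below are invented for illustration:

```python
import re

legit = "query executed: SELECT name FROM rooms WHERE id = 7"
attack = "GET /booking?id=1%27 UNION SELECT user,password FROM users-- -"

broad = re.compile(r"SELECT", re.IGNORECASE)                # fires on both lines
narrow = re.compile(r"UNION\s+SELECT|%27", re.IGNORECASE)   # fires on the attack only

assert broad.search(legit) and broad.search(attack)
assert narrow.search(attack) and not narrow.search(legit)
```

The broad pattern would page someone on every routine query; the narrow one keys on syntax that has no business in a booking reference. Neither extreme is free: the narrow pattern also misses attacks that avoid those exact strings, which is the coverage side of the trade-off.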
A useful detection rule is specific enough to catch the attack pattern you found without drowning the defender in false positives. That balance — between detection coverage and alert fatigue — is the design problem at the center of detection engineering. You don't need to solve it right now. You need to see that it exists.
Check: SQL injection payload visible in Grafana logs. Sigma rule fires on replayed attack.