Step 1: Review exploitation evidence from the defender's side
Switch perspectives. In Unit 5, you were the attacker. Now you are the defender who needs to catch what you just did.
For each exploitation path, identify what the attack looks like in the logs. The SQL injection against the export tracker produces specific patterns in the access logs -- unusual characters in query parameters, error responses, data extraction patterns. Lateral movement between services produces network connections that normal operations do not. The fermentation API abuse produces API calls from unexpected sources.
Review the exploitation evidence from Unit 5. For each confirmed attack path, describe what the attack looks like from the defender's perspective: what log entries it produces, what network traffic patterns it creates, and what application behavior is abnormal. Identify the specific patterns a detection rule needs to match.
Open Grafana at http://localhost:3001 and explore the logs in Loki. Can you find traces of the exploitation you performed? The attacks are already in the logs -- the question is whether the current monitoring setup captures enough detail to distinguish them from normal traffic.
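A starting point for that exploration is a LogQL query that filters the export tracker's access logs for injection syntax. This is a sketch: it assumes Alloy labels those logs with job="export-tracker" (substitute whatever labels your setup actually applies), and the regex covers only the common UNION-based and tautology patterns.

```logql
{job="export-tracker"}
  |= "/search"
  |~ `(?i)(union\s+select|'\s*or\s+1=1|sqlite_master)`
```

If this returns nothing, broaden the label selector first ({job=~".+"}) to confirm the logs are arriving at all before assuming the attack left no trace.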
Step 2: Write web application detection rules
Write Sigma rules for the web application attack patterns. The export tracker's SQL injection produces distinctive log entries. The member portal's authentication patterns reveal unauthorized access attempts.
Write a Sigma rule that detects SQL injection attempts against the export tracker. The rule should match the specific patterns from our exploitation -- UNION-based injection, error-based extraction, or the specific query patterns used against the /search endpoint. Avoid matching on generic keywords like SELECT that would fire on every legitimate search.
A Sigma rule that matches "SELECT" in the access log fires on every legitimate search the cooperative's buyers run. The rule validates -- sigma check passes -- but it generates hundreds of alerts a day, training Andres to ignore them. The rule needs to match the attack pattern specifically enough to distinguish injection attempts from normal buyer search queries.
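A sketch of a more specific rule, assuming the access logs map onto Sigma's generic webserver fields (cs-uri-stem, cs-uri-query). The injection strings listed here are illustrative -- replace them with the exact payloads from your Unit 5 exploitation:

```yaml
title: SQL Injection Attempt Against Export Tracker Search Endpoint
status: experimental
description: Matches UNION-based and tautology injection syntax in /search queries, not bare SQL keywords.
logsource:
  category: webserver
detection:
  selection_endpoint:
    cs-uri-stem|contains: '/search'
  selection_injection:
    cs-uri-query|contains:
      - 'UNION SELECT'
      - "' OR 1=1"
      - 'sqlite_master'
  condition: selection_endpoint and selection_injection
falsepositives:
  - Buyer searches whose terms coincidentally contain full injection syntax (rare)
level: high
```

Requiring both the endpoint and the injection syntax is what keeps this from firing on a buyer who searches for "union cooperative lots" -- the word alone never matches, only the attack grammar does.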
Write a second rule for lateral movement detection. If the exploitation in Unit 5 showed that the export tracker could reach the fermentation API, a detection rule should flag unexpected cross-service connections.
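One possible shape for that rule, assuming connection logs reach Loki with source and destination fields and using placeholder addresses (the IPs and port here are hypothetical -- use the actual service addresses from your lab network):

```yaml
title: Unexpected Connection to Fermentation API
status: experimental
description: Flags cross-service connections to the fermentation API from anything other than its expected clients.
logsource:
  category: firewall
detection:
  selection:
    dst_ip: 172.20.0.20    # placeholder: fermentation API
    dst_port: 8000          # placeholder: API port
  filter_expected_sources:
    src_ip:
      - 172.20.0.30         # placeholder: sensor gateway
      - 172.20.0.5          # placeholder: monitoring agent
  condition: selection and not filter_expected_sources
```

Note the allowlist structure: instead of enumerating attackers, the rule enumerates legitimate clients and alerts on everything else, which is usually the only tractable approach for lateral movement.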
Step 3: Write a supply chain indicator rule
The Semgrep findings from Unit 5 may have revealed a vulnerable dependency in the payment processor. Write a detection rule that would catch exploitation of that specific vulnerability.
Supply chain indicator rules are narrower than the generic rules from earlier projects. You are not detecting "any suspicious activity" -- you are detecting the specific exploitation pattern for a known CVE in a specific dependency. The PyYAML vulnerability, for example, has a specific exploitation pattern that produces identifiable log entries when unsafe YAML loading is triggered.
Write a Sigma rule that detects exploitation of the vulnerable dependency found in the payment processor. The rule should match the specific attack pattern for that CVE -- not generic suspicious activity, but the actual exploitation signature.
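If the vulnerable dependency is in fact PyYAML's unsafe loader, the exploitation signature is well known: the payload must carry Python object-construction tags such as !!python/object/apply to trigger code execution. A sketch of a rule keyed to those tags (the logsource service name is a placeholder for wherever the payment processor's logs land):

```yaml
title: PyYAML Unsafe Deserialization Exploitation Attempt
status: experimental
description: Detects Python object-construction tags in YAML payloads reaching the payment processor.
references:
  - CVE for the specific PyYAML version found by Semgrep
logsource:
  service: payment-processor   # placeholder: adjust to your log source
detection:
  keywords:
    - '!!python/object/apply'
    - '!!python/object/new'
    - '!!python/name:'
  condition: keywords
falsepositives:
  - None expected; these tags have no legitimate use in the cooperative's data
level: critical
```

This is what "narrow" means in practice: the rule matches the exploitation grammar of one CVE, so its false positive rate should be near zero.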
With this rule you have three detection rules covering three layers: web application exploitation, lateral movement, and supply chain exploitation. Each layer has a different log format, a different detection paradigm, and different false positive characteristics.
Step 4: Tune rules against normal traffic
The cooperative has legitimate traffic patterns that detection rules must not flag. Buyers log in to check shipment status. Farmers submit harvest data through the member portal. Fermentation sensors send readings every few minutes. Payment processing generates API calls during export cycles.
Replay your attack traffic and verify that each rule fires. Then run the cooperative's normal operations and verify that each rule stays silent.
Test each Sigma rule against both attack replay traffic and normal cooperative operations. For each rule, document: does it fire on the attack? Does it fire on normal traffic? What is the false positive rate? If a rule fires on normal operations, what needs to change?
A rule that fires on SQL injection replay but also fires when a buyer searches for "Portland shipments" needs tuning. A rule that fires on lateral movement but also fires when the monitoring stack collects logs from the fermentation API needs a different approach -- the monitoring traffic is legitimate cross-service communication.
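The tuning loop can be prototyped before touching the live stack. The sketch below tests a detection regex (mirroring the SQL injection rule's logic -- the pattern and sample log lines are illustrative, not your actual traffic) against both attack and normal lines, counting true and false positives:

```python
import re
from urllib.parse import unquote_plus

# Hypothetical pattern mirroring the Sigma rule: /search endpoint plus
# UNION-based or tautology injection syntax, case-insensitive.
INJECTION = re.compile(
    r"/search\?.*(union\s+select|'\s*or\s+1=1|sqlite_master)", re.IGNORECASE
)

attack_lines = [
    "GET /search?q=beans%27%20UNION%20SELECT%20name%20FROM%20sqlite_master-- 200",
    "GET /search?q=x' OR 1=1-- 500",
]
normal_lines = [
    "GET /search?q=Portland+shipments 200",
    "GET /search?q=union+cooperative+lots 200",  # contains "union", no injection syntax
]

def fires(line: str) -> bool:
    # URL-decode first so encoded payloads (%27, %20) are matched too.
    return INJECTION.search(unquote_plus(line)) is not None

true_positives = sum(fires(line) for line in attack_lines)
false_positives = sum(fires(line) for line in normal_lines)
print(f"attack hits: {true_positives}/{len(attack_lines)}")
print(f"normal hits: {false_positives}/{len(normal_lines)}")
```

A harness like this makes the tuning criterion concrete: every attack line must fire, every normal line must stay silent, and any normal line that fires tells you exactly which part of the pattern is too broad.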
Step 5: Add the fermentation API log source
If the exploitation in Unit 5 used the fermentation sensors as a lateral movement path, the sensor traffic needs to be visible in Loki for detection to work.
Adding a log source is a visibility decision. Without the fermentation API logs in Loki, the lateral movement detection rule has nothing to match against. The rule exists but cannot fire because the data is not there.
Check whether the fermentation API logs are being collected by the Alloy agent and forwarded to Loki. If not, add the fermentation API as a log source in the Alloy configuration. Verify that logs appear in Grafana after the change.
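If the logs are missing, the addition is a small Alloy configuration block. This is a sketch with assumed paths and endpoint -- the log path, label value, and Loki URL must match your lab's actual container layout, and if your config already has a loki.write block, forward to its existing receiver instead of adding a second one:

```alloy
// Placeholder path: wherever the fermentation API writes its logs.
local.file_match "fermentation_api" {
  path_targets = [{"__path__" = "/var/log/fermentation-api/*.log", "job" = "fermentation-api"}]
}

loki.source.file "fermentation_api" {
  targets    = local.file_match.fermentation_api.targets
  forward_to = [loki.write.default.receiver]
}

loki.write "default" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"  // placeholder: your Loki push endpoint
  }
}
```

After reloading Alloy, confirm in Grafana that {job="fermentation-api"} returns log lines before re-running the lateral movement rule test.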
Verify the complete detection picture. Three rules covering three layers: web application, lateral movement, and supply chain. Each tested against attack and normal traffic. Each with a documented false positive rate. The detection engineering is now paired with the exploitation evidence -- for every attack path demonstrated, there is a corresponding detection capability.
Check: Detection rules written for multiple layers, rules tested against attack and normal traffic, false positive rates documented. At least 3 Sigma rules (web, lateral movement, supply chain), at least 1 tested against both traffic types, false positive rate documented.