Step 1: Configure and run ZAP
ZAP is a web application vulnerability scanner. Unlike Nmap, which maps network services, ZAP tests application behaviour -- it sends crafted requests to endpoints and interprets the responses.
Before running anything, configure scope. ZAP's spider follows links. If you don't restrict it, it will crawl external sites, probe third-party resources, and potentially test systems you are not authorised to assess.
Direct Claude to set up ZAP and configure it for the consumer web platform:
Set up ZAP to scan the Juice Shop instance at localhost:3000. Configure the scope to include only localhost:3000 -- exclude all external URLs. Then run an active scan.
After the scan completes, check ZAP's history. Did any requests go to domains outside localhost? If so, the scope configuration failed and must be corrected before proceeding.
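The scope check can be automated instead of eyeballed. A minimal sketch, assuming you have exported the request URLs from ZAP's history (the URLs and allowed host below are hypothetical):

```python
from urllib.parse import urlparse

def out_of_scope(history_urls, allowed_hosts=("localhost:3000",)):
    """Return requested URLs whose host:port is outside the configured scope."""
    return [u for u in history_urls if urlparse(u).netloc not in allowed_hosts]

# Hypothetical URLs pulled from ZAP's request history
history = [
    "http://localhost:3000/rest/products/search?q=apple",
    "http://cdn.example.com/lib/jquery.min.js",  # would indicate a scope failure
]
print(out_of_scope(history))  # → ['http://cdn.example.com/lib/jquery.min.js']
```

An empty list means every request stayed in scope; anything else means the scope configuration failed.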
Step 2: Run Nuclei scans
Nuclei works differently from ZAP. Instead of probing application behaviour, it checks for known vulnerability patterns using templates. Each template describes a specific vulnerability signature -- an exposed configuration file, a known CVE, a default credential.
Run Nuclei against both the consumer platform and the API endpoints:
Run Nuclei against localhost:3000 including the API paths. Use the default template set. Present the results grouped by severity.
Note the difference: ZAP tested behaviour (sending crafted input and watching what happened). Nuclei matched patterns (checking if known vulnerability signatures are present). Each has blind spots. ZAP misses vulnerabilities that don't trigger on its test inputs. Nuclei misses vulnerabilities not covered by its templates.
Step 3: Discover hidden content with ffuf
Not everything is linked from the main interface. API endpoints, admin panels, backup files, and configuration pages may exist without any visible link.
Use ffuf to discover hidden directories and files on localhost:3000. Try common wordlists for web directories and API endpoints.
ffuf sends requests for common paths and watches for non-404 responses. What it finds supplements the scanner results -- an unlisted API endpoint that neither ZAP nor Nuclei tested is a gap in coverage.
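That coverage gap is a set difference: paths ffuf discovered minus paths the scanners probed. A sketch with hypothetical path lists:

```python
# Hypothetical: paths ZAP/Nuclei requested vs. paths ffuf discovered
scanner_tested = {"/", "/rest/products/search", "/api/Users"}
ffuf_found = {"/", "/api/Users", "/ftp", "/api/BasketItems"}

# Anything ffuf found that no scanner touched is untested attack surface
coverage_gap = sorted(ffuf_found - scanner_tested)
print(coverage_gap)  # → ['/api/BasketItems', '/ftp']
```

Feed the gap back into the scanners: re-run ZAP or Nuclei against those paths explicitly.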
Step 4: Triage the combined results
You now have findings from three tools. ZAP produced severity-rated alerts. Nuclei produced template matches. ffuf found paths. AI will present all of these as equally important confirmed vulnerabilities.
They are not.
Look at the ZAP results. Some "High" findings are real -- ZAP confirmed the vulnerability by exploiting it. Some are probable -- ZAP detected a pattern consistent with a vulnerability but didn't confirm it. Some are false positives -- ZAP flagged behaviour that looks suspicious but isn't actually exploitable.
Triage the findings into three categories:
Confirmed: evidence of exploitability. The scanner demonstrated the vulnerability.
Probable: strong indicators requiring manual testing. The scanner detected a pattern but didn't confirm it.
Informational: low-confidence matches, version disclosures, or configuration details. Useful for context but not actionable without investigation.
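One way to make the triage rule explicit is to encode it. This is a hypothetical rule keyed on the evidence each scanner provided, not ZAP's own classification logic:

```python
def triage(finding):
    """Hypothetical triage rule keyed on the evidence the scanner provided."""
    if finding.get("exploited"):               # scanner demonstrated it
        return "confirmed"
    if finding.get("severity") in ("critical", "high"):
        return "probable"                      # strong indicator, verify manually
    return "informational"                     # context only until investigated

print(triage({"name": "SQL injection", "severity": "high", "exploited": True}))
print(triage({"name": "CSP header missing", "severity": "medium"}))
```

The rule itself is a judgment call; the point is that the judgment is yours, not the scanner's.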
AI presents all findings at scanner severity. The professional act is triage -- deciding which findings are real.
Step 5: Correlate with reconnaissance
Map the triaged findings back to your target profile and threat model. Which scanner findings align with threats you identified? Which suggest attack paths the threat model didn't anticipate?
The scanner found an SQL injection in the search form -- this aligns with the Tampering threat you modelled. The scanner found an API endpoint that returns data without authentication -- this may relate to the Information Disclosure threat for partner data. Nuclei matched a template for an outdated library -- this is a supply chain risk you may not have modelled.
The correlation matters because it determines what you test next. Confirmed findings go to exploitation. Probable findings get manual testing. Informational findings inform the report but don't drive action.
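The correlation step can be recorded as a simple mapping from findings to modelled threats, which also surfaces anything the threat model missed. The finding names and STRIDE entries below are hypothetical:

```python
# Hypothetical mapping of triaged findings back to STRIDE threats from the model
threat_model = {
    "Tampering": "search form input handling",
    "Information Disclosure": "partner data API",
}

finding_to_threat = {
    "SQL injection in search form": "Tampering",
    "Unauthenticated API data endpoint": "Information Disclosure",
    "Outdated library version (Nuclei match)": None,  # not anticipated by the model
}

# Findings with no modelled threat mean the threat model needs revisiting
unanticipated = [f for f, t in finding_to_threat.items() if t is None]
print(unanticipated)
```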
Meanwhile, think about what the scanner activity looks like from the defender's perspective. ZAP's spider generated hundreds of sequential requests. Nuclei's template scanning produced repeating probe patterns. This automated activity is trivially detectable in the logs. Check Grafana -- can you see the scanner traffic? It should be obvious.
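Why is it obvious? Scanner traffic concentrates hundreds of requests into short windows. A sketch of the per-minute rate check a defender might run over access logs (timestamps and threshold are hypothetical):

```python
from collections import Counter

# Hypothetical access-log timestamps, bucketed to the minute
log_minutes = ["12:01"] * 4 + ["12:02"] * 350 + ["12:03"] * 420

THRESHOLD = 100  # requests/minute no human browsing session sustains
bursts = [minute for minute, n in Counter(log_minutes).items() if n > THRESHOLD]
print(bursts)  # → ['12:02', '12:03']
```

A Grafana rate panel is doing essentially this aggregation over the same logs.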
Check:
- Scanner scope was configured correctly (no out-of-scope requests in ZAP history).
- At least one finding is classified as "probable -- needs manual confirmation" rather than accepted at face value.
- Findings are triaged into confirmed, probable, and informational.