Step 1: Identify scanning gaps
The correlated findings from Unit 3 show what Nmap, ZAP, and Nuclei found. Now think about what they missed.
Default scanner templates check for known vulnerability patterns. But the cooperative's infrastructure has specific conditions that generic templates do not cover. The export tracker's API may return buyer pricing data in ways that no standard template checks for. The member portal may have stale accounts from former staff that authentication scanners do not detect. The fermentation API has no authentication at all -- scanners report this as a misconfiguration, but the business impact depends on what the API can reach.
Review your scan results and threat model. For each high-priority threat that does not have a confirmed finding from Unit 3, ask: could a custom, targeted check reveal what the automated tools missed?
Step 2: Generate and review the first NSE script
Direct AI to write a custom Nmap NSE script for a specific condition. For example, testing whether the export tracker's API endpoints return buyer pricing data in HTTP responses or headers.
Write a custom Nmap NSE script that tests the export tracker (port 3000) for buyer pricing data exposure. The script should make an HTTP request to the shipments API endpoint and check whether the response contains pricing information. Include proper NSE library imports and result formatting.
Before running the script, read it. AI generates NSE scripts that execute without errors but may test the wrong condition. A script that checks whether port 3000 is open tells you nothing you do not already know. A script that checks whether the /api/shipments endpoint returns pricing data in the response body is testing a specific, meaningful condition.
Look for:
- Does the script target the right endpoint?
- Does the pattern matching check for the correct response data?
- Does the script handle connection errors and edge cases?
If the script tests the wrong thing, correct the prompt. Be specific about what condition you need to verify and what the expected vulnerable response looks like.
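Being concrete about what "contains pricing information" means makes both the prompt and the review easier. A minimal sketch of the matching logic in Python -- the real check would live in the NSE script's Lua, and the field names here (unit_price, total_price, buyer_rate) are assumptions about the export tracker's response format, not confirmed API fields:

```python
import re

# Hypothetical field names assumed to indicate buyer pricing exposure.
PRICING_PATTERN = re.compile(
    r'"(price|unit_price|total_price|buyer_rate)"\s*:\s*[\d.]+'
)

def contains_pricing(body: str) -> list[str]:
    """Return the pricing field names found in a raw HTTP response body."""
    return [m.group(1) for m in PRICING_PATTERN.finditer(body)]

# Sample response body standing in for the shipments endpoint's output.
sample = '{"shipment": "SH-104", "buyer": "Acme", "unit_price": 4.25}'
print(contains_pricing(sample))  # a non-empty list means the condition matched
```

A script built around a condition this specific tells you something; one that only confirms the port is open does not.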
Step 3: Run and verify the first script
Run the custom NSE script against the target.
nmap --script=./custom-check.nse -p 3000 localhost
The script produces output. Is the finding real? Verify it through a second method. If the NSE script reports that buyer pricing is exposed, confirm it manually -- curl the same endpoint and check the response yourself.
curl -s http://localhost:3000/api/shipments | head -50
If you are not sure how to verify a specific finding, that uncertainty is a directing opportunity. Ask Claude to help you design a verification approach for the specific condition the script tested. Be honest about what you do not know -- if you overstate your certainty, AI will design verification for a more experienced practitioner and you may miss something.
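One concrete verification approach: extract the actual pricing values from both the NSE script's output and the raw curl response, and confirm they agree. A sketch under the assumption that the values are plain decimal numbers -- the two sample strings below are illustrative, not real tool output:

```python
import re

def extract_amounts(text: str) -> set[str]:
    """Pull decimal amounts like 4.25 out of any tool's text output."""
    return set(re.findall(r"\b\d+\.\d{2}\b", text))

# Illustrative stand-ins for the two evidence sources.
nse_output = "3000/tcp open  | pricing-check: exposed values 4.25, 17.80"
curl_body = '{"shipments": [{"unit_price": 4.25}, {"unit_price": 17.80}]}'

# The finding is corroborated only if every value the script reported
# also appears in the independently fetched response.
corroborated = extract_amounts(nse_output) <= extract_amounts(curl_body)
print(corroborated)
```

Comparing concrete values, rather than just "both tools flagged it," guards against a script that pattern-matches on the wrong data.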
Step 4: Generate the second targeted check
With the experience from the first script, write a more specific prompt for a second targeted check. This time, give AI tighter constraints -- the exact endpoint, the exact condition, and what the vulnerable response looks like.
Write an NSE script that tests the member portal (port 5000) for stale user account access. The script should attempt to list users or check for accounts that have not been active recently. The expected response format from the member portal's API includes user data with last_login timestamps.
The difference between the first and second prompts reflects what you learned. The first was broader -- "check for pricing data exposure." The second specifies the endpoint, the condition, and the expected response pattern. More specific direction produces more accurate scripts.
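The stale-account condition also benefits from being pinned down before you prompt. A sketch of the check logic in Python, assuming the portal returns JSON user records with ISO-format last_login timestamps and treating 90 days as the staleness threshold -- both the field name and the threshold are assumptions you should state explicitly in your prompt:

```python
import json
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # assumed threshold; match your org's policy

def stale_accounts(users_json: str, now: datetime) -> list[str]:
    """Return usernames whose last_login is older than the threshold."""
    users = json.loads(users_json)
    return [
        u["username"]
        for u in users
        if now - datetime.fromisoformat(u["last_login"]) > STALE_AFTER
    ]

# Illustrative records standing in for the member portal's user data.
sample = json.dumps([
    {"username": "mgarcia", "last_login": "2025-01-10T09:00:00"},
    {"username": "former-staff", "last_login": "2023-06-01T12:00:00"},
])
print(stale_accounts(sample, now=datetime(2025, 2, 1)))  # ['former-staff']
```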
Run this script and verify its findings through the same cross-tool process.
Step 5: Document custom findings
Add the custom scan findings to your assessment documentation alongside the automated results from Unit 3. The assessment now includes three layers of evidence: automated scanner results, cross-tool correlations, and custom targeted checks.
Update the findings documentation to include the custom NSE script results. For each custom finding, include: what the script tested, what it found, how the finding was verified, and how it relates to the automated scan results. Note any findings that custom checks revealed but automated tools missed.
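A lightweight record structure keeps the custom findings consistent with the automated ones. A sketch with illustrative values -- the fields mirror the four items above, and nothing here is a required format:

```python
from dataclasses import dataclass, asdict

@dataclass
class CustomFinding:
    tested: str       # what the script tested
    result: str       # what it found
    verified_by: str  # how the finding was verified
    relates_to: str   # how it relates to the automated scan results

finding = CustomFinding(
    tested="shipments endpoint on port 3000 for buyer pricing fields",
    result="unit_price values returned to unauthenticated callers",
    verified_by="manual curl of the same endpoint",
    relates_to="not flagged by any default template -- custom check only",
)
print(asdict(finding))
```

Flagging the `relates_to` field explicitly makes it easy to spot the findings that only custom checks revealed.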
The gap between automated and custom findings is significant. Automated scanners check for known patterns across broad categories. Custom checks test specific conditions in specific systems. A professional assessment includes both -- the automated sweep for coverage and the targeted checks for depth.
Check: at least one custom NSE script executes successfully, its finding is verified through a second method (manual or cross-tool check), and the AI-generated script has been reviewed for correctness.