Step 1: Passive Reconnaissance
Before running any scans, look at what the application gives away for free. Direct Claude to examine the target's publicly visible information:
Look at http://localhost:8080. Check the HTTP response headers, the page source HTML, and any visible technology indicators. What can you learn about the application stack without sending any probing requests?
Claude will pull headers, read meta tags, look for generator strings, find JavaScript library references, and spot framework-specific file paths. Pay attention to what it reports. HTTP headers often include a Server line with the web server name and version. Page source may contain WordPress or WooCommerce references. These details map directly to the attack surface.
Passive reconnaissance generates minimal log activity. The defender sees normal-looking GET requests. This is why attackers start here -- it is cheap, quiet, and often gives away more than the target intends.
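The kind of fingerprinting Claude performs in this step can be sketched as a small parser. The sample headers and body below are hypothetical stand-ins, not captured from the lab target -- substitute the real response from http://localhost:8080:

```python
# Illustrative sketch: extract stack indicators from an HTTP response
# without sending any active probes. SAMPLE_RESPONSE_HEADERS and
# SAMPLE_BODY are fabricated examples of what a WordPress site might
# return; replace them with the real response data.
import re

SAMPLE_RESPONSE_HEADERS = {
    "Server": "Apache/2.4.58 (Debian)",
    "X-Powered-By": "PHP/8.2.12",
    "Link": '<http://localhost:8080/wp-json/>; rel="https://api.w.org/"',
}

SAMPLE_BODY = '<meta name="generator" content="WordPress 6.4.2" />'

def fingerprint(headers, body):
    """Collect technology indicators from a single normal-looking response."""
    findings = {}
    if "Server" in headers:
        findings["web_server"] = headers["Server"]
    if "X-Powered-By" in headers:
        findings["framework"] = headers["X-Powered-By"]
    # A wp-json Link header is a strong WordPress indicator.
    if "wp-json" in headers.get("Link", ""):
        findings["wordpress_api"] = True
    # Generator meta tags often name the CMS and its version outright.
    m = re.search(r'name="generator" content="([^"]+)"', body)
    if m:
        findings["generator"] = m.group(1)
    return findings

print(fingerprint(SAMPLE_RESPONSE_HEADERS, SAMPLE_BODY))
```

Everything this script learns comes from one ordinary GET request -- which is exactly why it leaves so little trace in the logs.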
Step 2: Version Detection with Nmap
In P1, you ran a basic Nmap scan -- port numbers and service names. This time, you are adding two flags that change what Nmap tells you.
The -sV flag enables version detection. Instead of just reporting that port 80 is running "http," Nmap probes the service and tries to identify the specific software and version -- Apache 2.4.58, for example. The -sC flag runs Nmap's default scripts, which perform targeted checks against each service: pulling HTTP titles, checking for common misconfigurations, retrieving MySQL protocol details.
Direct Claude to run the scan:
Run nmap -sV -sC against localhost, targeting ports 8080, 8081, and 3306. These are the ports listed in the scope document.
The output will be visibly richer than what you saw in P1. The port table now includes version strings and script output beneath each entry. Apache reports its version. MySQL reports protocol details. The HTTP title scripts pull the shop name from the page.
This extra detail costs something. Version detection sends more probes, and default scripts send recognizable request patterns. The information is valuable for the attacker, but the noise it creates is valuable for the defender. That trade-off is real, and it matters throughout the assessment.
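One way to see what -sV adds is to look at Nmap's grepable output (-oG), where the version field is empty without -sV and populated with it. The sample line below is illustrative, not a real capture, and the parser assumes version strings contain no embedded "/" characters:

```python
# Sketch: pull service and version fields out of an nmap -oG host line.
# SAMPLE_GREPABLE is a fabricated example of what nmap -sV -oG might
# report for the lab ports.
SAMPLE_GREPABLE = (
    "Host: 127.0.0.1 ()\tPorts: "
    "8080/open/tcp//http//Apache httpd 2.4.58 ((Debian))/, "
    "8081/open/tcp//http//Apache httpd 2.4.58 ((Debian))/, "
    "3306/open/tcp//mysql//MySQL 8.0.36/"
)

def parse_ports(line):
    """Parse the Ports: section of one grepable-output host line."""
    section = line.split("Ports:", 1)[1].strip()
    ports = {}
    for entry in section.split(", "):
        # Field order: port/state/protocol/owner/service/rpcinfo/version/
        fields = entry.split("/")
        port, state, _proto, _owner, service = fields[:5]
        version = fields[6] if len(fields) > 6 else ""
        ports[int(port)] = {"state": state, "service": service, "version": version}
    return ports

for port, info in parse_ports(SAMPLE_GREPABLE).items():
    # Without -sV, the version field would be empty -- that field is
    # the difference this step adds over the basic scan from P1.
    print(port, info["service"], info["version"] or "(no version data)")
```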
Step 3: Interpret the Service Banners
Look at the version strings in the Nmap output. AI will present these as facts -- "The server runs Apache 2.4.58 on Debian" -- but service banners are self-reported. The server tells Nmap what it is. Nmap reports what it was told.
Check the confidence values. Nmap assigns a confidence score to each version detection match -- in XML output (-oX) it appears as a conf attribute on a 0-10 scale. A conf of 10 on Apache is different from a conf of 3 on a custom service. AI commonly ignores these scores and treats every banner as ground truth. When you review Claude's scan summary, check whether it mentions confidence or treats every version string as definitive.
This matters for the next phase. If Claude says "the server runs MySQL 8.0.36" based on a banner, and you plan your exploitation around that version, you are trusting the server to tell you the truth. A hardened server could be running a different version entirely with a spoofed banner.
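A short script can surface those confidence scores instead of letting them get lost in the summary. The XML fragment below is a hypothetical example of nmap -sV -oX output, with the conf attribute on each service element:

```python
# Sketch: read version-detection confidence from nmap XML output and
# flag low-confidence matches for manual verification. SAMPLE_XML is
# a fabricated fragment, not a real scan capture.
import xml.etree.ElementTree as ET

SAMPLE_XML = """
<nmaprun>
  <host>
    <ports>
      <port protocol="tcp" portid="8080">
        <state state="open"/>
        <service name="http" product="Apache httpd" version="2.4.58" method="probed" conf="10"/>
      </port>
      <port protocol="tcp" portid="3306">
        <state state="open"/>
        <service name="mysql" product="MySQL" version="8.0.36" method="probed" conf="3"/>
      </port>
    </ports>
  </host>
</nmaprun>
"""

root = ET.fromstring(SAMPLE_XML)
results = []
for port in root.iter("port"):
    svc = port.find("service")
    conf = int(svc.get("conf", "0"))
    # Treat low-confidence matches as claims to verify, not facts.
    flag = "VERIFY" if conf < 8 else "ok"
    results.append((port.get("portid"), svc.get("product"), svc.get("version"), conf, flag))
    print(port.get("portid"), svc.get("product"), svc.get("version"), f"conf={conf}", flag)
```

Anything flagged VERIFY should be cross-checked against other evidence -- behavior, error messages, file paths -- before you plan exploitation around it.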
Step 4: Identify the Application Stack
From the passive and active reconnaissance combined, build a picture of the target. Direct Claude to summarize:
Based on the passive reconnaissance and the Nmap scan results, summarize the application stack. List each service with its version, the confidence level from Nmap, and what that service means for the assessment scope.
The summary should include the web server (Apache with version), the database (MySQL with version), the application framework (PHP with version if detected), and any WordPress/WooCommerce indicators found in the page source. The staging site on port 8081 should appear as a separate target running the same stack.
Map this information back to the TTP selection document. The scan results tell you what is running. The TTP selection tells you what to test for. Where those two meet -- a search form that accepts user input, an admin login panel, a review submission form -- that is where exploitation begins.
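That mapping can be kept as a simple data structure alongside the recon notes. The entry points and candidate techniques below are placeholders -- fill them in from your actual scope and TTP selection documents:

```python
# Illustrative mapping from recon findings to test targets. The ports
# match the scope document; the entry points and techniques are
# hypothetical examples, not the authoritative TTP selection.
recon = {
    8080: {"service": "http", "product": "Apache httpd", "app": "WordPress/WooCommerce"},
    8081: {"service": "http", "product": "Apache httpd", "app": "WordPress/WooCommerce (staging)"},
    3306: {"service": "mysql", "product": "MySQL"},
}

attack_surface = [
    # (port, entry point, candidate technique)
    (8080, "product search form", "SQL injection"),
    (8080, "review submission form", "stored XSS"),
    (8080, "admin login panel", "credential attacks"),
]

for port, entry, ttp in attack_surface:
    stack = recon[port].get("app", recon[port]["product"])
    print(f"{entry} on :{port} ({stack}) -> test for {ttp}")
```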
Step 5: Document the Reconnaissance Findings
Direct Claude to write up the reconnaissance findings:
Write a reconnaissance summary documenting: the scan command and flags used, each detected service with version and confidence level, a note about what version detection reveals that a basic port scan does not, and a list of potential attack surfaces mapped to the TTP selection document.
This documentation is not for the final report. It is the working reference you will come back to throughout the assessment. When you exploit a specific input field in Unit 3, you will trace it back to what the recon told you.
Professional reconnaissance documentation records the commands exactly as run, including flags, targets, and timestamps. This habit matters because someone reviewing your work -- or you, three hours from now -- needs to know exactly what you did and why.
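A minimal command log can be as simple as appending a timestamped line per command. This sketch assumes a working file named recon_log.txt; the file name and format are choices, not requirements:

```python
# Minimal sketch of a recon command log: one ISO-8601 UTC timestamp
# plus the exact command line per entry. recon_log.txt is an assumed
# file name for the working reference.
from datetime import datetime, timezone

def log_command(command, path="recon_log.txt"):
    """Append the command with a UTC timestamp; return the logged line."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    line = f"{stamp}  {command}\n"
    with open(path, "a") as fh:
        fh.write(line)
    return line

print(log_command("nmap -sV -sC -p 8080,8081,3306 localhost"))
```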
Step 6: Check the Defender's Logs
Switch to Grafana at http://localhost:3000. Run a Loki query scoped to the time window when you ran the Nmap scan. The application logs should show a burst of requests that look nothing like normal browsing.
In Grafana, query the application logs for the time window when the Nmap scan ran. Look for the scan traffic pattern -- rapid requests, unusual paths, scripted probes.
The -sC scripts generate distinctive log entries. You should see requests to paths like /robots.txt and /sitemap.xml, rapid-fire GET requests clustered within seconds, and user-agent strings that no real browser sends. Compare this to what normal traffic looks like -- a handful of page loads with natural timing.
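The timing difference between browsing and scanning is easy to quantify once you have request timestamps from the log query. The timestamps below are fabricated -- three normally paced page loads followed by a 30-request burst -- standing in for real Loki results:

```python
# Sketch: measure the peak request rate in a sliding time window to
# distinguish scan bursts from normal browsing. The timestamps are
# fabricated example data, not real log output.
from datetime import datetime, timedelta

# Three page loads with natural timing...
times = [datetime(2024, 1, 1, 12, 0, 0) + timedelta(seconds=s)
         for s in (0, 25, 60)]
# ...then 30 scripted requests 100 ms apart, as a scanner would send.
times += [datetime(2024, 1, 1, 12, 5, 0) + timedelta(milliseconds=100 * i)
          for i in range(30)]

def max_window_count(times, window=timedelta(seconds=5)):
    """Return the largest number of requests falling in any single window."""
    times = sorted(times)
    best, start = 0, 0
    for end in range(len(times)):
        while times[end] - times[start] > window:
            start += 1
        best = max(best, end - start + 1)
    return best

peak = max_window_count(times)
print(f"peak rate: {peak} requests in a 5s window"
      + (" -- scan-like burst" if peak >= 10 else ""))
```

Normal browsing peaks at a handful of requests per window; the scan stands out by an order of magnitude. That gap is what a detection rule would key on.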
This is the first moment of the purple team loop in P2. You just ran a scan as the attacker. Now you are looking at what that scan produced as the defender. The richness of -sV and -sC data is the attacker's advantage. The noisiness of that same scanning is the defender's advantage. Both sides see the same event differently.
✓ Check: Nmap with -sV returns version information for at least two services. The student can explain the difference between the version detection output and a basic port scan.