Step 1: Run passive reconnaissance
The scope document from Unit 1 lists what Andres told you about. Now find out what actually exists.
Start with passive reconnaissance against the cooperative's domain. Direct Claude to run DNS enumeration, subdomain discovery, and public record searches within the Docker environment. The developer in Caracas built the infrastructure over three years -- there may be cloud assets, staging environments, or forgotten services that Andres does not know about.
Run passive reconnaissance against the cooperative's infrastructure. Enumerate DNS records, discover subdomains, and look for any cloud assets (S3 buckets, staging environments) the developer may have left behind. Stay passive -- no active probing yet. Document what you find.
There is an important boundary here. Checking whether an S3 bucket exists by name is passive -- you are looking at public DNS records. Attempting to list the bucket's contents or read objects crosses into active testing. At this stage, stay on the passive side. Note what exists without interacting with it.
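The passive side of that boundary can be sketched as code. The sketch below assumes the standard virtual-hosted S3 naming scheme (`<bucket>.s3.amazonaws.com`); the wordlist and bucket-name guesses are illustrative, not the lab's actual names. The resolver is injectable so the logic can be exercised offline.

```python
import socket

WORDLIST = ["www", "staging", "dev", "api", "backup"]  # illustrative guesses

def resolves(host, resolve=socket.gethostbyname):
    """Passive: a lookup against public DNS records, nothing more."""
    try:
        resolve(host)
        return True
    except OSError:
        return False

def passive_sweep(domain, words=WORDLIST, resolve=socket.gethostbyname):
    """Note which candidate subdomains and S3 buckets exist -- without
    connecting to any of them. Listing a bucket's contents would be
    active testing and is out of scope at this stage."""
    found = {"subdomains": [], "buckets": []}
    for w in words:
        if resolves(f"{w}.{domain}", resolve):
            found["subdomains"].append(f"{w}.{domain}")
    base = domain.split(".")[0]
    for suffix in ("backups", "staging", "exports"):
        bucket = f"{base}-{suffix}"
        if resolves(f"{bucket}.s3.amazonaws.com", resolve):
            found["buckets"].append(bucket)
    return found
```

Everything here stays on the DNS layer; the moment you issue an HTTP request to a discovered bucket, you have crossed into active testing.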
Step 2: Map the network topology
Move inside the Docker environment. The passive reconnaissance tells you what is externally visible. Now map what is internally connected.
Map the network topology of the Docker environment. Which services are on the same network? What ports are exposed on each service? Can the fermentation sensors reach the member portal? Can any service reach the payment processor? Show me the full internal connectivity picture.
The results matter for scope. If the fermentation API shares a network with the member portal and both can reach the payment processor, a compromised sensor could pivot to farmer payment data. If they are isolated, the attack paths are different. Network topology determines what exploitation chains are possible.
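One way to reason about the topology is to reduce `docker network inspect` output to service pairs that share a network. The data below is a hypothetical, simplified view -- the network and service names are assumptions for illustration, not the actual lab topology.

```python
import itertools

# Assumed topology: which containers sit on which Docker networks.
networks = {
    "coop_internal": ["member-portal", "fermentation-api", "payment-processor"],
    "coop_sensors":  ["fermentation-api", "sensor-gateway"],
}

def shared_network_pairs(networks):
    """Every unordered pair of services on at least one common Docker
    network -- i.e. every direct pivot path worth examining."""
    pairs = set()
    for members in networks.values():
        for a, b in itertools.combinations(sorted(members), 2):
            pairs.add((a, b))
    return pairs

pairs = shared_network_pairs(networks)
```

In this assumed layout the sensor gateway can reach the fermentation API, and the fermentation API in turn shares a network with the payment processor: that two-hop chain is exactly the kind of path the topology map has to surface.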
Step 3: Investigate third-party integrations
The export tracking system uses a shipping API. Look at how that integration works.
Direct Claude to examine the export tracker's code and network connections to understand the shipping API integration. The question is not just "does it connect to an external service" -- it is "what data crosses that boundary and how."
AI is useful for tracing code paths and finding API calls. But AI commonly misses the security significance of what it finds. If buyer pricing appears in URL query parameters, AI may report the API integration as functional without flagging that URL parameters are logged by every proxy, CDN, and web server between the cooperative and the shipping provider.
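The classification step AI tends to skip can be made explicit. The parameter names below are assumptions for illustration; the point is that anything in a query string lands in every access log along the request path.

```python
from urllib.parse import urlparse, parse_qs

# Assumed names for sensitive fields in the shipping integration.
SENSITIVE = {"price", "unit_price", "buyer", "buyer_id", "contract_value"}

def leaked_in_query(url):
    """Return the sensitive parameter names this request would leak
    into proxy, CDN, and web-server logs on both sides."""
    params = parse_qs(urlparse(url).query)
    return sorted(SENSITIVE.intersection(params))
```

Running this over the API calls the export tracker actually makes turns "the integration is functional" into "the integration writes buyer pricing into third-party logs".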
Compare what you find with what Andres described. He knew about the export tracking system. He did not know how the shipping integration works. The gap between the client's understanding and the technical reality is the attack surface you need to document.
Step 4: Document and prioritize the attack surface
You have passive reconnaissance results, internal network topology, and third-party integration findings. Now bring them together into an attack surface map.
Prioritize by exploitability, not by the order you found things. A forgotten staging environment running outdated software is higher priority than a production service behind proper controls. The fermentation API with no authentication is more immediately exploitable than a theoretical DNS rebinding scenario.
Create an attack surface document. For each target, include: what it is, how it was discovered, what exposure it creates, and an exploitability rating. Order by exploitability -- highest first. Flag anything the client did not know about.
AI commonly orders findings by discovery sequence or alphabetically by name. A student who lets AI structure the attack surface map without imposing a risk-based ordering produces a document that reads like a scan log, not an assessment. Review the ordering and adjust it.
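The risk-based ordering is mechanical once each finding carries a rating. The findings and ratings below are illustrative; the structural point is that the sort key is exploitability, never discovery order or name.

```python
RATING = {"high": 0, "medium": 1, "low": 2}

# Assumed findings for illustration only.
findings = [
    {"target": "member portal", "exploitability": "medium", "known_to_client": True},
    {"target": "forgotten staging server", "exploitability": "high", "known_to_client": False},
    {"target": "DNS rebinding scenario", "exploitability": "low", "known_to_client": True},
    {"target": "fermentation API (no auth)", "exploitability": "high", "known_to_client": False},
]

def attack_surface_order(findings):
    """Highest exploitability first; within a tie, unknown-to-client
    findings lead, since those need explicit flagging."""
    return sorted(findings,
                  key=lambda f: (RATING[f["exploitability"]], f["known_to_client"]))
```

With these assumed ratings, the forgotten staging server and the unauthenticated fermentation API surface first, and the theoretical DNS rebinding scenario drops to the bottom -- matching the prioritization rule above.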
Update materials/scope-document-template.md with the full target list. The scope has likely grown since Unit 1 -- you now know about infrastructure Andres could not describe.
Step 5: Build the threat model
Open materials/threat-model-template.md. The STRIDE categories are empty.
In previous projects, the threat model was built from a provided description or a known network diagram. This time, the threat model is driven by what you discovered -- not what the client described. The threats you identify here determine which TTPs you select for active scanning in Unit 3.
Work through each STRIDE category with the cooperative's specific infrastructure in mind:
- Spoofing -- Can someone impersonate a buyer on the export tracker? A farmer on the member portal?
- Tampering -- Can someone modify fermentation sensor readings? Alter shipment records?
- Repudiation -- Are actions logged? Can someone deny they accessed pricing data?
- Information Disclosure -- Where is buyer pricing stored and transmitted? Where is farmer personal data?
- Denial of Service -- What happens if fermentation monitoring goes down during a critical processing window?
- Elevation of Privilege -- Can a farmer account access buyer data? Can an external party reach internal services?
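The filled template can be represented as a simple category-to-threats mapping, which also makes the completeness check at the end of this unit easy to automate. The example threats are paraphrased from the questions above; the wording is illustrative.

```python
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information Disclosure", "Denial of Service",
          "Elevation of Privilege"]

# Illustrative entries drawn from the questions above.
threat_model = {
    "Spoofing": ["Buyer impersonation on the export tracker"],
    "Tampering": ["Fermentation sensor readings modified in transit"],
    "Repudiation": ["Pricing-data access is unlogged, so it can be denied"],
    "Information Disclosure": ["Buyer pricing in shipping API query strings"],
    "Denial of Service": ["Monitoring outage during a critical processing window"],
    "Elevation of Privilege": ["Farmer account reaching buyer pricing data"],
}

def covered_categories(model):
    """Categories that contain at least one concrete threat."""
    return [c for c in STRIDE if model.get(c)]
```

An empty list under a category means that category is not yet covered -- useful when verifying the "at least four STRIDE categories" requirement before handing the model to Andres.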
Once the threat model is filled in, share the findings with Andres. He needs to understand that the cooperative's infrastructure extends beyond what he knew. When you explain that the fermentation sensors are computers on the office network, he will be surprised. When you explain the shipping API pricing exposure, he will immediately understand the business risk -- pricing relationships are the most sensitive part of his export business.
The threat model is now the engagement brief. Everything downstream -- which tools to run, which targets to scan first, how deep to exploit -- follows from the threats you identified.
Check: the attack surface map includes at least 1 finding the client did not know about, the threat model covers at least 4 STRIDE categories, and findings are ordered by exploitability, highest first.