Step 1: Direct passive reconnaissance
Before you scan anything, build a picture of what's there from public sources. Direct Claude to perform passive reconnaissance on Todorovi Wines' digital presence.
Search for Todorovi Wines online. Look for email addresses, employee information, technology stack indicators, and any publicly available documents. Extract metadata from any documents you find. Present all findings organized by category.
The AI produces results. Look at what comes back: it will likely be a flat list of data points -- email addresses, names, technology mentions, document metadata -- all presented with equal weight. Some of this is useful intelligence; some is noise. Right now it is data, not intelligence.
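One way to start turning that flat list into something usable is a quick normalisation pass over the raw text. A minimal sketch, assuming the output is plain text and using an illustrative (not authoritative) list of role-based local parts -- the example domain is hypothetical:

```python
import re

# Common role-based local parts; extend with whatever the engagement surfaces.
ROLE_ACCOUNTS = {"info", "orders", "sales", "support", "admin", "api-support"}

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def classify_emails(text: str) -> dict:
    """Split addresses found in raw recon text into personal vs role-based."""
    personal, role_based = [], []
    for addr in sorted(set(EMAIL_RE.findall(text))):
        local = addr.split("@")[0].lower()
        (role_based if local in ROLE_ACCOUNTS else personal).append(addr)
    return {"personal": personal, "role_based": role_based}
```

Role-based addresses hint at public-facing systems; personal addresses reveal the naming convention. Both feed the target profile in the next step.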
Step 2: Impose structure on the results
The difference between data collection and intelligence analysis is structure and prioritisation. The AI gave you everything it found. Your job is to decide what matters.
Create a target profile document. Organise the findings into categories:
Personnel. Who works at the winery? What are their roles? Do email naming conventions reveal the internal pattern (firstname.lastname@ vs first initial + lastname@)? Role-based accounts (info@, orders@, api-support@) indicate what systems face the public.
Technology stack. What technologies are confirmed from public sources? WordPress + WooCommerce for the consumer side (Dimitar mentioned this) and a REST API for wholesale (also mentioned). Who hosts the site? Are framework versions visible in HTTP headers or page source?
Potential entry points. Which findings suggest attack vectors? An exposed API endpoint in a sitemap. A login page with no sign of rate limiting. A document whose metadata reveals internal file paths.
The AI treats every finding equally. The target profile highlights what would actually enable an attack and explains why.
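One way to enforce that discipline is to make a significance note a required field, so nothing enters the profile as bare data. A sketch, with hypothetical example findings drawn from the categories above:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    category: str      # "personnel" | "technology" | "entry_point"
    detail: str        # the raw data point the AI returned
    significance: str  # your analysis: why it matters for the assessment

@dataclass
class TargetProfile:
    findings: list[Finding] = field(default_factory=list)

    def add(self, category: str, detail: str, significance: str) -> None:
        self.findings.append(Finding(category, detail, significance))

    def entry_points(self) -> list[Finding]:
        """The findings that suggest attack vectors -- what the plan hangs on."""
        return [f for f in self.findings if f.category == "entry_point"]

profile = TargetProfile()
profile.add("technology", "WordPress + WooCommerce on consumer site",
            "WordPress-specific vulnerability classes apply")
profile.add("entry_point", "wholesale REST API endpoint listed in sitemap",
            "publicly discoverable API surface")
```

The structure is the point, not the code: a finding without a significance note is raw data and does not belong in the profile.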
Step 3: Connect intelligence to planning
The target profile is not an end in itself. It feeds the threat model (next unit) and determines which ATT&CK techniques are relevant.
Look at your findings. If the API uses key-based authentication, credential attacks against API keys are relevant. If the consumer platform uses WordPress, WordPress-specific vulnerabilities are relevant. If document metadata revealed internal paths, file inclusion techniques may apply. The intelligence determines the plan -- not the other way around.
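That conditional logic ("if X is confirmed, technique Y is relevant") can be written down directly, which keeps the plan traceable back to evidence. A sketch in which the finding keys and the technique pairings are assumptions for this engagement, not a general-purpose catalogue:

```python
# Maps confirmed findings to candidate ATT&CK techniques. Illustrative only;
# the real mapping comes from your own target profile.
TECHNIQUE_MAP = {
    "api_key_auth": [("T1110", "Brute Force (against API keys)")],
    "wordpress": [("T1190", "Exploit Public-Facing Application")],
    "metadata_internal_paths": [("T1190", "Exploit Public-Facing Application (file inclusion)")],
}

def plan_from_findings(findings: list[str]) -> list[tuple[str, str]]:
    """The intelligence determines the plan: only confirmed findings add techniques."""
    techniques: list[tuple[str, str]] = []
    for finding in findings:
        techniques.extend(TECHNIQUE_MAP.get(finding, []))
    return techniques
```

An unconfirmed finding contributes nothing, which is exactly the property you want: no technique enters the plan without a line of evidence behind it.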
Note which STRIDE threats your findings suggest. You will build the full threat model next, but the connection starts here.
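Those notes can live alongside the findings as first-pass STRIDE tags, to be expanded in the full threat model. The pairings below are illustrative assumptions, not a finished analysis:

```python
# First-pass STRIDE notes per finding key; refine these in the threat model.
STRIDE_NOTES = {
    "api_key_auth": ["Spoofing"],                          # stolen or guessed keys impersonate clients
    "login_no_rate_limit": ["Spoofing"],                   # enables credential brute force
    "metadata_internal_paths": ["Information Disclosure"], # leaked server-side structure
}

def stride_summary(findings: list[str]) -> list[str]:
    """Collect the distinct STRIDE categories the confirmed findings suggest."""
    return sorted({tag for f in findings for tag in STRIDE_NOTES.get(f, [])})
```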
Step 4: Create the engagement memory
Create an engagement memory file (update CLAUDE.md or create a separate file) that carries the target profile, scope boundaries, and key findings between sessions. Include:
- Scope boundaries from the scope document
- Target profile summary (personnel, technology, entry points)
- Key findings and what they mean for the assessment
- Engagement phase (currently: passive reconnaissance complete)
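A skeleton for the memory file might look like the following. Every bracketed item is a placeholder to fill from your own findings, not confirmed engagement data:

```markdown
# Engagement memory: Todorovi Wines

## Scope boundaries
- In scope: [systems from the scope document]
- Out of scope: [exclusions, copied verbatim from the scope document]

## Target profile summary
- Personnel: [names, roles, email naming convention]
- Technology stack: WordPress + WooCommerce (consumer), REST API (wholesale), [hosting, versions]
- Potential entry points: [each with a one-line significance note]

## Key findings
- [finding]: [what it means for the assessment]

## Engagement phase
Passive reconnaissance complete. Next: threat model.
```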
Then test it. Start a fresh Claude session. Does the AI reference the scope boundaries without being re-told? Does it know the target's technology stack? Does it remember which systems are in scope?
The difference between a session that starts with good infrastructure and one that starts from scratch is measurable. This is the first concrete experience of infrastructure determining output quality.
Step 5: Evaluate context quality
Two students using the same AI on the same engagement -- one who wrote a thorough engagement memory file, one who typed "scan the winery" -- produce measurably different results. The variable is the context provided, not the model's capability.
This is worth understanding now because it becomes more important with every project. The quality of what the AI "knows" when it starts a session determines the quality of what it produces.
Check: Target profile contains at least three intelligence categories (personnel, technology stack, potential entry points) with analysis, not just raw data. Engagement memory file loads in a fresh session and AI references scope boundaries without being re-told.