Learn by Directing AI
Unit 2

Passive Reconnaissance

Step 1: Certificate transparency

Before you send a single packet to Samir's server, find out what is already public about it.

Certificate transparency logs are a public record of TLS certificates issued for a domain. Browsers require certificates to be logged so that misissued certificates can be detected. The side effect: anyone can query these logs and discover subdomains the domain owner never intended to make public.

Direct Claude to search crt.sh for Samir's domain. crt.sh is a certificate transparency log search engine -- you provide a domain name, and it returns every logged certificate for that domain and its subdomains.

Search crt.sh for certificates issued for Samir's domain. List all unique subdomains found.

Review what comes back. Look for subdomains that suggest staging environments, development servers, API endpoints, or internal tools. Each subdomain is a potential attack surface -- but not all of them are in your scope. A staging server discovered through certificate transparency is interesting, but if it is not listed in the scope document, you report its existence to Samir rather than testing it.
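If you want to see what Claude is doing under the hood, crt.sh exposes a JSON endpoint (`https://crt.sh/?q=%25.<domain>&output=json`). The sketch below deduplicates subdomains from that response; the domain and function names are illustrative, not part of the engagement.

```python
import json
import urllib.request

def fetch_crtsh(domain: str) -> list[dict]:
    """Query crt.sh's JSON endpoint for certificates matching a domain."""
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def extract_subdomains(records: list[dict]) -> list[str]:
    """Collect unique hostnames from crt.sh results.

    Each record's name_value field can hold several newline-separated
    names; wildcard entries like *.example.com are kept as-is so you
    can see where wildcards were issued.
    """
    names = set()
    for record in records:
        for name in record.get("name_value", "").splitlines():
            names.add(name.strip().lower())
    return sorted(names)

# Canned response so the parsing logic runs without network access:
sample = [
    {"name_value": "www.example.com\nstaging.example.com"},
    {"name_value": "api.example.com"},
    {"name_value": "www.example.com"},
]
print(extract_subdomains(sample))
# → ['api.example.com', 'staging.example.com', 'www.example.com']
```

Note the duplicate `www.example.com` collapsing to one entry -- certificate renewals mean the same name appears in many certificates.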

Step 2: Shodan and Censys

Shodan and Censys are search engines for internet-connected devices. They continuously scan IP ranges and index what they find -- open ports, running services, software versions, TLS configurations. When you query them, you see what they recorded the last time they scanned that address.

That distinction matters. Shodan results are historical snapshots, not live data. A port listed as open in Shodan may have been closed since the last scan. A service banner may reflect a version that has since been updated. The timestamp on each result tells you when Shodan last looked -- and that gap between "last scanned" and "right now" means passive intelligence is a hypothesis, not a confirmation.

Direct Claude to search Shodan for Samir's server IP.

Search Shodan for the target server IP. Note all discovered ports, services, and technologies. Pay attention to the "last seen" timestamp on each result.

Review the results: open ports, service banners, detected technologies, operating system guesses. AI will present everything it finds as equally significant. Your job is to separate what matters for this engagement from what does not. A port that is in scope gets noted for active scanning. A service that is out of scope gets documented and reported to Samir.

Step 3: Google dork operators

Google dork operators turn a search engine into a reconnaissance tool. Operators like site:, filetype:, inurl:, and intitle: let you find indexed pages, configuration files, admin panels, error messages, and backup files that a standard search would not surface.
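These operators compose into query strings mechanically, which is why Claude can generate so many of them. A sketch of the composition (the templates and domain are illustrative, not a recommended checklist):

```python
def build_dorks(domain: str) -> list[str]:
    """Combine common dork operators with a target domain.

    Each query restricts results to the target with site: and then
    narrows by file type, URL path, or page title.
    """
    templates = [
        "site:{d} filetype:env",          # exposed environment files
        "site:{d} filetype:sql",          # database dumps
        "site:{d} filetype:bak",          # backup files
        "site:{d} inurl:admin",           # admin interfaces
        'site:{d} intitle:"index of"',    # open directory listings
    ]
    return [t.format(d=domain) for t in templates]

for query in build_dorks("example.com"):
    print(query)
# First line printed: site:example.com filetype:env
```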

Direct Claude to run targeted Google dork searches for the platform.

Run Google dork searches for Samir's domain. Use site:, filetype:, inurl:admin, and intitle: operators to find indexed configuration files, admin interfaces, error pages, or backup files.

AI will generate an extensive list of dork queries and present every result as a finding. Some results will be irrelevant. Some will be for resources outside your scope. The skill is filtering: which results map to the engagement, and which are noise? A Google-indexed admin page for the ordering platform is in scope. An old cached page from an unrelated domain is not.

Step 4: Map findings to scope

You now have results from three passive sources: certificate transparency, Shodan/Censys, and Google dorks. The raw results are data. Mapping them against the scope document turns data into intelligence.

Open materials/scope-document.md alongside your findings. For each discovery, categorize it:

  • In scope: Maps to a target listed in the scope document. Will inform your active scanning strategy.
  • Out of scope but noteworthy: Exists but is not listed in the scope. Report to Samir, do not investigate.
  • Noise: Not relevant to the engagement.
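The three categories above amount to a triage function. A minimal sketch, assuming the scope document lists exact hostnames and the client owns one related domain (all values here are placeholders):

```python
# Placeholder values standing in for materials/scope-document.md:
IN_SCOPE = {"www.example.com", "api.example.com"}

def classify(host: str, related_domain: str = "example.com") -> str:
    """Triage a passive-recon discovery against the scope list."""
    if host in IN_SCOPE:
        return "in-scope"
    if host == related_domain or host.endswith("." + related_domain):
        return "out-of-scope-noteworthy"  # report to the client, do not test
    return "noise"

print(classify("api.example.com"))       # → in-scope
print(classify("staging.example.com"))   # → out-of-scope-noteworthy
print(classify("cdn.unrelated.net"))     # → noise
```

The middle branch is the one that matters professionally: it is the machine-readable version of "tell Samir it exists, do not touch it."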

If you discovered a subdomain or service not in the original scope, that creates a professional obligation. You tell Samir it exists. You do not test it. The difference between "I found this exists" and "I tested its security" is a legal boundary. AI will not enforce this distinction for you -- it will happily investigate anything it can reach.

Step 5: Document the intelligence summary

Compile your passive findings into a structured document. This is not a list of raw tool output -- it is an intelligence summary that feeds your active scanning decisions in the next unit.

Compile passive reconnaissance findings into a structured intelligence summary. For each discovery, include the source (crt.sh, Shodan, Google), what was found, the scope classification (in-scope, out-of-scope, noise), and what it means for the active scanning plan.

The intelligence summary should answer three questions: What is publicly visible about Samir's infrastructure? What does each discovery mean for the assessment? And what should you scan first when you move to active reconnaissance?
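One lightweight structure for that summary is a record per finding rendered as a table, with in-scope items sorted to the top so the active-scanning plan reads first. The field names and sample data below are illustrative, not a required format:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    source: str       # crt.sh, Shodan, or Google
    discovery: str    # what was found
    scope: str        # in-scope / out-of-scope / noise
    implication: str  # what it means for active scanning

def summarize(findings: list[Finding]) -> str:
    """Render findings as a Markdown table, in-scope entries first."""
    order = {"in-scope": 0, "out-of-scope": 1, "noise": 2}
    rows = sorted(findings, key=lambda f: order.get(f.scope, 3))
    lines = ["| Source | Discovery | Scope | Implication |",
             "|---|---|---|---|"]
    lines += [f"| {f.source} | {f.discovery} | {f.scope} | {f.implication} |"
              for f in rows]
    return "\n".join(lines)

print(summarize([
    Finding("Google", "cached page, unrelated domain", "noise", "ignore"),
    Finding("crt.sh", "api.example.com", "in-scope", "scan first"),
]))
```

Whatever format you use, each row should trace back to its source -- an intelligence summary you cannot verify is just a list of claims.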

Refer back to materials/ttp-selection-guide.md -- the passive reconnaissance category describes exactly this workflow: build a map of the target's exposure before sending any traffic.

✓ Check

Check: Certificate transparency search returns at least one result. Passive findings are categorized as in-scope versus out-of-scope. The student can explain why Shodan results might not reflect the current state.