
TTP Selection Guide

Engagement: Kabylie Gold -- container security and reconnaissance depth assessment

Purpose: This guide describes testing categories for the assessment. It does not enumerate every specific test. Use these categories to design your approach, selecting specific tests based on what you discover during reconnaissance.

Testing categories

1. Passive reconnaissance

Objective: Build a map of the target's exposure before sending any traffic to the system.

  • Certificate transparency: Query public certificate logs (crt.sh) to discover subdomains and services the target may not have intended to be publicly known. Certificates are public by design, but the subdomain enumeration they enable is a side effect.
  • Infrastructure discovery: Use Shodan and Censys to find what these services have recorded about the target's server -- open ports, running services, technologies detected. These results are historical snapshots, not live data.
  • Targeted search: Use Google dork operators to find publicly indexed information about the platform -- configuration files, documentation, admin interfaces, error pages, backup files.
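Certificate transparency lookups can be scripted against crt.sh, which returns a JSON list of certificate records when queried with `?q=%.<domain>&output=json`; each record's `name_value` field holds one or more newline-separated subject names. A minimal sketch of extracting unique subdomains from such a response (the sample data below is hypothetical):

```python
import json

def extract_subdomains(crtsh_json: str, domain: str) -> set:
    """Collect unique hostnames for `domain` from a crt.sh JSON response.

    Each record's "name_value" field may hold several newline-separated
    subject names, including wildcard entries.
    """
    names = set()
    for record in json.loads(crtsh_json):
        for name in record.get("name_value", "").splitlines():
            name = name.strip().lower().lstrip("*.")  # drop wildcard prefix
            if name == domain or name.endswith("." + domain):
                names.add(name)
    return names

# Truncated sample response (illustrative, not real data):
sample = ('[{"name_value": "www.example.com\\ndev.example.com"},'
          ' {"name_value": "*.example.com"}]')
print(sorted(extract_subdomains(sample, "example.com")))
# ['dev.example.com', 'example.com', 'www.example.com']
```

Deduplicating here matters: crt.sh returns one record per certificate, so a long-lived subdomain appears many times.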

Map all passive findings against the scope document. Discoveries outside the scope must be reported to the client, not investigated.
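The scope check above is mechanical enough to automate. A sketch, assuming scope is expressed as a list of in-scope domains (the hostnames below are placeholders):

```python
def partition_by_scope(hosts, in_scope_domains):
    """Split discovered hostnames into in-scope and out-of-scope sets.

    A host is in scope if it equals an in-scope domain or is a
    subdomain of one. Out-of-scope hosts are reported, never tested.
    """
    def in_scope(host):
        return any(host == d or host.endswith("." + d) for d in in_scope_domains)
    inside = {h for h in hosts if in_scope(h)}
    return inside, set(hosts) - inside

hosts = ["app.example.com", "cdn.partner-site.net", "example.com"]
inside, outside = partition_by_scope(hosts, ["example.com"])
print(sorted(inside))   # candidates for active testing
print(sorted(outside))  # report to the client, do not investigate
```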

2. Active reconnaissance

Objective: Identify all accessible services, their versions, and their configurations across both TCP and UDP.

  • Multi-protocol scanning: Scan both TCP and UDP ports. TCP provides reliable results. UDP is inherently less reliable -- an "open|filtered" result is ambiguous, not a confirmed open port. Choose your scan strategy deliberately: which ports, which protocols, which timing.
  • OS detection: Use Nmap's OS fingerprinting to identify the operating system. Results are probabilistic -- match percentages, not definitive identification.
  • Content discovery: Use ffuf or similar tools to discover hidden directories, files, and endpoints on the web application.
  • Scan timing: Choose your timing profile deliberately. Aggressive timing is faster but generates distinctive traffic patterns in the logs. Polite timing is slower but harder to detect. Your detection rules will need to handle whichever pattern you create.
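The UDP ambiguity noted above is worth handling explicitly when triaging results. A minimal sketch that parses the `Ports:` field of one line of Nmap grepable output (`-oG`) and flags UDP `open|filtered` results as unconfirmed; the host and port values are illustrative:

```python
def parse_grepable_ports(line: str):
    """Parse the Ports: field of one nmap grepable (-oG) output line.

    Returns (port, protocol, state, confirmed) tuples. UDP results in
    the "open|filtered" state are ambiguous, so they are marked
    unconfirmed and need follow-up (e.g. a service-specific probe).
    """
    results = []
    if "Ports:" not in line:
        return results
    for entry in line.split("Ports:", 1)[1].split(","):
        parts = entry.strip().split("/")
        if len(parts) < 3:
            continue
        port, state, proto = parts[0], parts[1], parts[2]
        confirmed = not (proto == "udp" and state == "open|filtered")
        results.append((int(port), proto, state, confirmed))
    return results

line = ("Host: 203.0.113.10 ()\t"
        "Ports: 22/open/tcp//ssh///, 53/open|filtered/udp//domain///")
for port, proto, state, confirmed in parse_grepable_ports(line):
    tag = "" if confirmed else "  (unconfirmed -- needs follow-up)"
    print(f"{port}/{proto} {state}{tag}")
```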

3. Web application testing

Objective: Test whether buyer data, trade terms, and authentication mechanisms are properly protected.

  • Authentication: Test login mechanisms for weaknesses -- default credentials, session management, password policies.
  • Input handling: Test all user-facing input fields for injection and cross-site scripting vulnerabilities.
  • API security: Test API endpoints for authentication requirements, access controls, and data exposure.
  • Information disclosure: Check for exposed configuration, verbose error messages, server version information.
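The information-disclosure check lends itself to a simple header sweep. A sketch that flags version-leaking response headers; the header list and the sample response are illustrative, not exhaustive:

```python
# Headers that commonly leak implementation details (illustrative list).
DISCLOSURE_HEADERS = ("Server", "X-Powered-By", "X-AspNet-Version")

def find_disclosures(headers: dict) -> list:
    """Flag response headers that reveal server or framework versions."""
    findings = []
    for name in DISCLOSURE_HEADERS:
        value = headers.get(name)
        if value:
            findings.append(f"{name}: {value}")
    return findings

# Hypothetical response headers captured from the target:
resp = {"Server": "nginx/1.18.0", "Content-Type": "text/html"}
print(find_disclosures(resp))  # ['Server: nginx/1.18.0']
```

A bare `Server: nginx` header is low risk; a full version string is what turns the finding into something actionable, since it can be matched against known CVEs.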

4. Container security testing

Objective: Assess whether the Docker container infrastructure introduces risk beyond the web application itself.

  • Runtime configuration: Check container user permissions (root vs non-root), filesystem modes (read-only vs writable), resource limits, network exposure.
  • Base image security: Scan container images for known vulnerabilities using Trivy. Check whether images use pinned versions or mutable tags.
  • Build-time security: Review Dockerfiles for embedded secrets, exposed ports, unnecessary packages, and build-time decisions that persist in image history.
  • Monitoring exposure: Assess whether monitoring tools (Grafana, Loki) are accessible without authentication and whether they expose internal system information.
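Several of the build-time checks above (mutable tags, embedded secrets, root user) can be caught with a line-level pass over the Dockerfile before running a full scanner like Trivy. A heuristic sketch, not a replacement for a real scanner; the sample Dockerfile is hypothetical:

```python
import re

def lint_dockerfile(text: str) -> list:
    """Flag common build-time risks in a Dockerfile (heuristic, not exhaustive)."""
    findings = []
    for n, line in enumerate(text.splitlines(), 1):
        stripped = line.strip()
        parts = stripped.split()
        if parts and parts[0].upper() == "FROM" and len(parts) >= 2:
            image = parts[1]
            if ":" not in image or image.endswith(":latest"):
                findings.append(f"line {n}: mutable base image tag ({image})")
        if re.match(r"(?i)^(ENV|ARG)\s+\S*(PASSWORD|SECRET|TOKEN|KEY)", stripped):
            findings.append(f"line {n}: possible embedded secret")
        if re.match(r"(?i)^USER\s+root\b", stripped):
            findings.append(f"line {n}: container runs as root")
    return findings

dockerfile = """FROM python:latest
ENV API_SECRET=changeme
USER root
"""
for finding in lint_dockerfile(dockerfile):
    print(finding)
```

Note that `ARG` secrets persist in image history even though they are absent from the final filesystem, which is why the check covers both `ENV` and `ARG`.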

Priority guidance

Start passive, move to active, then test what matters most for buyer data protection and compliance. Container hardening is a separate assessment domain that follows web application testing. The final report must address both domains in terms that satisfy French food safety digital security requirements.

Note

This guide describes categories. You decide which specific tests to run within each category based on what your reconnaissance reveals. This is different from the P2 engagement, where specific vulnerability types were listed for you to test.