Learn by Directing AI
Unit 8

Benchmarks and Compliance

Step 1: Introduce CIS Docker Benchmark

CIS Benchmarks are codified expert consensus -- a baseline for what "hardened" means. The CIS Docker Benchmark covers host configuration, daemon configuration, container images, and runtime settings. Each item is a specific, auditable check with a pass/fail/not-applicable result.

Open materials/cis-docker-reference.md. This is a curated subset of the full benchmark -- 15 items relevant to this lab environment. The full benchmark has over 200 items. AI will try to apply all 200 indiscriminately. Your job is to determine which items apply to Dimitar's single-purpose winery containers and which do not.

Run the CIS Docker Benchmark assessment against the Docker environment. For each item in the reference, run the audit command and record the result. Classify each as pass, fail, or not applicable.

Not every benchmark item applies. Item 1.1 relates to the host kernel -- in a lab using a shared Docker host, that is outside your control. Item 4.5 (non-root USER in container images) would have prevented the container privilege escalation you exploited. Item 2.14 may not apply if the containers never need network access to one another. The "not applicable" decisions are where professional judgment lives.
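To make the shape of a single audit check concrete, here is a sketch of the CIS 4.5 (non-root USER) logic applied to `docker inspect` output. The container names and config values below are hypothetical sample data, not output from the lab environment -- in the lab you would capture the real JSON with `docker inspect $(docker ps -q)`:

```python
import json

# Hypothetical excerpts of `docker inspect` output for two containers.
SAMPLE_INSPECT = json.loads("""
[
  {"Name": "/winery-web", "Config": {"User": ""}},
  {"Name": "/winery-db",  "Config": {"User": "postgres"}}
]
""")

def audit_cis_4_5(container):
    """CIS 4.5: a non-root user should be set for the container image.

    An empty User, "root", or "0" means the container runs as root -> fail.
    """
    user = container["Config"].get("User", "")
    status = "fail" if user in ("", "root", "0") else "pass"
    return container["Name"], status

results = dict(audit_cis_4_5(c) for c in SAMPLE_INSPECT)
print(results)  # /winery-web fails (runs as root); /winery-db passes
```

The same pattern -- run the audit command, extract the relevant field, classify -- repeats for each item in the reference; only the field and the pass condition change.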

Step 2: Apply OWASP ASVS

The OWASP Application Security Verification Standard (ASVS) is the application-level equivalent of CIS Benchmarks. Where CIS covers infrastructure, ASVS covers the application itself -- authentication, session management, input validation, error handling, API security.

Open materials/owasp-asvs-reference.md. This curated subset contains Level 1 requirements relevant to the web platform and the API. Level 1 is the baseline -- the minimum for any internet-facing application.

Apply the OWASP ASVS Level 1 requirements from the reference to both the web platform and the API. For each item, determine whether the application passes or fails. Map your exploitation findings to specific ASVS failures.

The mapping is direct. The SQL injection you confirmed in Unit 5 is evidence of a V5 (Validation) failure. The BOLA vulnerability in Unit 6 is evidence of a V13 (API) failure. Your exploitation findings are not separate from the compliance assessment -- they are evidence within it.
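That mapping can be kept as a simple lookup table. The finding identifiers and item titles below are illustrative placeholders -- use the actual item IDs from materials/owasp-asvs-reference.md in the real assessment:

```python
# Illustrative mapping of exploitation findings to ASVS Level 1 chapters.
# IDs and titles are placeholders, not the lab's actual reference entries.
FINDING_TO_ASVS = {
    "sql-injection-search": ("V5", "Validation, Sanitization and Encoding"),
    "bola-order-api":       ("V13", "API and Web Service"),
}

def compliance_reference(finding_id):
    """Render a finding as evidence of a specific ASVS failure."""
    chapter, title = FINDING_TO_ASVS[finding_id]
    return f"{finding_id}: evidence of ASVS {chapter} ({title}) failure"

print(compliance_reference("sql-injection-search"))
print(compliance_reference("bola-order-api"))
```

Keeping the mapping as data rather than prose makes it easy to reuse in both the compliance table and the final report.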

Step 3: Map existing remediations

Some items that fail the benchmark assessment have already been remediated in Unit 7. Others are new findings that emerged only through the benchmark lens.

Cross-reference the CIS and ASVS assessment results with the remediation log from Unit 7. Which failing items have already been fixed? Which are new findings? Prioritize the new findings using the threat model.
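The cross-reference itself is set arithmetic. The item IDs here are hypothetical -- substitute the failing items from your assessment and the entries from your Unit 7 remediation log:

```python
# Hypothetical item IDs for illustration only.
failing_items    = {"CIS-4.5", "CIS-5.25", "CIS-5.31", "ASVS-V5.3.4"}
remediated_unit7 = {"CIS-4.5", "ASVS-V5.3.4"}

already_fixed = failing_items & remediated_unit7   # fixed in Unit 7
new_findings  = failing_items - remediated_unit7   # surfaced by the benchmark

print(sorted(already_fixed))  # ['ASVS-V5.3.4', 'CIS-4.5']
print(sorted(new_findings))   # ['CIS-5.25', 'CIS-5.31']
```

The `new_findings` set is what you then prioritize against the threat model.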

The benchmark may surface risks you did not test during exploitation. A CIS item about container runtime privileges might reveal an exposure your threat model did not anticipate. An ASVS item about error handling might highlight information leakage you did not test for. The benchmark catches what manual testing missed.

Step 4: Document compliance status

For each benchmark item, document the status with reasoning. Pass, fail, or not applicable -- but the "not applicable" items need justification. "Not applicable because this lab uses a shared Docker host and the host kernel is not under assessment" is a professional judgment. "Not applicable" without explanation is a gap.

Create a compliance documentation table for both CIS and ASVS. For each item: number, title, status (pass/fail/N-A), evidence or justification, and remediation recommendation for failing items.

AI will produce compliance documentation that marks items as pass or fail without reasoning. The reasoning is the professional contribution. Why does this item not apply? What evidence supports the pass? What remediation would address the failure? The documentation quality determines whether Dimitar's next IT hire can act on the assessment.
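One way to enforce the reasoning requirement is to make the documentation format itself reject rows that lack it. This is a sketch of that idea, not a prescribed tool -- the field names are assumptions:

```python
def compliance_row(item_id, title, status, evidence="", remediation=""):
    """Build one row of the compliance table.

    Enforces the rule from this step: "not applicable" without a written
    justification is a gap, and a failing item needs a remediation
    recommendation.
    """
    if status == "n/a" and not evidence.strip():
        raise ValueError(f"{item_id}: 'not applicable' requires justification")
    if status == "fail" and not remediation.strip():
        raise ValueError(f"{item_id}: failing item needs a remediation recommendation")
    return {"item": item_id, "title": title, "status": status,
            "evidence": evidence, "remediation": remediation}

row = compliance_row(
    "CIS-1.1", "Host configuration", "n/a",
    evidence="Shared lab Docker host; the host kernel is outside assessment scope.")
print(row["status"])  # n/a -- accepted because a justification is present
```

An undocumented "n/a" raises an error instead of silently landing in the table, which is exactly the gap described above.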

Step 5: Map findings to the report

Compliance language makes recommendations carry more weight. "The container runs as root" is a finding. "Violation of CIS Docker Benchmark 4.5: Ensure a user for the container has been created" is the same finding with the weight of industry consensus behind it. When Dimitar's developer reads the report, CIS item numbers tell them this is a recognized standard, not an opinion.

Map all confirmed exploitation findings to their corresponding CIS and ASVS items. Each finding in the report should reference the benchmark item it violates, where applicable.

Not every finding maps to a benchmark item -- custom business logic vulnerabilities exist outside compliance frameworks. But where the mapping exists, use it. The final report will include both the technical finding and the compliance reference.
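A minimal sketch of that report formatting, using placeholder finding titles -- note the second finding deliberately has no benchmark mapping:

```python
# Illustrative report entries; titles and references are placeholders.
findings = [
    {"title": "Container runs as root",
     "benchmark": "CIS Docker Benchmark 4.5"},
    {"title": "Discount code can be applied twice",
     "benchmark": None},  # business-logic flaw, no framework mapping
]

def report_line(finding):
    """Append the compliance reference only where a mapping exists."""
    line = finding["title"]
    if finding["benchmark"]:
        line += f" (violation of {finding['benchmark']})"
    return line

for f in findings:
    print(report_line(f))
```

The business-logic finding stands on its technical evidence alone; the mapped finding carries the weight of the standard as well.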

✓ Check

Check: At least three CIS items classified as "not applicable" with documented justification. At least two exploitation findings mapped to specific ASVS items. Compliance documentation distinguishes pass, fail, and not applicable with reasoning.