The Brief
Tshering Pem is the Director of Digital Services at the Bhutan Tourism Council -- the government body responsible for tourism development, regulation, and promotion. The Council operates three digital portals: a Tourism Services Portal for tourists, a Guide Management System for guide licensing and credentials, and an Internal Operations Platform for staff communications and document management.
The Royal Government has issued Directive 2026/CS-04 requiring all agencies with public-facing digital services to implement continuous security monitoring and undergo an independent assessment by the end of the fiscal quarter. The Council has server logs but no centralized system to analyze them. Different vendors built different portals at different times. Tshering knows the systems store tourist passport numbers, guide credentials, and internal communications. She knows the deadline. She does not know the technical details of how her systems connect to each other.
She sends you a formal memorandum requesting the assessment.
Your Role
You are conducting a security assessment for a government agency and deploying a second monitoring system alongside the Grafana/Loki stack you already know. The assessment covers three portals, their interconnections, and the monitoring infrastructure itself. The compliance report at the end goes to a government ministry review committee.
The approach carries over from P7. Templates give you structure for the scope document, engagement memory, detection rules, and compliance report; no guides walk you through the work. You decide the assessment approach, the deployment sequence, the tuning thresholds, and what goes in the report. What changes is the terrain -- you are building infrastructure, not just running tools against targets.
What's New
Last time you designed a multi-target security assessment for a coffee cooperative, mapped an attack surface across five services, performed multi-layer exploitation, managed cross-tool correlation, and produced a multi-audience report.
A second SIEM. Wazuh is a categorically different system from Loki. Loki stores logs and makes them queryable. Wazuh collects logs, applies rules, correlates events, maps to ATT&CK, and generates compliance reports. Deploying both is not redundant -- it is an architecture decision. The challenge is making detection rules work across both platforms when they handle the same data differently.
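To make the architectural difference concrete, here is a hedged sketch of what Wazuh's rule layer looks like -- a custom rule in the `local_rules.xml` format. The rule ID, the parent `<if_sid>`, and the pattern are illustrative placeholders, not values from this lab:

```xml
<!-- Illustrative only: the rule ID, parent sid, and match pattern
     below are placeholders, not part of the lab configuration. -->
<group name="web,attack,">
  <rule id="100100" level="10">
    <if_sid>31100</if_sid>
    <field name="url" type="pcre2">union\s+select</field>
    <description>Possible SQL injection against a Council portal</description>
    <mitre>
      <id>T1190</id>
    </mitre>
  </rule>
</group>
```

Loki has no equivalent layer: logs are stored as-is and any correlation or tagging happens at query time. That asymmetry is why running both systems is an architecture decision rather than duplication.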
Cross-SIEM detection portability. Sigma rules that worked on Loki need to run on Wazuh too. pySigma converts the rules, but the conversions are not guaranteed to produce equivalent results. Different field mappings, different query languages, different false positive profiles. Testing on one platform and assuming the other works is a common failure mode.
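A toy sketch of why conversions diverge. Real conversions go through the pySigma backends via sigma-cli; the field names and mappings below are hypothetical, chosen only to show how one Sigma-style match can render into two quite different queries:

```python
# Toy illustration of cross-SIEM divergence: one Sigma-style equality
# match rendered in two query dialects. The field names and mappings
# are hypothetical -- real conversions go through pySigma backends.

# Per-backend field mappings: the "same" field rarely has the same name.
FIELD_MAPS = {
    "loki":       {"c-uri": "uri", "sc-status": "status"},
    "opensearch": {"c-uri": "data.url", "sc-status": "data.status"},
}

def render(backend: str, field: str, value: str) -> str:
    """Render a single equality match in the backend's query language."""
    mapped = FIELD_MAPS[backend].get(field, field)
    if backend == "loki":
        # LogQL-style: stream selector, parse logfmt, filter on a label.
        return f'{{job="portal"}} | logfmt | {mapped}=`{value}`'
    # OpenSearch/Lucene-style field:value syntax.
    return f'{mapped}:"{value}"'

if __name__ == "__main__":
    for backend in ("loki", "opensearch"):
        print(backend, "->", render(backend, "c-uri", "/admin"))
```

The two outputs match on differently named fields through differently behaving parsers, which is exactly why a rule validated on one platform still needs its own test pass on the other.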
AI infrastructure. You will write your first engagement memory file -- a CLAUDE.md that carries field mapping conventions, detection naming standards, and architecture decisions across sessions. You will also connect AI to the Loki API via MCP, giving it the ability to query logs directly instead of relying on you to copy-paste results. Both change what AI can do. Both require configuration decisions that affect every subsequent interaction.
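For orientation, a project-scoped MCP configuration for Claude Code lives in a `.mcp.json` file at the project root. The server name, command, and environment variable below are placeholders for whatever the MCP configuration guide specifies; only the file shape and Loki's default port 3100 are real:

```json
{
  "mcpServers": {
    "loki": {
      "command": "npx",
      "args": ["-y", "some-loki-mcp-server"],
      "env": { "LOKI_URL": "http://localhost:3100" }
    }
  }
}
```

The CLAUDE.md engagement memory is plain markdown in the same directory; Claude Code reads it automatically at the start of each session, which is what lets your field mapping conventions and architecture decisions persist without restating them.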
The hard part: two monitoring systems that serve different purposes produce different views of the same activity. The tuning decisions you make on one do not transfer to the other.
Tools
- Wazuh -- SIEM deployment, agent-based collection, built-in rules, OpenSearch dashboard. New.
- pySigma-backend-opensearch -- Sigma-to-OpenSearch conversion for Wazuh. New.
- MCP -- Loki API connection for AI-directed log queries. New.
- Grafana + Loki -- log aggregation and querying. Continuing.
- Grafana Alloy -- log collection pipeline. Continuing.
- Sigma + sigma-cli -- detection rule authoring and validation. Continuing.
- pySigma-backend-loki -- Sigma-to-LogQL conversion. Continuing.
- Nmap -- reconnaissance. Continuing.
- sqlmap -- exploitation for generating test data. Continuing.
- Docker -- lab environment including the Wazuh deployment. Continuing.
- Claude Code -- AI directing, now with your first engagement memory and an MCP connection. Continuing.
Materials
- Docker environment -- three vulnerable portals (Tourism Services Portal, Guide Management System, Internal Operations Platform) plus the Grafana/Loki/Alloy monitoring stack.
- Wazuh Docker configuration -- Docker Compose overlay and agent setup guide for deploying Wazuh alongside the existing stack.
- Scope document template -- assessment boundaries and rules of engagement aligned to Directive 2026/CS-04.
- Engagement memory template -- structure for the CLAUDE.md file that carries project context across AI sessions.
- Detection naming guide -- naming convention for detection rules across both SIEMs.
- MCP configuration guide -- setup instructions for connecting AI to the Loki API.
- CIS Wazuh benchmark extract -- hardening checks for the SIEM deployment itself.
- Compliance report template -- six-section structure aligned to the government directive.
- Environment verification script -- health check for all services.
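The provided verification script is not reproduced here, but the general shape of such a check can be sketched with the standard library. Grafana's `/api/health` and Loki's `/ready` are their documented health endpoints; the hosts and ports are assumptions for this lab, and the Wazuh endpoints would come from the Docker overlay:

```python
# Hedged sketch of a service health check (the lab ships its own
# script; this shows only the general shape). The hosts and ports
# are assumptions; Wazuh endpoints would come from the overlay.
import urllib.request
import urllib.error

SERVICES = {
    "grafana": "http://localhost:3000/api/health",
    "loki":    "http://localhost:3100/ready",
}

def is_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(f"{name:10s} {'UP' if is_up(url) else 'DOWN'}")
```

Running it before and after the Wazuh deployment is a cheap way to confirm the overlay did not break the existing stack.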