Learn by Directing AI
Unit 4

The Engagement Memory and the MCP Connection

Step 1: Examine the engagement memory template

Open materials/engagement-memory-template.md. This template defines the structure for an engagement memory -- a persistent file that loads at the start of every AI session and carries project context across conversations.

An engagement memory file is not a log. It is infrastructure. The sections -- Engagement Scope, SIEM Architecture, Detection Rule Conventions, Authorised Targets, Known False Positives -- define the baseline context that shapes every AI interaction for the rest of the project. When you write specific, testable constraints in the memory file, AI complies with them from the first prompt. When you write vague entries, AI ignores them.

The concept is simple: instead of re-explaining your project every time you start a new session, you write the context once and it loads automatically. The quality of what you write determines the quality of the baseline.
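
As a sketch of what a populated memory file can look like -- the section names come from the template, but every concrete value below is a hypothetical example, not content from the template itself:

```markdown
## Engagement Scope
BTC assessment. Three portals in scope; anything outside the listed
container addresses is out of scope. <!-- hypothetical entry -->

## Detection Rule Conventions
All rules follow CS-BTC-[TTP]-[SIEM]-[VERSION]. Do not invent another format.

## Known False Positives
<!-- hypothetical example of a specific, testable entry -->
Scheduled health checks from 172.20.0.1 hit /login every 60s -- exclude
from brute-force detections rather than alerting.
```

The false-positive entry is testable: AI can check a source address and a path against it. "Ignore routine noise" is not.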

Step 2: Write the CLAUDE.md engagement memory

Create a CLAUDE.md file in your project root. This is the Claude Code-specific engagement memory file. It loads automatically when Claude Code starts in the project directory.

Include the following, derived from what you have learned in the first three units:

  • Scope boundaries -- the three portals, their interconnections, the BTC assessment context
  • Authorised targets -- specific container addresses and ports
  • Detection rule naming convention -- follow the format from materials/detection-naming-guide.md: CS-BTC-[TTP]-[SIEM]-[VERSION]
  • Log field mappings -- the field names Loki uses for the portal logs (from Grafana Explore) and the field names Wazuh uses for the same data (from the Wazuh dashboard). These differ. Document both.
  • SIEM architecture -- why both Loki and Wazuh exist, what each does, the tuning decisions you made
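
A minimal CLAUDE.md sketch showing the difference between specific and vague entries. The addresses and field names below are placeholders -- substitute the values you actually observed in Grafana Explore and the Wazuh dashboard:

```markdown
# CLAUDE.md

## Authorised Targets
- portal-alpha -- 172.20.0.10:8080  <!-- placeholder address -->
- portal-beta  -- 172.20.0.11:8080  <!-- placeholder address -->
- portal-gamma -- 172.20.0.12:8080  <!-- placeholder address -->
Nothing outside these addresses may be queried or scanned.

## Log Field Mappings
| Data         | Loki (Grafana Explore) | Wazuh dashboard |
|--------------|------------------------|-----------------|
| Source IP    | remote_addr            | data.srcip      |
| Request path | path                   | data.url        |

<!-- A vague entry like "use correct field names per SIEM" gives AI
     nothing to comply with -- always record the exact names. -->
```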

The detection naming guide provides the convention. Open materials/detection-naming-guide.md to see the format and worked examples showing how the same rule is named for both platforms.
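
For illustration, here is the same SQL injection detection named for both platforms under that format. T1190 (Exploit Public-Facing Application) is the MITRE ATT&CK technique ID used as an example TTP token -- confirm the exact token rules against the guide's worked examples:

```
CS-BTC-T1190-LOKI-V1     one detection, Loki implementation
CS-BTC-T1190-WAZUH-V1    the same detection, Wazuh implementation
CS-BTC-T1190-WAZUH-V2    a tuned revision bumps only the version
```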

Step 3: Write the AGENTS.md file

Create an AGENTS.md file alongside CLAUDE.md. AGENTS.md captures the cross-platform constraints that transfer to any AI coding agent -- Gemini, Cursor, Windsurf, any tool that reads project memory files.

CLAUDE.md and AGENTS.md are not duplicates. CLAUDE.md can include Claude Code-specific features (MCP configuration references, slash command conventions). AGENTS.md carries the universal constraints: scope, field mappings, naming conventions, architecture decisions. Writing both is authoring infrastructure for two audiences simultaneously.
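
As an illustrative split (contents hypothetical):

```markdown
# AGENTS.md -- universal constraints for any coding agent

- Scope: only the three authorised portals; nothing outside the listed addresses.
- Naming: detection rules use CS-BTC-[TTP]-[SIEM]-[VERSION].
- Field mappings: Loki and Wazuh name the same data differently;
  consult the mapping table before writing any query or rule.
```

CLAUDE.md would carry everything above plus the Claude Code-specific material: MCP server references, slash command conventions.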

These files -- CLAUDE.md, AGENTS.md, GEMINI.md, .cursorrules -- are one cross-platform pattern with tool-specific filenames. The concept is the same regardless of which AI coding agent you use.

Step 4: Test the contrast

Start a new Claude Code session in the project directory. The engagement memory loads at session start.

Ask AI a detection-related question -- something about the SQL injection finding from Unit 2. Watch whether AI uses the field names from your engagement memory or its own defaults. Does it apply the CS-BTC naming convention from the memory file? Does it reference the correct Loki labels?

Now think back to P7, when you started sessions without this infrastructure. The contrast between "AI already knows the project context" and "you explain the project from scratch" is the experiential proof of what engagement memory provides. Vague entries produce no visible difference. Specific entries -- exact field names, exact naming conventions, exact scope boundaries -- produce AI that behaves as if it has been on the project from the start.

Step 5: Set up the MCP connection

Open materials/mcp-loki-config.md. This guide walks through connecting Claude Code to the Loki API via MCP -- the Model Context Protocol.

MCP is a protocol that connects AI agents to external tools. The configuration adds a Loki MCP server to Claude Code so that AI can query logs directly instead of relying on you to copy-paste LogQL results. The connection is read-only and scoped with a timeout to prevent runaway queries.
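
The guide defines the actual configuration; as a hedged sketch only, a project-scoped Claude Code MCP entry (in a .mcp.json file at the project root) generally has this shape. The server name, command, package, and environment variables below are assumptions, not the guide's values:

```json
{
  "mcpServers": {
    "loki": {
      "command": "npx",
      "args": ["-y", "loki-mcp-server"],
      "env": {
        "LOKI_URL": "http://localhost:3100",
        "LOKI_READ_ONLY": "true",
        "LOKI_QUERY_TIMEOUT": "30s"
      }
    }
  }
}
```

JSON allows no comments, so treat every value here as a placeholder. The read-only and timeout settings deserve the closest check, because syntactically valid AI-generated configurations often omit exactly those parameters.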

Follow the configuration steps to add the Loki MCP server. Verify the connection by asking AI a question about recent logs. AI should be able to query Loki and return actual log data without you touching Grafana.

AI commonly generates MCP configurations that work syntactically but miss authentication or timeout parameters. Verify that the connection works by testing an actual query, not just checking the configuration file.

Step 6: Experience the capability shift

Ask AI to find the SQL injection attack data from Unit 2 in the logs. In previous projects, this would require you to run a LogQL query in Grafana, copy the results, and paste them into the conversation. Now AI queries Loki directly.

This is a categorical capability shift. AI can read the logs itself. But observe what happens: does AI construct a query that scans the last hour, or does it scan weeks of data? Does it verify that Alloy is collecting from all three portals before reporting "no matching entries"? Does it evaluate whether the results are complete?
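
The contrast in LogQL terms -- the job label below is a placeholder for whatever labels you documented in the engagement memory, and note that in Loki the time range is supplied as query parameters alongside the expression, not inside it:

```logql
# Narrow: one portal's stream, one attack pattern, run over a short window
{job="portal-alpha"} |= "UNION SELECT"

# Eager: every stream, no filter -- watch for AI defaulting to queries like this
{job=~".+"}
```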

AI connected to Loki queries eagerly. It runs broad queries when narrow ones would suffice. It reports "no results" without checking whether the data source is actually collecting. The MCP connection gives AI a new capability, but the verification responsibility remains with you. When AI reports an empty result, the question is whether the event never happened or whether AI constructed the wrong query.

✓ Check

Check: AI responds to a detection-related prompt using the correct field mapping conventions from the engagement memory (not AI's defaults). AI successfully queries Loki via MCP and returns log data without the student copy-pasting.