I have been running Wazuh for a while, but it was mostly doing vulnerability scanning and basic log collection. This week I went through the process of turning it into a proper XDR platform with file integrity monitoring, rootkit detection, active response, and – the part I am most interested in – AI-powered alert analysis using my local Ollama server.

Everything runs on Proxmox LXC containers. No cloud services involved. All security telemetry stays on my infrastructure.

The Starting Point

My Wazuh manager runs in an unprivileged Debian LXC on Proxmox. I have several Debian 12 LXC containers that serve various roles: Nginx Proxy Manager, Dashy, a CMMS application, and others. These had Wazuh agents installed but were only getting basic vulnerability scanning.

The goal was to deploy a full XDR configuration across the fleet and then add AI-powered alert enrichment using Ollama, which I already run on a separate LXC with an NVIDIA GPU.

Centralized XDR Configuration with Agent Groups

Rather than configuring each agent individually, I created an agent group and pushed configuration centrally from the manager.

/var/ossec/bin/agent_groups -a -g debian-lxc -q
/var/ossec/bin/agent_groups -a -i 010 -g debian-lxc -q

All configuration goes into /var/ossec/etc/shared/debian-lxc/agent.conf on the manager and gets pushed to agents automatically.

File Integrity Monitoring

FIM is probably the highest-value XDR capability for internet-facing hosts. I configured realtime monitoring on critical paths:

<syscheck>
  <disabled>no</disabled>
  <frequency>600</frequency>
  <scan_on_start>yes</scan_on_start>
  <directories check_all="yes" report_changes="yes" realtime="yes">/etc</directories>
  <directories check_all="yes" report_changes="yes" realtime="yes">/usr/bin</directories>
  <directories check_all="yes" report_changes="yes" realtime="yes">/usr/sbin</directories>
  <directories check_all="yes" report_changes="yes" realtime="yes">/root/.ssh</directories>
  <directories check_all="yes" report_changes="yes" realtime="yes">/etc/cron.d</directories>
  <directories check_all="yes" report_changes="yes" realtime="yes">/var/spool/cron</directories>
  <ignore>/etc/mtab</ignore>
  <ignore>/etc/adjtime</ignore>
  <ignore type="sregex">.log$|.swp$</ignore>
</syscheck>

The key paths are SSH authorized keys, cron jobs (common persistence mechanism), and system binaries. The ignore rules prevent noisy false positives from log rotation and editor swap files.
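
Under the hood, syscheck detects changes by comparing stored checksums and attributes of each monitored file against its current state. Stripped down to a Python sketch (real syscheck also tracks inodes, permissions, and ownership, and the file contents here are made up for illustration):

```python
import hashlib
import tempfile
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of the file contents: the core of syscheck-style change detection."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Baseline scan: record a digest per monitored file
demo = Path(tempfile.mkdtemp()) / "authorized_keys"
demo.write_text("ssh-ed25519 AAAA... admin@laptop\n")
baseline = {demo: file_digest(demo)}

# Later rescan: a mismatch against the baseline is a FIM event
demo.write_text("ssh-ed25519 AAAA... admin@laptop\nssh-rsa EVIL... attacker\n")
changed = file_digest(demo) != baseline[demo]
print(changed)  # True
```

With report_changes="yes", Wazuh goes a step further and includes a diff of the modified content in the alert, which is exactly what you want for a tampered authorized_keys file.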

Active Response for Brute Force Blocking

For internet-facing systems, automated blocking of brute force attempts is essential. Wazuh ships with firewall-drop, a script that uses iptables to block attacker IPs when brute force rules trigger.

On the manager’s ossec.conf:

<active-response>
  <command>firewall-drop</command>
  <location>local</location>
  <rules_id>5712,5720,5763</rules_id>
  <timeout>3600</timeout>
</active-response>

This blocks the attacker’s IP for one hour when SSH brute force rules fire. One important caveat: if your LXC containers are unprivileged, iptables may not be available inside the container. Verify with iptables -L -n before relying on this.

Also make sure to whitelist your management IPs in the manager’s global config to prevent lockouts:

<global>
  <white_list>YOUR_MANAGEMENT_SUBNET</white_list>
</global>

Additional Modules

I also enabled rootkit detection (rootcheck), Security Configuration Assessment against CIS benchmarks, and log collection for auth.log, syslog, dpkg.log, and Suricata’s eve.json where applicable.

Integrating Ollama for AI-Powered Alert Triage

This is where it gets interesting. Wazuh has a built-in integration framework that can send alert data to external services. I pointed it at my local Ollama instance running Gemma 4, an 8B-parameter model.

The Integration Configuration

Added to the manager’s ossec.conf:

<integration>
  <name>custom-ollama</name>
  <hook_url>http://192.168.1.111:11434</hook_url>
  <level>10</level>
  <alert_format>json</alert_format>
</integration>

The <level>10</level> setting means only alerts with severity 10 or higher get sent to the LLM. This prevents overwhelming the model with low-priority noise.
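
The filtering itself happens inside Wazuh's integrator daemon, but the logic is easy to illustrate. A rough Python equivalent of what the <level> threshold does (the function name and sample alerts are mine, not a Wazuh API):

```python
def should_forward(alert: dict, min_level: int = 10) -> bool:
    """Mirror the <level> filter: forward only alerts at or above min_level."""
    return alert.get("rule", {}).get("level", 0) >= min_level

# A routine low-level event stays local; an SSH brute-force alert is forwarded
low = {"rule": {"id": "510", "level": 7}}
high = {"rule": {"id": "5712", "level": 10}}
print(should_forward(low), should_forward(high))  # False True
```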

The Integration Script

Wazuh looks for an executable at /var/ossec/integrations/custom-ollama matching the name defined in the config. Here is the Python script that handles the communication:

#!/var/ossec/framework/python/bin/python3

import sys
import json
import requests
from datetime import datetime

OLLAMA_URL = "http://192.168.1.111:11434/api/generate"
MODEL = "gemma4:latest"
LOG_FILE = "/var/ossec/logs/integrations.log"

SYSTEM_PROMPT = """You are a cybersecurity analyst reviewing Wazuh SIEM alerts.
For each alert, provide a brief analysis in this exact format:

SEVERITY: [Critical/High/Medium/Low]
SUMMARY: [One sentence describing what happened]
THREAT: [What this could indicate - attack type, TTP, or benign cause]
ACTION: [Specific recommended action for the SOC analyst]

Be concise. Do not exceed 4 lines. Focus on actionable guidance."""

def log(msg):
    with open(LOG_FILE, "a") as f:
        f.write(f"{datetime.now().strftime('%Y/%m/%d %H:%M:%S')} {msg}\n")

def main():
    try:
        # Wazuh's integrator passes the path to a temp file containing
        # the alert JSON as the first argument
        alert_file = sys.argv[1]
        with open(alert_file) as f:
            alert_json = json.load(f)

        rule = alert_json.get("rule", {})
        agent = alert_json.get("agent", {})
        data = alert_json.get("data", {})
        full_log = alert_json.get("full_log", "")

        prompt = f"""Analyze this Wazuh security alert:

Rule ID: {rule.get('id', 'N/A')}
Rule Level: {rule.get('level', 'N/A')}
Description: {rule.get('description', 'N/A')}
Groups: {rule.get('groups', [])}
Agent: {agent.get('name', 'N/A')} ({agent.get('ip', 'N/A')})
Timestamp: {alert_json.get('timestamp', 'N/A')}
Full Log: {full_log[:500]}
Data: {json.dumps(data)[:500]}"""

        response = requests.post(
            OLLAMA_URL,
            json={
                "model": MODEL,
                "system": SYSTEM_PROMPT,
                "prompt": prompt,
                "stream": False,
                "options": {
                    "temperature": 0.3,
                    "num_predict": 256
                }
            },
            timeout=60
        )

        if response.status_code == 200:
            result = response.json().get("response", "No response")
            log(f"Alert {rule.get('id','N/A')} on "
                f"{agent.get('name','N/A')}: {result}")
        else:
            log(f"ERROR: Ollama returned {response.status_code}")

    except Exception as e:
        log(f"ERROR: {str(e)}")

if __name__ == "__main__":
    main()

Set the permissions:

chmod 750 /var/ossec/integrations/custom-ollama
chown root:wazuh /var/ossec/integrations/custom-ollama
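
To smoke-test the integration without waiting for a real level-10 alert, you can hand the script a fabricated alert file the same way the integrator daemon does (alert file path as the first argument). A sketch; the rule, agent, and log fields below are made up for illustration:

```python
import json
import tempfile

# Fabricated alert shaped like the fields the script reads
sample_alert = {
    "timestamp": "2026-04-15T03:12:16.000+0000",
    "rule": {
        "id": "5712",
        "level": 10,
        "description": "sshd: brute force trying to get access to the system.",
        "groups": ["syslog", "sshd", "authentication_failures"],
    },
    "agent": {"name": "wazuh3", "ip": "192.168.1.50"},
    "data": {"srcip": "203.0.113.7"},
    "full_log": "Apr 15 03:12:15 wazuh3 sshd[1234]: Failed password for root",
}

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample_alert, f)
    alert_path = f.name

print(alert_path)
# Then run it the way Wazuh would:
#   /var/ossec/integrations/custom-ollama <alert_path>
# and check /var/ossec/logs/integrations.log for the analysis.
```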

What the Output Looks Like

After restarting the manager, alerts level 10 and above are automatically sent to Gemma 4 for analysis. The results are written to /var/ossec/logs/integrations.log:

2026/04/15 03:12:16 Alert 510 on Wazuh3: SEVERITY: Medium
SUMMARY: A potentially hidden file was detected in the /dev directory.
THREAT: Could indicate container escape or unusual system modification.
ACTION: Investigate file ownership and process accessing /dev/.lxc/proc/iomem.

The structured format (SEVERITY/SUMMARY/THREAT/ACTION) is enforced by the system prompt with a low temperature setting (0.3) to keep responses consistent and focused.
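
Because the format is fixed, downstream tooling can parse the analysis back into fields. A hypothetical helper (not part of the integration script above) that splits the four labeled lines into a dict:

```python
def parse_analysis(text: str) -> dict:
    """Parse SEVERITY/SUMMARY/THREAT/ACTION lines; labels the model skipped are absent."""
    fields = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip().upper() in ("SEVERITY", "SUMMARY", "THREAT", "ACTION"):
            fields[key.strip().upper()] = value.strip()
    return fields

sample = """SEVERITY: Medium
SUMMARY: A potentially hidden file was detected in the /dev directory.
THREAT: Could indicate container escape or unusual system modification.
ACTION: Investigate file ownership and the process accessing the path."""
print(parse_analysis(sample)["SEVERITY"])  # Medium
```

That makes it straightforward to route, say, anything the model grades Critical or High to a notification channel instead of just the log file.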

Why Local LLM Instead of Cloud

A few reasons this approach makes sense:

  1. Data sovereignty. Security alert data never leaves my network. No API keys to manage, no third-party data processing agreements to worry about.
  2. Cost. Zero marginal cost per alert after the initial hardware investment.
  3. Latency. The Ollama server is on the same LAN. Response times are a few seconds, not subject to rate limits or API quotas.
  4. Model flexibility. I can swap models as better ones become available. The script just changes the MODEL variable.

Model Selection

I tested with Gemma 4 (8B) and it handles security alert analysis well at this parameter size. For reference, other good options include Llama 3.1 (8B) and Qwen 2.5 (7B). There is also a purpose-built model on Ollama called mranv/siem-llama-3.1 that was fine-tuned for Wazuh alert analysis, though I have not evaluated it against a general-purpose model with a strong system prompt.

Vulnerability Scanning and CVE Triage

While setting this up, Wazuh flagged one of my containers with CVE-2026-32746 – a critical (CVSS 9.8) pre-authentication buffer overflow in GNU InetUtils telnetd. The alert looked alarming, but investigation revealed it was flagging inetutils-telnet (the client package), not the vulnerable telnetd daemon. The client was likely pulled in as a base image dependency and was not actively used.

This is a good example of why vulnerability scanner output needs triage. The package was removed with apt purge inetutils-telnet and the alert cleared on the next scan cycle.

One issue I hit during this process: the vulnerability feed database on the manager became corrupted (RocksDB checksum mismatch errors). The fix was straightforward:

sudo systemctl stop wazuh-manager
sudo rm -rf /var/ossec/queue/vd/feed/*
sudo systemctl start wazuh-manager

The feed rebuilt cleanly on restart.

What I Would Do Differently

A few lessons from this deployment:

Test iptables in your containers first. Unprivileged LXCs may not have iptables access, which means the firewall-drop active response will silently fail. Verify before you rely on it.

Do not attempt to change LXC kernel parameters from inside the container. I tried setting lxc.prlimit.nofile to resolve a cosmetic file descriptor warning and it prevented the container from starting, requiring a hard power cycle of the Proxmox host. The warning is harmless at small agent counts.

Tune the integration alert level threshold. I initially set it to level 5 for testing, which generated a flood of LLM analysis on routine rootcheck alerts. Level 10 is a much better threshold for production use – it catches the important events without overwhelming the model.

Next Steps

The immediate areas I want to explore:

  • n8n orchestration. I already have an n8n instance with an AI router workflow. Routing Wazuh alerts through n8n would add notification flexibility (email, Discord, Slack) and allow multi-provider LLM fallback.
  • RAG threat hunting. Wazuh publishes an approach using LangChain and FAISS vector stores to enable natural language querying of archived logs. Combined with the Ollama infrastructure already in place, this would enable questions like “Were there any failed SSH logins in the last 24 hours?” against the full log archive.
  • Custom rules. Suppressing the LXC rootcheck false positives and adding application-specific detection rules for Nginx Proxy Manager access patterns.

Resources