PostHog Usage Anomaly Detector

AI-powered daily usage anomaly detection — statistical baseline comparison with probable cause analysis. SILENT when normal, alerts only on anomalies.

Monitors PostHog event data daily against a 30-day rolling baseline. Detects spikes, drops, and correlated shifts using standard-deviation thresholds, then reasons about probable causes (deployment, campaign, seasonal, technical). Stays SILENT when no anomalies are found, respecting team attention; when anomalies are detected, it sends a Slack alert and writes to a Notion anomaly log. Ships as a 29-node n8n workflow with a daily scheduler, Blueprint Quality Standard (BQS) 12/12 certified.

This blueprint came from a product team that discovered a 40% usage drop three weeks after it started, because their dashboard only showed weekly totals. The detector monitors usage patterns at the cohort level and alerts on statistically significant deviations.

Last updated March 18, 2026

Product teams collect feedback from 5-10 channels — support tickets, feature requests, NPS comments, usage analytics, user interviews. Synthesizing across channels is a manual process that happens quarterly at best. Automated product intelligence aggregates signals continuously, surfacing adoption patterns and feedback themes as they emerge.

Pipeline: Schedule trigger → 01 Fetcher (PostHog API) → 02 Assembler (Baselines) → 03 Analyst (Anomaly Score) → 04 Formatter (Alert Only) → Slack (Anomaly Alert) + Notion (Anomaly Log)

Four Agents. Daily Anomaly Detection. Silent When Normal.

The Fetcher

Step 1 · Code-only

Queries PostHog API for daily usage metrics — event counts, unique users, session durations, and feature usage across the baseline window. Pulls both current day and historical baseline data for statistical comparison.
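As a rough sketch of the kind of request the Fetcher issues, here is a HogQL query payload against PostHog's query API. The endpoint shape follows PostHog's documented query API, but the exact query text, environment variable names, and helper function are illustrative assumptions, not the shipped node configuration:

```javascript
// Build a request for daily totals of one event over the baseline window
// plus the current day. POSTHOG_PROJECT_ID / POSTHOG_API_KEY are assumed
// env var names for illustration.
function buildBaselineQuery(eventName, baselineDays = 30) {
  const hogql = `
    SELECT toDate(timestamp) AS day,
           count() AS events,
           count(DISTINCT person_id) AS users
    FROM events
    WHERE event = '${eventName}'
      AND timestamp >= now() - INTERVAL ${baselineDays + 1} DAY
    GROUP BY day
    ORDER BY day`;
  return {
    url: `https://app.posthog.com/api/projects/${process.env.POSTHOG_PROJECT_ID}/query`,
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.POSTHOG_API_KEY}` },
    body: { query: { kind: "HogQLQuery", query: hogql } },
  };
}
```

One request like this per tracked metric returns both the historical baseline rows and today's row in a single round trip, which keeps the Fetcher's call count low.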

The Assembler

Step 2 · Code-only

What does The Assembler actually decide? Computes statistical baselines using rolling averages and standard deviation for each tracked metric. Identifies anomalies where current values deviate beyond the configurable threshold (default 2 standard deviations). Calculates anomaly magnitude and direction (spike or drop).
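The scoring step described above can be sketched in a few lines. This is a minimal illustration of rolling-baseline z-scoring, not the shipped node code; all names are illustrative:

```javascript
// baseline: array of daily values from the rolling window; current: today's value.
// threshold: number of standard deviations beyond which a value is anomalous.
function scoreMetric(baseline, current, threshold = 2) {
  const mean = baseline.reduce((a, b) => a + b, 0) / baseline.length;
  const variance =
    baseline.reduce((a, b) => a + (b - mean) ** 2, 0) / baseline.length;
  const std = Math.sqrt(variance);
  // Guard against flat baselines where std is 0 (every day identical).
  const z = std === 0 ? 0 : (current - mean) / std;
  return {
    mean,
    std,
    zScore: z,
    isAnomaly: Math.abs(z) > threshold,
    direction: z > 0 ? "spike" : "drop",
    magnitudePct: mean === 0 ? null : ((current - mean) / mean) * 100,
  };
}
```

For example, a baseline of [90, 110, 100, 95, 105] has mean 100 and standard deviation ≈7.1, so a current value of 130 scores z ≈ 4.2: well past the default 2σ threshold, flagged as a spike of +30%.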

The Analyst

Step 3 · Tier 2 Classification

This step exists because raw data alone is not enough. Performs probable cause analysis for each detected anomaly: correlates with deployment timestamps, feature flag changes, marketing campaigns, and day-of-week patterns. Classifies anomaly severity (INFO, WARNING, CRITICAL) and type (organic, deployment-related, external, seasonal).
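The real Analyst is an LLM call; the deterministic sketch below only illustrates the correlation signals and labels it reasons over. All field names and thresholds are illustrative assumptions:

```javascript
// anomaly: output of the scoring step (zScore, direction, dayOfWeek 0-6).
// context: correlation signals gathered upstream (deploys, flags, campaigns).
function classifyAnomaly(anomaly, context) {
  const { zScore, direction, dayOfWeek } = anomaly;
  // Severity grows with how far the value sits from the baseline.
  const abs = Math.abs(zScore);
  const severity = abs > 4 ? "CRITICAL" : abs > 3 ? "WARNING" : "INFO";
  // Type: prefer concrete correlated events over generic explanations.
  let type = "organic";
  if (context.deployWithin24h || context.flagChangedWithin24h) {
    type = "deployment-related";
  } else if (context.campaignActive && direction === "spike") {
    type = "external";
  } else if ((dayOfWeek === 0 || dayOfWeek === 6) && direction === "drop") {
    type = "seasonal"; // weekend dip
  }
  return { severity, type };
}
```

The LLM version adds what a lookup table cannot: it weighs competing explanations and writes investigation steps, rather than stopping at a label.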

The Formatter

Step 4 · Tier 3 Creative

Without this step, upstream analysis sits idle. SILENT when no anomalies are detected — no Slack noise on normal days. When anomalies are found: a Slack alert with anomaly details, probable causes, and recommended investigation steps, plus an optional Notion log for anomaly history tracking.
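The silent-when-normal gate can be sketched as follows. The channel name and field names are illustrative, and the real Formatter's prose is LLM-generated rather than templated:

```javascript
// Returns null on normal days (no Slack call at all), or a Slack
// chat.postMessage-style payload when anomalies exist.
function formatAlert(anomalies) {
  if (anomalies.length === 0) return null; // SILENT: nothing is sent
  const lines = anomalies.map(
    (a) =>
      `*${a.metric}*: ${a.direction} (${a.severity}), ` +
      `z=${a.zScore.toFixed(1)}, probable cause: ${a.type}`
  );
  return {
    channel: "#product-alerts",
    text: `:rotating_light: ${anomalies.length} usage anomaly(ies) detected`,
    blocks: [
      { type: "section", text: { type: "mrkdwn", text: lines.join("\n") } },
    ],
  };
}
```

Returning null instead of an empty message is what keeps normal days quiet: the downstream Slack node simply never executes.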

That's the full pipeline. Here's what it intentionally does NOT do — and why those boundaries exist.

What It Does NOT Do

  • Does not fix anomalies automatically — it detects and diagnoses, humans investigate and resolve
  • Does not replace application performance monitoring (APM) — it analyzes product usage patterns, not infrastructure metrics
  • Does not work with non-PostHog analytics tools — this is PostHog-specific
  • Does not guarantee zero false positives — statistical thresholds are configurable to tune sensitivity
  • Does not store historical baselines externally — baselines are computed from PostHog data on each run

With those boundaries clear, here's everything that ships when you purchase.

The Complete Blueprint Bundle

8 files.

  • README.md: Setup and configuration guide
  • docs/TDD.md: Technical Design Document
  • phud_scheduler_v1_0_0.json: Scheduler workflow
  • posthog_usage_anomaly_detector_v1_0_0.json: n8n workflow (main pipeline)
  • schemas/assembler_output.json: Assembler output schema
  • schemas/fetcher_output.json: Fetcher output schema
  • system_prompts/analyst_system_prompt.md: Analyst system prompt
  • system_prompts/formatter_system_prompt.md: Formatter system prompt

The technical specifications below are ITP-measured, not estimated.

Tested. Measured. Documented.

Daily statistical anomaly detection with probable cause analysis. Silent when metrics are normal — alerts only on true anomalies with severity classification and investigation recommendations.

Tested on n8n v2.7.5, March 2026

PostHog Usage Anomaly Detector v1.0.0
──────────────────────────────────────────
Nodes:        29 main + 3 scheduler (32 total)
Agents:       4 (Fetcher, Assembler, Analyst, Formatter)
LLM Calls:    2 per run (Analyst + Formatter) — 0 when no anomalies
Model:        Sonnet 4.6 (SINGLE-MODEL)
Trigger:      Schedule (daily 7:00 UTC) + Webhook
Pattern:      BATCH (daily anomaly scan)
Tool A:       PostHog API — usage metrics + baselines
Tool B:       Slack (httpHeaderAuth) — anomaly alerts (silent when normal)
Tool C:       Notion (httpHeaderAuth, optional) — anomaly history log
ITP:          8/8 records, 14/14 milestones
BQS:          12/12 PASS
Cost:         $0.03–$0.10 per run (LLM cost $0 on normal days)

What You'll Need

Platform

n8n 2.7.5+

Est. Monthly API Cost

~$0.03–$0.10 per daily run, roughly $1–$3/month (less in practice, since LLM cost is $0 on normal days), plus your PostHog subscription.

Credentials Required

  • Anthropic API
  • PostHog API Key
  • Slack (Bot Token, httpHeaderAuth Bearer)

Services

  • PostHog account with event data
  • Anthropic API key
  • Slack workspace (Bot Token with chat:write)

Setup Track

  • Quick Start (~15 min): all credentials live, n8n running
  • Full Setup (1–2 hrs): needs API config + tables
  • From Scratch (2–4 hrs): no n8n, no credentials

PostHog Usage Anomaly Detector v1.0.0

$199

one-time purchase

What you get:

  • 29-node main workflow + 3-node scheduler
  • Daily usage anomaly detection from PostHog event data
  • Statistical baseline comparison with rolling average and standard deviation
  • Configurable anomaly threshold (default 2 standard deviations)
  • Probable cause analysis correlating anomalies with deployments, flag changes, and campaigns
  • Anomaly severity classification: INFO, WARNING, CRITICAL
  • Anomaly type classification: organic, deployment-related, external, seasonal
  • SILENT mode — zero Slack noise on normal days with no anomalies
  • Alert-only notifications when true anomalies are detected
  • Notion anomaly log for historical tracking (optional)
  • Slack alert with anomaly details, probable causes, and investigation steps
  • Configurable: tracked metrics, baseline window, deviation threshold
  • Full technical documentation + system prompts

Frequently Asked Questions

What does "silent when normal" mean?

The workflow runs daily and checks for anomalies. If all metrics fall within the expected range (within the configured standard deviation threshold), no Slack message is sent. Your team only gets notified when something genuinely unusual happens.

How does probable cause analysis work?

The Analyst correlates detected anomalies with known context: recent deployments (if deployment timestamps are available), feature flag changes in PostHog, day-of-week patterns (weekend dips), and magnitude direction (spikes vs drops suggest different causes). This is probabilistic analysis, not definitive root cause identification.

What metrics can it track?

Any PostHog event count or aggregation: total events, unique users, session count, average session duration, specific feature usage events, page views, API calls, etc. Configure the TRACKED_METRICS array in the scheduler with your PostHog event names.
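As an illustration of the configuration shape described above, a TRACKED_METRICS array might look like this. The exact field names may differ in the shipped scheduler; check README.md:

```javascript
// Each entry names a PostHog event and how to aggregate it daily.
// Aggregation values here are illustrative assumptions.
const TRACKED_METRICS = [
  { event: "$pageview", aggregation: "count" },         // total page views
  { event: "$pageview", aggregation: "unique_users" },  // daily active users
  { event: "feature_x_used", aggregation: "count" },    // one feature's usage
  { event: "session_start", aggregation: "count" },     // session count
];
```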

Is there a refund policy?

All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions?

Read the full guide →

Related Blueprints