How PostHog PLG-to-Enterprise Bridge Automates Product Led Growth
The Problem
Your team runs this workflow every week: pull records from PostHog, Pipedrive, and Slack, cross-reference with a second source, apply judgment, format the output, and route it to three different stakeholders. Last Tuesday it took 30–60 minutes per cycle. This Tuesday the person who usually runs it is out sick, and nobody else knows the exact steps. The output varies by who runs it and when.
The core issue is data fragmentation. The information exists, but assembling it into actionable intelligence requires manual effort that does not scale with headcount. PostHog PLG-to-Enterprise Bridge closes that gap by automating the product led growth and deal intelligence workflow from data extraction through structured output delivery.
Teams typically spend 30–60 minutes per cycle on the manual version of this workflow. PostHog PLG-to-Enterprise Bridge reduces that to seconds per execution, with consistent quality every time.
What This Blueprint Does
Four Agents. Daily Upgrade Scoring. CRM Deal Creation.
The PostHog PLG-to-Enterprise Bridge pipeline runs four agents in sequence: the Fetcher pulls usage data from PostHog, the Assembler and Analyst score it, and the Formatter delivers output to Pipedrive and Slack. Here is what happens at each stage and why it matters.
- The Fetcher (Code-only): Queries PostHog API daily for organization-level usage data — active users, feature adoption breadth, collaboration signals (shared dashboards, team invites), API usage, and growth trajectories.
- The Assembler (Code-only): Computes five Upgrade Intent Score (UIS) dimensions per organization: ICP fit (company size, industry signals from user properties), usage intensity (DAU/MAU ratio, session depth), collaboration signals (multi-user activity, shared resources), growth trajectory (week-over-week usage growth), and engagement depth (feature breadth, API usage).
- The Analyst (Tier 2 Classification): Scores each dimension 1-10, computes composite UIS.
- The Formatter (Tier 3 Creative): Creates Pipedrive deals for HIGH accounts with pre-populated notes and expansion context.
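As a sketch of how the Analyst's composite score could be derived from the five dimensions above: equal weighting and the field names here are assumptions for illustration; the actual scoring logic lives in the system prompts.

```javascript
// Composite UIS as the mean of the five dimension scores (1-10 each).
// Equal weighting is an assumption; the real logic lives in the system prompts.
function compositeUis(scores) {
  const dims = ["icp_fit", "usage_intensity", "collaboration", "growth", "engagement"];
  const total = dims.reduce((sum, d) => sum + (scores[d] ?? 0), 0);
  return Math.round((total / dims.length) * 10) / 10; // round to one decimal place
}

compositeUis({ icp_fit: 8, usage_intensity: 7, collaboration: 9, growth: 6, engagement: 7 }); // 7.4
```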
When the pipeline completes, you get structured output that is ready to act on. The blueprint bundle includes everything needed to deploy, configure, and customize the workflow:
- 30-node main workflow + 3-node scheduler
- Daily PLG-to-enterprise upgrade scoring from PostHog organization data
- 5-dimension Upgrade Intent Score (UIS): ICP fit, usage intensity, collaboration signals, growth trajectory, engagement depth
- UIS 1-10 per dimension with HIGH/MEDIUM/LOW classification
- Automatic Pipedrive deal creation for HIGH-scoring accounts
- Pre-populated deal notes with expansion triggers and usage context
- Watch list for MEDIUM accounts approaching upgrade threshold
- Collaboration signal detection (multi-user activity, shared resources)
- Growth trajectory analysis for identifying accelerating accounts
- Slack digest with newly qualified accounts and upgrade triggers
- Configurable: ICP criteria, UIS thresholds, Pipedrive pipeline/stage
- Full technical documentation + system prompts
All scoring criteria, output formats, and routing rules are configurable in the system prompts — no workflow JSON or code edits required. Every component in this pipeline is designed for customization, so PostHog PLG-to-Enterprise Bridge adapts to your specific process, terminology, and integration requirements without forking the entire workflow.
How the Pipeline Works
Understanding how the pipeline works helps you customize it for your environment and troubleshoot issues when they arise. Here is a step-by-step walkthrough of the PostHog PLG-to-Enterprise Bridge execution flow.
Step 1: The Fetcher
Tier: Code-only
The pipeline starts here. Queries PostHog API daily for organization-level usage data — active users, feature adoption breadth, collaboration signals (shared dashboards, team invites), API usage, and growth trajectories. Retrieves data for all accounts exceeding minimum activity threshold.
This stage ensures all downstream agents receive clean, validated input. If this step returns incomplete data, every downstream agent works with a degraded picture.
Step 2: The Assembler
Tier: Code-only
Computes five Upgrade Intent Score (UIS) dimensions per organization: ICP fit (company size, industry signals from user properties), usage intensity (DAU/MAU ratio, session depth), collaboration signals (multi-user activity, shared resources), growth trajectory (week-over-week usage growth), and engagement depth (feature breadth, API usage).
Why this step matters: The result is a prioritized action queue, not just a data dump.
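Two of these dimensions reduce to simple ratios. A minimal sketch with illustrative field names (the blueprint's actual schema is defined in schemas/assembler_output.json):

```javascript
// Usage intensity: DAU/MAU ratio, a standard stickiness metric.
function dauMauRatio(dailyActive, monthlyActive) {
  return monthlyActive > 0 ? dailyActive / monthlyActive : 0;
}

// Growth trajectory: week-over-week change in event volume.
function weekOverWeekGrowth(thisWeek, lastWeek) {
  if (lastWeek === 0) return thisWeek > 0 ? 1 : 0; // treat growth from zero as 100%
  return (thisWeek - lastWeek) / lastWeek;
}

dauMauRatio(30, 100);         // 0.3 (strong daily stickiness)
weekOverWeekGrowth(120, 100); // 0.2 (20% WoW growth)
```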
Step 3: The Analyst
Tier: Tier 2 Classification
Scores each dimension 1-10, computes composite UIS. Classifies accounts as HIGH (UIS ≥7, creates Pipedrive deal), MEDIUM (UIS 4-6.9, added to watch list), or LOW (UIS <4, no action). Generates per-account upgrade brief with specific expansion triggers.
Every field in the output is structured for the next agent to consume without parsing.
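The classification thresholds documented above (HIGH at UIS ≥ 7, MEDIUM at 4–6.9, LOW below 4) can be expressed as:

```javascript
// HIGH creates a Pipedrive deal, MEDIUM goes to the watch list, LOW is skipped.
function classify(uis) {
  if (uis >= 7) return "HIGH";
  if (uis >= 4) return "MEDIUM";
  return "LOW";
}

classify(7.4); // "HIGH"
classify(5.2); // "MEDIUM"
classify(3.1); // "LOW"
```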
Step 4: The Formatter
Tier: Tier 3 Creative
This is the final deliverable — what lands in your inbox or dashboard. Creates Pipedrive deals for HIGH accounts with pre-populated notes and expansion context. Generates a Slack digest with newly qualified accounts and their upgrade triggers. Maintains a watch list for MEDIUM accounts approaching threshold.
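A hedged sketch of the deal payload the Formatter might assemble: title, pipeline_id, and stage_id are standard Pipedrive deal fields, but the helper name and title wording are illustrative, not the bundle's actual code.

```javascript
// Illustrative payload builder for a Pipedrive deal.
function buildDealPayload(account, pipelineId, stageId) {
  return {
    title: `${account.name}: PLG upgrade (UIS ${account.uis})`,
    pipeline_id: pipelineId,
    stage_id: stageId,
  };
}

// The Formatter would POST this to Pipedrive's /v1/deals endpoint
// (call shown for shape only, not executed here):
// await fetch(`https://api.pipedrive.com/v1/deals?api_token=${token}`, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildDealPayload(account, 1, 2)),
// });
```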
The entire pipeline executes without manual intervention. From trigger to output, every decision point follows a documented path. Every execution produces a traceable audit trail.
All nodes have been validated during Independent Test Protocol (ITP) testing on n8n v2.7.5. The error handling matrix in the bundle documents the recovery path for each failure mode.
This blueprint executes in your own n8n environment using your own API credentials. Zero external data sharing.
Why we designed it this way
n8n's batch node only outputs the last batch. If you process 20 records in batches of 5, you get back 5 records — the last batch. Without static data accumulation, multi-record pipelines silently drop 75% of results. Every multi-record blueprint uses explicit accumulation to collect results across all batches.
— ForgeWorkflows Engineering
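The accumulation pattern described above can be simulated outside n8n with a plain object standing in for n8n's built-in $getWorkflowStaticData('global'); the "results" key is an illustrative choice.

```javascript
// Stand-in for n8n's $getWorkflowStaticData('global'); inside a Code node
// you would fetch this object from n8n instead of creating it here.
const staticData = { results: [] };

// Append each batch's items so no batch is silently dropped.
function accumulateBatch(store, batchItems) {
  store.results = (store.results || []).concat(batchItems);
  return store.results;
}

accumulateBatch(staticData, [1, 2, 3, 4, 5]);
accumulateBatch(staticData, [6, 7, 8, 9, 10]);
// staticData.results now holds all 10 records, not just the last batch of 5.
```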
Cost Breakdown
Daily 5-dimension PLG-to-enterprise upgrade scoring with automatic CRM deal creation for high-intent accounts and Slack notification of newly qualified organizations.
The primary operating cost for PostHog PLG-to-Enterprise Bridge is the per-execution LLM inference cost. Based on Independent Test Protocol (ITP) testing, the measured cost is: Cost per Run: $0.03–$0.10 per run. This figure includes all API calls across all agents in the pipeline — not just the primary reasoning step, but every classification, scoring, and output generation call.
To put this in context, consider the manual alternative. An operations analyst costs $50–75/hour at a fully loaded rate (salary, benefits, tools, overhead). If the manual version of this workflow takes 30–60 minutes per cycle, each execution costs roughly $25–75 in human labor. The blueprint executes the same pipeline for a fraction of that cost, with consistent quality and zero fatigue degradation.
Infrastructure costs are separate from per-execution LLM costs. You will need an n8n instance (self-hosted or cloud) and active accounts for the integrated services: beyond the ~$0.03–$0.10 LLM cost per daily run, budget for PostHog and Pipedrive subscriptions, which depend on your usage volume and plan tiers.
Quality assurance: Blueprint Quality Standard (BQS) audit result is 12/12 PASS. ITP result is 8/8 records, 14/14 milestones. These are not marketing claims — they are test results from structured inspection protocols that you can review in the product documentation.
All cost and performance figures are ITP-measured — tested against real data fixtures on n8n v2.7.5 in March 2026. See the product page for full test methodology.
Monthly projection: if you run this blueprint 100 times per month, multiply the per-execution cost by 100 and add your infrastructure costs. Most teams find the total is less than one hour of manual labor per month.
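Working through that projection with the measured per-run range, computed in integer cents to avoid floating-point drift:

```javascript
// Monthly LLM cost projection from runs per month and cost per run in cents.
function monthlyLlmCostUsd(runsPerMonth, costPerRunCents) {
  return (runsPerMonth * costPerRunCents) / 100;
}

monthlyLlmCostUsd(100, 3);  // low end:  $3/month
monthlyLlmCostUsd(100, 10); // high end: $10/month
```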
What's in the Bundle
Nine files.
When you purchase PostHog PLG-to-Enterprise Bridge, you receive a complete deployment bundle. This is not a SaaS subscription or a hosted service — it is a set of files that you own and run on your own infrastructure. Here is what is included:
- CHANGELOG.md — Version history
- README.md — Setup and configuration guide
- docs/TDD.md — Technical Design Document
- posthog_plg_to_enterprise_bridge_v1_0_0.json — n8n workflow (main pipeline)
- schemas/assembler_output.json — Assembler output schema
- schemas/fetcher_output.json — Fetcher output schema
- system_prompts/analyst_system_prompt.md — Analyst system prompt
- system_prompts/formatter_system_prompt.md — Formatter system prompt
- workflow/phpe_scheduler_v1_0_0.json — Scheduler workflow
Start with the README.md. It walks through the deployment process step by step, from importing the workflow JSON into n8n to configuring credentials and running your first test execution. The dependency matrix lists every required service, API key, and estimated cost so you know exactly what you need before you start.
Every file in the bundle is designed to be read, understood, and modified. There is no obfuscated code, no compiled binaries, and no phone-home telemetry. You get the source, you own the source, and you control the execution environment.
Who This Is For
PostHog PLG-to-Enterprise Bridge is built for Product, Sales, Growth teams that need to automate a specific workflow without building from scratch. If your team matches the following profile, this blueprint is designed for you:
- You operate in a product, sales, or growth function and handle the workflow this blueprint automates on a recurring basis
- You have (or are willing to set up) an n8n instance — self-hosted or cloud
- You have active accounts for the required integrations: PostHog account with organization data, Pipedrive account, Anthropic API key, Slack workspace (Bot Token with chat:write)
- You have API credentials available: Anthropic API, PostHog API Key, Pipedrive API Token, Slack (Bot Token, httpHeaderAuth Bearer)
- You are comfortable importing a workflow JSON and configuring API keys (the README guides you, but basic technical comfort is expected)
What this blueprint does NOT do:
- Does not contact accounts directly — it creates CRM deals for your sales team to action
- Does not modify PostHog data or feature flags — this is a read-only analysis tool
- Does not replace sales qualification — it provides usage-based signals that complement traditional qualification
- Does not work with non-PostHog analytics tools — this is PostHog-specific
- Does not guarantee upgrades — it identifies high-intent accounts that sales teams must engage
Review the dependency matrix and prerequisites before purchasing. If you are unsure whether your environment meets the requirements, contact support@forgeworkflows.com before buying.
All sales are final after download. Review the full dependency matrix, prerequisites, and integration requirements on the product page before purchasing. Questions? Contact support@forgeworkflows.com.
Edge cases to know about
Every pipeline has boundaries. These are intentional design decisions, not oversights — understanding them helps you deploy with the right expectations and plan for edge cases in your environment.
Does not contact accounts directly — it creates CRM deals for your sales team to action
This is intentional. We default to human-in-the-loop for actions that carry reputational or financial risk. Once your team has validated output accuracy over 20+ cycles, you can adjust the pipeline to auto-execute — the workflow JSON supports it, but the default is conservative.
Does not modify PostHog data or feature flags — this is a read-only analysis tool
We scoped this boundary after ITP testing revealed inconsistent results when the pipeline attempted this. The agents handle what they handle well — extending beyond this scope requires custom prompt engineering specific to your data shape.
Does not replace sales qualification — it provides usage-based signals that complement traditional qualification
This keeps the pipeline focused on a single workflow. Adding this capability would introduce branching logic that varies by organization, and the tradeoff between complexity and reliability was not worth it for a reusable blueprint. Fork the workflow JSON if your use case demands it.
The dead letter queue captures any records that fail processing. Check it after your first production run to validate data coverage.
Getting Started
Deployment follows a structured sequence. The PostHog PLG-to-Enterprise Bridge bundle is designed for the following tools: n8n, Anthropic API, PostHog, Pipedrive, Slack. Here is the recommended deployment path:
- Step 1: Import workflows and configure credentials. Import both workflow JSON files into n8n (main + scheduler). Configure PostHog API key (httpHeaderAuth), Pipedrive API token, Slack Bot Token (httpHeaderAuth with Bearer prefix, chat:write scope), and Anthropic API key following the README.
- Step 2: Configure scoring and CRM parameters. Set POSTHOG_PROJECT_ID, ICP_CRITERIA (company size, industry filters), UIS_THRESHOLD (default 7 for HIGH), PIPEDRIVE_PIPELINE_ID, PIPEDRIVE_STAGE_ID, and SLACK_CHANNEL in the scheduler Payload Builder node.
- Step 3: Activate scheduler and verify. Update the webhook URL in the scheduler to match your main workflow webhook path. Activate both workflows. Send a test POST with _is_itp: true and sample account data. Verify the deal appears in Pipedrive and the digest appears in Slack.
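The Step 3 test POST might look like the following. Only the _is_itp flag comes from the steps above; the webhook path, host, and account fields are placeholders, not the bundle's schema.

```javascript
// Illustrative Step 3 test payload.
const testPayload = {
  _is_itp: true,
  accounts: [{ org_id: "org_123", name: "Acme Inc", active_users: 14 }],
};

// The POST itself (not executed in this sketch):
// await fetch("https://YOUR_N8N_HOST/webhook/phpe-main", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(testPayload),
// });
```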
Before running the pipeline on live data, execute a manual test run with sample input. This validates that all credentials are configured correctly, all API endpoints are reachable, and the output format matches your expectations. The README includes test data examples for this purpose.
Once the test run passes, you can configure the trigger for production use (scheduled, webhook, or event-driven — depending on the blueprint design). Monitor the first few production runs to confirm the pipeline handles real-world data as expected, then let it run.
For technical background on how ForgeWorkflows blueprints are built and tested, see the Blueprint Quality Standard (BQS) methodology and the Inspection and Test Plan (ITP) framework. These documents describe the quality gates every blueprint passes before listing.
Ready to deploy? View the PostHog PLG-to-Enterprise Bridge product page for full specifications, pricing, and purchase.
Run a manual test with sample data before switching to production triggers. This catches credential misconfigurations and API endpoint issues before they affect real workflows.
Frequently Asked Questions
What are collaboration signals?
Multi-user activity within the same organization: shared dashboards, team workspace invites, multiple users accessing the same features, and API tokens created by different users. These signals indicate the account has outgrown individual use and may need enterprise features like SSO, audit logs, or team management.
How does Pipedrive deal creation work?
When an account scores UIS ≥7 (HIGH), the Formatter creates a Pipedrive deal via API in your configured pipeline and stage. Deal notes include the UIS breakdown, top expansion triggers, current usage metrics, and recommended next steps for the sales team. No duplicate deals — the workflow checks for existing deals before creation. Check the dependency matrix in the bundle for exact version requirements and credential setup steps.
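The duplicate check described here could be as simple as the following sketch; using org_id as the dedup key is an assumption, not the bundle's documented behavior.

```javascript
// Skip deal creation when one already exists for the organization.
function shouldCreateDeal(existingDeals, orgId) {
  return !existingDeals.some((deal) => deal.org_id === orgId);
}

shouldCreateDeal([{ org_id: "org_123" }], "org_123"); // false (deal exists)
shouldCreateDeal([], "org_456");                      // true
```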
Can I use HubSpot instead of Pipedrive?
This version is built for Pipedrive. The workflow pattern (PostHog scoring → CRM deal creation) can be adapted to HubSpot, but the Formatter node uses Pipedrive API endpoints. A HubSpot variant may be released separately. The README walks through configuration in under 10 minutes, including test data for validation.
Is there a refund policy?
All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.
What happens if PostHog event data is incomplete for a user?
The analysis agent handles missing events gracefully — users without activation events score lower on those dimensions but aren't excluded. The output includes a data_completeness flag per user so you can filter results by confidence level.
Related Blueprints
PostHog Feature Adoption Intelligence
AI-powered weekly feature adoption analysis — adoption rates, usage frequency, retention, growth velocity, and power user ratios scored across 5 dimensions with adoption curve classification.
PostHog Power User Identification Agent
AI-powered weekly power user behavioral analysis — fingerprints, aha moments, onboarding paths, segment profiles, and activation patterns that distinguish power from regular users.
Expansion Revenue Detector
AI monitors Stripe payment patterns, scores expansion potential across 5 signal categories, and routes upsell and at-risk briefs to Pipedrive automatically.