How Customer LTV Intelligence Automates Revenue Operations
The Problem
Your Stripe account has a few hundred active subscriptions. A dozen customers failed a payment this month. Three have quietly downgraded. Which ones are at risk of churning, and which just had a card expire? Answering that manually means checking Stripe, cross-referencing Notion and Slack, and making a judgment call on each account. At 15 minutes per account, that is 30–60 minutes per cycle of triage before any follow-up happens.
The cost is not just time: it is revenue leakage. Accounts churn because signals were missed. Portfolio reviews rely on data that was accurate two days ago. Segmentation criteria drift between team members, and the CRM becomes a lagging indicator rather than an operational tool. Customer LTV Intelligence automates the revenue operations and customer intelligence workflow from data extraction through analysis to structured output, with zero manual CRM entry.
Teams typically spend 30–60 minutes per cycle on the manual version of this workflow. Customer LTV Intelligence reduces that to seconds per execution, with consistent output quality and zero CRM data entry.
What This Blueprint Does
Four Agents. Five LTV Segments. Monthly Portfolio Intelligence.
The Customer LTV Intelligence pipeline runs 4 agents in sequence. Fetcher pulls raw financial data from Stripe, and Formatter delivers the output to Notion and Slack. Here is what happens at each stage and why it matters.
- Fetcher (Schedule + Code): Schedule Trigger fires monthly (1st of month, 08:00 UTC), or a manual Webhook triggers on-demand runs. Paginates the Stripe API across customers, subscriptions, invoices, and charges over a configurable 12-month lookback window.
- Assembler (Code-only): Computes 13 per-customer LTV metrics from raw Stripe data: MRR current, MRR previous, MRR delta, plan tier movement, payment success rate, failed payment count, refund count, dispute count, subscription age, invoice frequency, total lifetime revenue, and churn flag.
- Analyst (Tier 2 Classification): The analysis model (Sonnet 4.6) receives ONE aggregate call with the entire customer base and segments every customer into the five LTV categories with per-customer reasoning.
- Formatter (Tier 2 Classification): The analysis model generates three outputs: (1) a Notion monthly LTV report with 7 sections — Executive Summary, Segment Distribution, MRR Trends, At-Risk Customers, Growth Opportunities, Cohort Analysis, Month-over-Month Comparison; (2) a Slack executive summary via Block Kit; (3) Slack at-risk alerts — one per at-risk customer with MRR, signals, and recommended action.
When the pipeline completes, you get structured output that is ready to act on. The blueprint bundle includes everything needed to deploy, configure, and customize the workflow:
- ITP-tested 28-node n8n workflow — import and deploy
- Monthly Schedule Trigger (1st of month, 08:00 UTC) or manual Webhook for on-demand runs
- Stripe API pagination across customers, subscriptions, invoices, and charges
- 13 per-customer LTV metrics: MRR current/previous/delta, plan tier movement, payment success rate, failed payments, refunds, disputes, subscription age, invoice frequency, lifetime revenue, churn flag
- LTV 5-segment taxonomy: growing, stable, declining, at_risk, churned
- Per-customer reasoning citing specific Stripe metrics
- Executive summary: total MRR, segment distribution, MRR delta, top growers, top at-risk
- Notion monthly LTV report with 7 sections (Executive Summary through Month-over-Month Comparison)
- Slack executive summary (Block Kit) + at-risk customer alerts with recommended actions
- AGGREGATE architecture: single Analyst + Formatter calls — $0.08/run regardless of customer count
- Dual Sonnet 4.6 architecture: both LLM stages run on the analysis model, with no heavier reasoning model required
- ITP 8 variations, 14/14 milestones, $0.08/run measured
Scoring thresholds, output destinations, and CRM field mappings are configurable in the system prompts — no workflow JSON edits required. This means Customer LTV Intelligence adapts to your specific process, terminology, and integration requirements without forking the entire workflow.
Every agent prompt is a standalone text file. Customize scoring thresholds, qualification criteria, and output formatting without touching the workflow JSON.
How the Pipeline Works
Understanding how the pipeline works helps you customize it for your environment and troubleshoot issues when they arise. Here is a step-by-step walkthrough of the Customer LTV Intelligence execution flow.
Step 1: Fetcher
Tier: Schedule + Code
The pipeline starts here. Schedule Trigger fires monthly (1st of month, 08:00 UTC) or manual Webhook for on-demand runs. Fetcher paginates Stripe API across four endpoints — customers, subscriptions, invoices, and charges — over a configurable 12-month lookback window. Assembles raw financial data per customer for downstream metric computation.
This stage ensures all downstream agents receive clean, validated input. If this step returns incomplete data, every downstream agent works with a degraded picture.
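Stripe's list endpoints page with a `starting_after` cursor and a `has_more` flag. The sketch below shows that pattern; it is illustrative, not the exact code inside the shipped n8n Code node.

```javascript
// Minimal sketch of Stripe cursor pagination. The API key is assumed
// to come from your n8n credential store, not hard-coded.
async function fetchAll(endpoint, apiKey) {
  const results = [];
  let startingAfter = null;
  while (true) {
    const params = new URLSearchParams({ limit: '100' });
    if (startingAfter) params.set('starting_after', startingAfter);
    const res = await fetch(`https://api.stripe.com/v1/${endpoint}?${params}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const page = await res.json();
    results.push(...page.data);
    if (!page.has_more) break;                            // last page reached
    startingAfter = page.data[page.data.length - 1].id;   // cursor = last object id
  }
  return results;
}
```

The same loop runs once per endpoint (customers, subscriptions, invoices, charges), which is why an incomplete page here degrades everything downstream.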
Step 2: Assembler
Tier: Code-only
Computes 13 per-customer LTV metrics from raw Stripe data: MRR current, MRR previous, MRR delta, plan tier movement, payment success rate, failed payment count, refund count, dispute count, subscription age, invoice frequency, total lifetime revenue, and churn flag. Produces a unified customer portfolio payload for the Analyst.
Why this step matters: deterministic code computes every metric, so the Analyst reasons over clean numbers instead of raw API payloads. This is what keeps the downstream LLM calls small, cheap, and consistent.
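A simplified sketch of how a few of these metrics fall out of raw Stripe objects. Field names are assumptions for illustration, not the shipped Code node's exact data shape.

```javascript
// Illustrative subset of the Assembler's per-customer metrics.
// `charges`, `subscription`, and `mrrHistory` are assumed inputs.
function computeMetrics(customer) {
  const { charges = [], subscription = {}, mrrHistory = [] } = customer;
  const succeeded = charges.filter((c) => c.status === 'succeeded').length;
  const failed = charges.filter((c) => c.status === 'failed').length;
  const mrrCurrent = mrrHistory.at(-1) ?? 0;   // most recent month
  const mrrPrevious = mrrHistory.at(-2) ?? 0;  // month before that
  return {
    mrrCurrent,
    mrrPrevious,
    mrrDelta: mrrCurrent - mrrPrevious,
    paymentSuccessRate: charges.length ? succeeded / charges.length : 1,
    failedPaymentCount: failed,
    churned: subscription.status === 'canceled' || mrrCurrent === 0,
  };
}
```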
Step 3: Analyst
Tier: Tier 2 Classification
The analysis model (Sonnet 4.6) receives ONE aggregate call with the entire customer base. It segments all customers into 5 LTV categories: growing, stable, declining, at_risk, churned, with per-customer reasoning citing specific metrics. It then generates an executive summary: total MRR, segment distribution, MRR delta month-over-month, top growers, top at-risk, notable movements. Optional cohort analysis groups customers by signup month for retention and revenue trends.
Every field in the output is structured for the next agent to consume without parsing.
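As an illustration of the five-segment taxonomy, the decision boundaries can be approximated deterministically. The actual Analyst is an LLM call that weighs the same signals with per-customer reasoning, so the thresholds below are assumptions, not the shipped logic.

```javascript
// Deterministic approximation of the 5-segment LTV taxonomy.
// Thresholds are illustrative; the Analyst prompt lets the model
// weigh these signals case by case.
function segment(m) {
  if (m.churned || m.mrrCurrent === 0) return 'churned';
  if (m.failedPaymentCount > 0 || m.disputeCount > 0 ||
      m.paymentSuccessRate < 0.9) return 'at_risk';
  if (m.mrrDelta < 0) return 'declining';
  if (m.mrrDelta > 0) return 'growing';
  return 'stable';
}
```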
Step 4: Formatter
Tier: Tier 2 Classification
This is the final deliverable — what lands in your inbox or dashboard. The analysis model generates three outputs: (1) a Notion monthly LTV report with 7 sections — Executive Summary, Segment Distribution, MRR Trends, At-Risk Customers, Growth Opportunities, Cohort Analysis, Month-over-Month Comparison; (2) a Slack executive summary via Block Kit; (3) Slack at-risk alerts — one per at-risk customer with MRR, signals, and recommended action.
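For reference, a Slack Block Kit at-risk alert of the kind described above could be assembled like this. The channel name and field labels are illustrative assumptions, not the shipped payload.

```javascript
// Sketch of one at-risk alert as a Slack Block Kit payload.
// '#revops-alerts' is a placeholder for your configured channel.
function atRiskAlert(c) {
  return {
    channel: '#revops-alerts',
    blocks: [
      {
        type: 'header',
        text: { type: 'plain_text', text: `At risk: ${c.name}` },
      },
      {
        type: 'section',
        text: {
          type: 'mrkdwn',
          text: `*MRR:* $${c.mrr}\n*Signals:* ${c.signals.join(', ')}\n*Recommended action:* ${c.action}`,
        },
      },
    ],
  };
}
```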
The entire pipeline executes without manual intervention. From trigger to output, every decision point follows a documented path. Every execution produces a traceable audit trail.
All nodes have been validated during Independent Test Protocol (ITP) testing on n8n v2.7.5. The error handling matrix in the bundle documents the recovery path for each failure mode.
This blueprint runs on your own n8n instance with your own API keys. Your CRM data never leaves your infrastructure.
Why we designed it this way
A ghost contact with 524 days inactive crashed the pipeline because output exceeded the token limit. Every field was null, the model tried to explain why each was missing, and the response ballooned past the buffer. Fix: always set max_tokens to 2x expected output and validate response completeness.
— ForgeWorkflows Engineering
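The lesson above can be sketched as a pair of guards. The `stop_reason` field follows the Anthropic Messages API response shape, and the 2x multiplier is the rule quoted in the note.

```javascript
// 2x headroom per the engineering note above.
function maxTokensFor(expectedOutputTokens) {
  return expectedOutputTokens * 2;
}

// A truncated Anthropic response reports stop_reason "max_tokens"
// instead of "end_turn"; treat that as a hard failure, not partial data.
function assertComplete(response) {
  if (response.stop_reason === 'max_tokens') {
    throw new Error('LLM output truncated: raise max_tokens or trim input');
  }
  return response;
}
```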
Cost Breakdown
Every metric is ITP-measured. The Customer LTV Intelligence agent analyzes your entire Stripe customer base monthly — computing 13 per-customer LTV metrics, segmenting into 5 categories with per-customer reasoning, and delivering a 7-section Notion report with Slack executive summary and at-risk alerts at $0.08/run.
The primary operating cost for Customer LTV Intelligence is the per-execution LLM inference cost. Based on Independent Test Protocol (ITP) testing, the measured cost is: Cost per Run: $0.08/run (ITP-measured average). This figure includes all API calls across all agents in the pipeline — not just the primary reasoning step, but every classification, scoring, and output generation call.
To put this in context, consider the manual alternative. A skilled team member performing the same work manually costs $50–75/hour for a sales ops analyst at a fully loaded rate (salary, benefits, tools, overhead). If the manual version of this workflow takes 30–60 minutes per cycle, the per-execution cost in human labor is significant. The blueprint executes the same pipeline for a fraction of that cost, with consistent quality and zero fatigue degradation.
Infrastructure costs are separate from per-execution LLM costs. You will need an n8n instance (self-hosted or cloud) and active accounts for the integrated services. At the default monthly cadence, LLM costs come to roughly $0.08/month, and the Stripe, Notion, and Slack free tiers are typically sufficient; your n8n hosting and any paid plan tiers are additional, depending on usage volume.
Quality assurance: Blueprint Quality Standard (BQS) audit result is 12/12 PASS. ITP result is 8 variations, 14/14 milestones PASS. These are not marketing claims — they are test results from structured inspection protocols that you can review in the product documentation.
All cost and performance figures are ITP-measured — tested against real data fixtures on n8n v2.7.5 in March 2026. See the product page for full test methodology.
Monthly projection: at the default cadence of one run per month, LLM costs total under $1 per year. Even with frequent on-demand webhook runs layered on top, most teams find the total is less than one hour of manual labor per month.
What's in the Bundle
6 files — workflow JSON, two system prompts, and complete documentation.
When you purchase Customer LTV Intelligence, you receive a complete deployment bundle. This is not a SaaS subscription or a hosted service — it is a set of files that you own and run on your own infrastructure. Here is what is included:
- CHANGELOG.md — Version history
- README.md — Setup and configuration guide
- TDD.md — Technical Design Document
- customer_ltv_intelligence_v1.0.0.json — n8n workflow (main pipeline)
- system_prompts/analyst_system_prompt.md — Analyst system prompt
- system_prompts/formatter_system_prompt.md — Formatter system prompt
Start with the README.md. It walks through the deployment process step by step, from importing the workflow JSON into n8n to configuring credentials and running your first test execution. The dependency matrix lists every required service, API key, and estimated cost so you know exactly what you need before you start.
Every file in the bundle is designed to be read, understood, and modified. There is no obfuscated code, no compiled binaries, and no phone-home telemetry. You get the source, you own the source, and you control the execution environment.
Who This Is For
Customer LTV Intelligence is built for RevOps and Customer Success teams that need to automate a specific workflow without building from scratch. If your team matches the following profile, this blueprint is designed for you:
- You operate in a revops or customer success function and handle the workflow this blueprint automates on a recurring basis
- You have (or are willing to set up) an n8n instance — self-hosted or cloud
- You have active accounts for the required integrations: Stripe account (with subscription data), Notion workspace (API access), Slack workspace (Bot Token with chat:write scope)
- You have API credentials available: Anthropic API, Stripe (configured in n8n), Notion (httpHeaderAuth), Slack (Bot Token, httpHeaderAuth)
- You are comfortable importing a workflow JSON and configuring API keys (the README guides you, but basic technical comfort is expected)
What this blueprint does NOT do:
- Does not monitor individual payment events in real-time — that is what Expansion Revenue Detector does
- Does not recover failed payments — that is what Stripe Dunning Intelligence does
- Does not monitor CRM engagement health — that is what Account Health Intelligence Agent does
- Does not modify Stripe subscriptions or cancel accounts — read-only analysis
- Does not scrape external websites — all data from Stripe API
- Does not predict churn with external signals — uses Stripe financial data only
All sales are final after download, so review the full dependency matrix, prerequisites, and integration requirements on the product page before purchasing. If you are unsure whether your environment meets the requirements, contact support@forgeworkflows.com before buying.
Edge cases to know about
Every pipeline has boundaries. These are intentional design decisions, not oversights — understanding them helps you deploy with the right expectations and plan for edge cases in your environment.
Does not monitor individual payment events in real-time — that is what Expansion Revenue Detector does
This is intentional. The pipeline is built for monthly, portfolio-level analysis; real-time per-event monitoring requires a different trigger architecture and per-event scoring. Keeping the two cadences in separate blueprints keeps each one simple, cheap, and reliable; Expansion Revenue Detector covers the real-time side.
Does not recover failed payments — that is what Stripe Dunning Intelligence does
This boundary keeps the pipeline read-only. Recovering failed payments requires write access to Stripe and customer-facing dunning messaging, which carries a different risk profile from analysis. Stripe Dunning Intelligence owns that workflow; this blueprint never modifies subscriptions or contacts customers.
Does not monitor CRM engagement health — that is what Account Health Intelligence Agent does
This keeps the pipeline focused on a single workflow. Adding this capability would introduce branching logic that varies by organization, and the tradeoff between complexity and reliability was not worth it for a reusable blueprint. Fork the workflow JSON if your use case demands it.
Review the error handling matrix in the bundle for the full list of documented failure modes and recovery paths.
Getting Started
Deployment follows a structured sequence. The Customer LTV Intelligence bundle is designed for the following tools: n8n, Anthropic API, Stripe, Notion, Slack. Here is the recommended deployment path:
- Step 1: Import workflow and configure credentials. Import customer_ltv_intelligence_v1.0.0.json into n8n. Configure the Stripe credential, Anthropic API key, Notion httpHeaderAuth credential (Bearer token), and Slack httpHeaderAuth credential (Bearer token) following the README.
- Step 2: Configure schedule and lookback window. The Schedule Trigger defaults to monthly (1st of month, 08:00 UTC). Adjust the cron expression to match your reporting cadence. Configure the lookback window (default 12 months) and Slack channel for executive summary and at-risk alerts.
- Step 3: Activate and verify. Enable the workflow in n8n. Trigger a manual run via the Webhook URL. Verify the Notion monthly LTV report is created with all 7 sections, Slack executive summary posts to the configured channel, and at-risk customer alerts fire for customers with declining MRR or payment issues.
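For reference, the default schedule and two common variants in standard five-field cron syntax (verify the exact format your n8n version's Schedule Trigger expects; some versions accept a leading seconds field):

```
0 8 1 * *     # default: 08:00 UTC on the 1st of every month
0 8 * * 1     # weekly variant: 08:00 UTC every Monday
0 8 1 */3 *   # quarterly variant: 1st of Jan, Apr, Jul, Oct
```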
Before running the pipeline on live data, execute a manual test run with sample input. This validates that all credentials are configured correctly, all API endpoints are reachable, and the output format matches your expectations. The README includes test data examples for this purpose.
Once the test run passes, you can configure the trigger for production use (scheduled, webhook, or event-driven — depending on the blueprint design). Monitor the first few production runs to confirm the pipeline handles real-world data as expected, then let it run.
For technical background on how ForgeWorkflows blueprints are built and tested, see the Blueprint Quality Standard (BQS) methodology and the Inspection and Test Plan (ITP) framework. These documents describe the quality gates every blueprint passes before listing.
Ready to deploy? View the Customer LTV Intelligence product page for full specifications, pricing, and purchase.
Run a manual test with sample data before switching to production triggers. This catches credential misconfigurations and API endpoint issues before they affect real workflows.
Frequently Asked Questions
How does it differ from Expansion Revenue Detector?
Complementary products covering different cadences and scopes. ERD monitors individual invoice.paid events in real-time and scores per-event expansion potential. CLTVI analyzes your entire customer base monthly — computing MRR trends, segment distribution, and portfolio-level intelligence. ERD catches expansion signals as they happen; CLTVI provides the monthly strategic view.
What are the five LTV segments?
Growing — MRR increasing, plan upgrades, strong payment reliability. Stable — consistent MRR, no plan changes, reliable payments. Declining — MRR decreasing, plan downgrades, reduced activity. At Risk — failed payments, churn signals, dispute activity, MRR below threshold. Churned — cancelled subscription, zero MRR.
What does the Notion report contain?
Seven sections: Executive Summary (total MRR, customer count, segment breakdown), Segment Distribution (counts and percentages per LTV category), MRR Trends (month-over-month delta, growth rate), At-Risk Customers (individual profiles with signals and recommended actions), Growth Opportunities (top growers with expansion reasoning), Cohort Analysis (retention and revenue trends by signup month), and Month-over-Month Comparison (key metric deltas).
Why is it so cheap at $0.08/run?
AGGREGATE architecture. Instead of making one LLM call per customer (which would cost $0.02×N), the entire customer base is assembled by code-only nodes and sent to the Analyst in a single call. The Formatter also receives one call. Two Sonnet 4.6 calls total regardless of customer count. 12 monthly runs = $0.96/year in LLM costs.
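The arithmetic behind the flat cost, using the figures quoted above:

```javascript
// Per-customer calls scale linearly with portfolio size; the
// AGGREGATE design stays flat at two calls per run.
const PER_CUSTOMER_CALL = 0.02; // illustrative per-call cost from above
const AGGREGATE_RUN = 0.08;     // ITP-measured flat cost per run

const perCustomerCost = (n) => n * PER_CUSTOMER_CALL;
const yearlyAggregate = (runsPerMonth) => runsPerMonth * 12 * AGGREGATE_RUN;
```

At 200 customers, a per-customer design would cost $4.00 per run; the aggregate design stays at $0.08 per run, or $0.96 per year on the monthly schedule.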
How many customers can it handle?
The Analyst receives the full customer portfolio in one call. Sonnet 4.6 handles context windows up to 200K tokens. Practical limit depends on your customer count and data density — typical SaaS with 50–500 customers fits comfortably. For very large bases (1000+), the Assembler can be configured to focus on active subscribers only.
Does it use web scraping?
No. All data comes from the Stripe API: customer records, subscription details, invoice history, charge data, and payment metadata. No web_search, no external data sources, no scraping. This makes the pipeline fast, reliable, and deterministic.
How does it differ from Account Health Intelligence Agent?
Different data sources and signals. AHIA monitors HubSpot engagement health (deals, tickets, engagements) per account. CLTVI monitors Stripe financial health (MRR, payments, subscriptions) across your portfolio. AHIA tells you who is disengaged; CLTVI tells you who is financially declining. Use both for complete customer health coverage.
Is there a refund policy?
All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.
What should I do if the pipeline dead-letters a record?
Check the dead letter output for the failure reason — the error context includes which agent failed and why. Common causes: missing input fields, API rate limits, or malformed data. Fix the underlying issue and reprocess. The error handling matrix in the bundle documents every failure mode and its recovery path.
Related Blueprints
Expansion Revenue Detector
AI monitors Stripe payment patterns, scores expansion potential across 5 signal categories, and routes upsell and at-risk briefs to Pipedrive automatically.
Stripe Dunning Intelligence
Intelligent payment recovery with AI-personalized dunning.
Account Health Intelligence Agent
Weekly AI health briefs for every account.