How Win/Loss Intelligence Agent Automates Sales Intelligence
The Problem
AI win/loss analysis that reconstructs deal timelines and generates intelligence briefs. That single sentence captures a workflow gap that costs sales, revops, and strategy teams hours every week. The manual process Win/Loss Intelligence Agent automates is familiar to anyone who has worked in a revenue organization: someone pulls data from HubSpot and Notion, copies it into a spreadsheet or CRM, applies a mental checklist, writes a summary, and routes it to the next person in the chain. Repeat for every record. Every day.
Three problems make this unsustainable at scale. First, the process does not scale. As volume grows, the human bottleneck becomes the constraint. Whether it is inbound leads, deal updates, or meeting prep, a person can only process a finite number of records before quality degrades. Second, the process is inconsistent. Different team members apply different criteria, use different formats, and make different judgment calls. There is no single standard of quality, and the output varies from person to person and day to day. Third, the process is slow. By the time a manual review is complete, the window for action may have already closed. Deals move, contacts change roles, and buying signals decay.
These are not theoretical concerns. They are the operational reality for sales, revops, and strategy teams handling sales intelligence and deal intelligence workflows. Every hour spent on manual data processing is an hour not spent on the work that actually moves the needle: building relationships, closing deals, and driving strategy.
This is the gap Win/Loss Intelligence Agent fills.
Teams typically spend 30-60 minutes per cycle on the manual version of this workflow. Win/Loss Intelligence Agent reduces that to seconds per execution, with consistent output quality every time.
What This Blueprint Does
Four Agents. Six Factors. Per-Deal Causal Intelligence.
Win/Loss Intelligence Agent is a 25-node n8n workflow with 4 specialized agents. Each agent handles a distinct phase of the pipeline, and the handoff between agents is deterministic — no ambiguous routing, no dropped records. The blueprint is designed so that each agent does one thing well, and the overall pipeline produces a consistent, auditable output on every run.
Here is what each agent does:
- Fetcher (HubSpot Webhook + Code): HubSpot deal closure webhook fires when a deal moves to Closed Won or Closed Lost (configurable via DEAL_STAGES_CLOSED_WON and DEAL_STAGES_CLOSED_LOST), or manual Webhook for on-demand analysis.
- Researcher (Tier 2 Classification + web_search): the analysis model with web_search gathers company and competitor context when INCLUDE_COMPETITOR_RESEARCH=true and competitor field is populated.
- Analyst (Tier 1 Reasoning): the primary reasoning model performs deep 6-factor causal win/loss analysis: product_fit, sales_execution, competitive_dynamics, pricing_value, champion_engagement, timing_market.
- Formatter (Tier 2 Classification): the analysis model generates two outputs: Notion intelligence brief page (executive summary, deal timeline, per-factor analysis with classification badges, lessons learned, recommendations) and HubSpot deal note (compact win/loss summary with key factors and Notion link).
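The handoff between agents can be pictured as a small structured record that each stage validates before consuming. The sketch below is illustrative only — the field names (dealId, outcome, evidence) are assumptions; the actual contract is defined by the prompt files in the bundle, not by this code.

```javascript
// Illustrative sketch of a per-deal handoff record (field names assumed).
const FACTORS = [
  "product_fit",
  "sales_execution",
  "competitive_dynamics",
  "pricing_value",
  "champion_engagement",
  "timing_market",
];

const CLASSIFICATIONS = ["MAJOR", "CONTRIBUTING", "NOT A FACTOR"];

const exampleBrief = {
  dealId: "123456789",        // HubSpot deal ID (illustrative)
  outcome: "closed_lost",
  factors: {
    pricing_value: {
      classification: "MAJOR",
      evidence: ["2024-03-02 email: budget reduced mid-cycle"],
    },
    product_fit: { classification: "NOT A FACTOR", evidence: [] },
    // ...the remaining four factors follow the same shape
  },
};

// A downstream agent can cheaply validate the contract before formatting:
function isValidFactorEntry(name, entry) {
  return FACTORS.includes(name) && CLASSIFICATIONS.includes(entry.classification);
}
```

This kind of check is what "every handoff is defined" means in practice: malformed entries are caught at the boundary instead of propagating into the final brief.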
When the pipeline completes, you get structured output that is ready to act on. The blueprint bundle includes everything needed to deploy, configure, and customize the workflow. Specifically, you receive:
- Production-ready 25-node n8n workflow — import and deploy
- HubSpot deal closure webhook fires automatically when deals close (won or lost)
- Manual Webhook for on-demand win/loss analysis with deal ID
- HubSpot Fetcher pulls deal record, contacts, and full activity timeline via API
- Conditional Researcher (the analysis model + web_search) for company and competitor context when INCLUDE_COMPETITOR_RESEARCH=true
- 6-factor causal analysis via the primary reasoning model: product_fit, sales_execution, competitive_dynamics, pricing_value, champion_engagement, timing_market
- Evidence-based factor classification: MAJOR, CONTRIBUTING, or NOT A FACTOR with citations from deal data
- Deal timeline reconstruction from HubSpot activity history
- Lessons learned and actionable recommendations per deal
- Notion intelligence brief page with executive summary, timeline, per-factor analysis, and recommendations
- HubSpot deal note with compact win/loss summary and Notion link
- DUAL-MODEL architecture: the primary reasoning model for deep causal analysis, the analysis model for research + formatting
- Configurable: deal stages, competitor research toggle, Notion database
- ITP-validated: 20/20 records, 14/14 milestones, $0.30-$0.60/deal measured cost
Every component is designed to be modified. The agent prompts are plain text files you can edit. The workflow nodes can be rearranged or extended. The scoring criteria, output formats, and routing logic are all exposed as configurable parameters — not buried in application code. This means Win/Loss Intelligence Agent adapts to your specific process, terminology, and integration requirements without forking the entire workflow.
Every agent prompt in the bundle is a standalone text file. You can customize scoring criteria, output formats, and routing logic without modifying the workflow JSON itself.
How the Pipeline Works
Understanding how the pipeline works helps you customize it for your environment and troubleshoot issues when they arise. Here is a step-by-step walkthrough of the Win/Loss Intelligence Agent execution flow.
Step 1: Fetcher
Tier: HubSpot Webhook + Code
HubSpot deal closure webhook fires when a deal moves to Closed Won or Closed Lost (configurable via DEAL_STAGES_CLOSED_WON and DEAL_STAGES_CLOSED_LOST), or manual Webhook for on-demand analysis. Config Loader reads INCLUDE_COMPETITOR_RESEARCH, NOTION_DATABASE_ID, and deal stage configurations. Fetcher pulls deal record, associated contacts, and activity timeline from HubSpot API.
This stage is critical because it ensures that downstream agents receive structured, validated input. Each agent in the pipeline trusts the output contract of the previous agent. If Fetcher identifies an issue — a missing field, a low-confidence score, or an unexpected input format — the pipeline handles it explicitly rather than passing garbage downstream. This is the difference between a prototype and a production-grade workflow: every handoff is defined, every edge case is documented.
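The stage gate in Step 1 amounts to a membership check against the two configured stage lists. Here is a minimal sketch, assuming the stage IDs arrive as comma-separated strings — the parsing details are an assumption, though the variable names match the document:

```javascript
// Sketch of the closure gate, assuming comma-separated stage ID lists
// from the Config Loader (stage IDs below are illustrative).
function classifyClosure(dealStage, config) {
  const won = config.DEAL_STAGES_CLOSED_WON.split(",").map((s) => s.trim());
  const lost = config.DEAL_STAGES_CLOSED_LOST.split(",").map((s) => s.trim());
  if (won.includes(dealStage)) return "closed_won";
  if (lost.includes(dealStage)) return "closed_lost";
  return null; // not a closure event -- the pipeline does not fire
}
```

The null branch is the important part: stage changes that are not closures are rejected at the trigger, so downstream agents never see an open deal.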
Step 2: Researcher
Tier: Tier 2 Classification + web_search
The analysis model with web_search gathers company and competitor context when INCLUDE_COMPETITOR_RESEARCH=true and the competitor field is populated. It researches company background, market position, competitive landscape, and recent news. When INCLUDE_COMPETITOR_RESEARCH=false, it passes deal data through without external research. Conditional execution minimizes cost on deals without competitor data.
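The conditional execution described above reduces to a simple boolean gate. A minimal sketch, assuming the config flag is stored as a string and the competitor field may be missing or blank (both assumptions about the workflow's internals):

```javascript
// Sketch of the Researcher gate (field names are illustrative).
function shouldRunCompetitorResearch(config, deal) {
  const enabled = String(config.INCLUDE_COMPETITOR_RESEARCH) === "true";
  const competitor = (deal.competitor || "").trim();
  return enabled && competitor.length > 0;
}
```

Because both conditions must hold, deals without competitor data skip web_search entirely, which is what keeps those runs at the low end of the cost range.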
Step 3: Analyst
Tier: Tier 1 Reasoning
The primary reasoning model performs deep 6-factor causal win/loss analysis: product_fit, sales_execution, competitive_dynamics, pricing_value, champion_engagement, timing_market. It reconstructs the deal timeline from activity data, classifies each factor as MAJOR, CONTRIBUTING, or NOT A FACTOR with evidence citations, and generates lessons learned and actionable recommendations for the sales team.
Step 4: Formatter
Tier: Tier 2 Classification
The analysis model generates two outputs: a Notion intelligence brief page (executive summary, deal timeline, per-factor analysis with classification badges, lessons learned, recommendations) and a HubSpot deal note (compact win/loss summary with key factors and Notion link). Both outputs are created via their respective APIs.
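On the Notion side, the Formatter's job is to translate the brief into the pages-endpoint payload shape: a parent database, page properties, and children blocks. The sketch below is a hedged illustration — the property name "Deal Name" and the brief fields are assumptions, not the workflow's actual schema, though the payload structure follows Notion's public pages API.

```javascript
// Hedged sketch of a Notion page-creation payload (schema assumed).
function buildBriefPayload(databaseId, brief) {
  return {
    parent: { database_id: databaseId },
    properties: {
      "Deal Name": { title: [{ type: "text", text: { content: brief.dealName } }] },
    },
    children: [
      {
        object: "block",
        type: "heading_2",
        heading_2: { rich_text: [{ type: "text", text: { content: "Executive Summary" } }] },
      },
      {
        object: "block",
        type: "paragraph",
        paragraph: { rich_text: [{ type: "text", text: { content: brief.summary } }] },
      },
    ],
  };
}
```

In the actual workflow this payload would be POSTed to the Notion API via the httpHeaderAuth credential; the real brief carries many more blocks (timeline, per-factor sections, recommendations) than this two-block sketch.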
The entire pipeline executes without manual intervention. From trigger to output, every decision point is deterministic: if a condition is met, the next agent fires; if not, the record is handled according to a documented fallback path. There are no silent failures. Every execution produces a traceable audit trail that you can review, export, or feed into your own reporting tools.
This architecture follows the ForgeWorkflows principle of tested, measured, documented automation. Every node in the pipeline has been validated during ITP (Inspection and Test Plan) testing, and the error handling matrix in the bundle documents the recovery path for each failure mode.
Tier references indicate the reasoning complexity assigned to each agent. Higher tiers use more capable models for tasks that require nuanced judgment, while lower tiers use efficient models for classification and routing tasks. This tiered approach optimizes both quality and cost.
Cost Breakdown
Every metric is ITP-measured. Win/Loss Intelligence Agent reconstructs deal timelines and performs 6-factor causal analysis on every closed deal — the primary reasoning model handles the deep analysis, the analysis model handles research and formatting, at a measured $0.30-$0.60/deal.
The primary operating cost for Win/Loss Intelligence Agent is the per-execution LLM inference cost. Based on ITP testing, the measured cost is: Cost per Deal: $0.30-$0.60/deal (ITP-measured). This figure includes all API calls across all agents in the pipeline — not just the primary reasoning step, but every classification, scoring, and output generation call.
To put this in context, consider the manual alternative. A skilled team member performing the same work manually costs $50–75/hour at a fully loaded rate (salary, benefits, tools, overhead). If the manual version of this workflow takes 30–60 minutes per cycle, that is roughly $25–75 per execution in human labor. The blueprint executes the same pipeline for a fraction of that cost, with consistent quality and zero fatigue degradation.
Infrastructure costs are separate from per-execution LLM costs. You will need an n8n instance (self-hosted or cloud) and active accounts for the integrated services. The estimated monthly infrastructure cost is $6-$12/month (20 deals/month) + HubSpot/Notion included tiers, depending on your usage volume and plan tiers.
Quality assurance: BQS audit result is 12/12 PASS. ITP result is 20/20 records, 14/14 milestones PASS. These are not marketing claims — they are test results from structured inspection protocols that you can review in the product documentation.
Monthly projection: if you run this blueprint 100 times per month, multiply the per-execution cost by 100 and add your infrastructure costs. Most teams find the total is less than one hour of manual labor per month.
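The projection is simple arithmetic; here it is made concrete using the document's measured per-deal range and the upper infrastructure estimate (the 100-run volume is illustrative, and note the $6-$12 infrastructure figure above was quoted at 20 deals/month):

```javascript
// Worked monthly projection: runs x per-deal cost, plus infrastructure.
function monthlyCost(runs, perRunLow, perRunHigh, infra) {
  const round = (x) => Math.round(x * 100) / 100; // round to cents
  return {
    low: round(runs * perRunLow + infra),
    high: round(runs * perRunHigh + infra),
  };
}

const projection = monthlyCost(100, 0.30, 0.60, 12);
// projection.low = 42, projection.high = 72  (dollars per month)
```

At 100 deals/month the total lands between $42 and $72 — in the neighborhood of a single fully loaded labor hour at the rates cited above.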
What's in the Bundle
7 files — workflow JSON, 3 system prompts, TDD, and complete documentation.
When you purchase Win/Loss Intelligence Agent, you receive a complete deployment bundle. This is not a SaaS subscription or a hosted service — it is a set of files that you own and run on your own infrastructure. Here is what is included:
- win_loss_intelligence_agent_v1_0_0.json — The 25-node n8n workflow
- README.md — 10-minute setup guide with HubSpot, Notion, and Anthropic configuration
- docs/TDD.md — Technical Design Document with 6-factor win/loss taxonomy and DUAL-MODEL pattern
- system_prompts/researcher_system_prompt.md — Researcher prompt (company/competitor context research, web_search CoT)
- system_prompts/analyst_system_prompt.md — Analyst prompt (6-factor causal analysis, timeline reconstruction, evidence-based classification)
- system_prompts/formatter_system_prompt.md — Formatter prompt (Notion intelligence brief blocks, HubSpot deal note)
- CHANGELOG.md — Version history
Start with the README.md. It walks through the deployment process step by step, from importing the workflow JSON into n8n to configuring credentials and running your first test execution. The dependency matrix lists every required service, API key, and estimated cost so you know exactly what you need before you start.
Every file in the bundle is designed to be read, understood, and modified. There is no obfuscated code, no compiled binaries, and no phone-home telemetry. You get the source, you own the source, and you control the execution environment.
Who This Is For
Win/Loss Intelligence Agent is built for sales, revops, and strategy teams that need to automate a specific workflow without building from scratch. If your team matches the following profile, this blueprint is designed for you:
- You operate in a sales, revops, or strategy function and handle the workflow this blueprint automates on a recurring basis
- You have (or are willing to set up) an n8n instance — self-hosted or cloud
- You have active accounts and API credentials for the required integrations: HubSpot (OAuth2 with deals, contacts, and engagements scopes), Notion (integration token, supplied as an httpHeaderAuth credential with Bearer prefix), and an Anthropic API key (~$0.30-$0.60/deal in usage)
- You are comfortable importing a workflow JSON and configuring API keys (the README guides you, but basic technical comfort is expected)
What this blueprint does NOT do:
- Does not re-activate lost deals — that is what Lost Deal Re-Activation Agent does
- Does not generate competitive battle cards — that is what Deal Competitor Tracker does
- Does not aggregate metrics across quarters — that is what Quarterly Business Review Generator does
- Does not modify HubSpot deal data or pipeline stages — read-only analysis with Notion and HubSpot note output
- Does not scrape websites unconditionally — web_search only when INCLUDE_COMPETITOR_RESEARCH=true and competitor field populated
- Does not analyze active deals — fires only on deal closure events (Closed Won or Closed Lost)
All sales are final after download. Review the full dependency matrix, prerequisites, and integration requirements on the product page before purchasing. If you are unsure whether your environment meets the requirements, contact support@forgeworkflows.com before buying.
Getting Started
Deployment follows a structured sequence. The Win/Loss Intelligence Agent bundle is designed for the following tools: n8n, Anthropic API, HubSpot, Notion. Here is the recommended deployment path:
- Step 1: Import workflow and configure credentials. Import win_loss_intelligence_agent_v1_0_0.json into n8n. Configure HubSpot OAuth2 credential (deals, contacts, engagements scopes), Notion httpHeaderAuth credential (Bearer token), and Anthropic API key following the README.
- Step 2: Configure deal stages and research settings. Set DEAL_STAGES_CLOSED_WON and DEAL_STAGES_CLOSED_LOST to match your HubSpot pipeline stage IDs. Configure NOTION_DATABASE_ID for intelligence brief storage. Optionally set INCLUDE_COMPETITOR_RESEARCH=true for deals with competitor fields populated.
- Step 3: Activate and verify. Enable the workflow in n8n. Close a test deal in HubSpot (or trigger via manual Webhook with a deal ID). Verify the Notion intelligence brief page appears with 6-factor analysis and the HubSpot deal note contains a compact summary with Notion link.
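For Step 3's manual trigger, the request is a plain JSON POST to the workflow's Webhook URL. A hedged sketch — the payload key dealId and the URL are assumptions; use the path and field name configured in your imported Webhook node:

```javascript
// Build a manual test-trigger request for the Webhook node
// (URL and payload key are illustrative assumptions).
function buildTriggerRequest(webhookUrl, dealId) {
  return {
    url: webhookUrl,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ dealId }),
    },
  };
}

// Send with Node 18+ built-in fetch:
//   const req = buildTriggerRequest(url, "987654321");
//   const res = await fetch(req.url, req.options);
```

Keeping the request construction separate from the send makes it easy to inspect exactly what the workflow will receive before firing a live test.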
Before running the pipeline on live data, execute a manual test run with sample input. This validates that all credentials are configured correctly, all API endpoints are reachable, and the output format matches your expectations. The README includes test data examples for this purpose.
Once the test run passes, you can configure the trigger for production use (scheduled, webhook, or event-driven — depending on the blueprint design). Monitor the first few production runs to confirm the pipeline handles real-world data as expected, then let it run.
For technical background on how ForgeWorkflows blueprints are built and tested, see the Blueprint Quality Standard (BQS) methodology and the Inspection and Test Plan (ITP) framework. These documents describe the quality gates every blueprint passes before listing.
Ready to deploy? View the Win/Loss Intelligence Agent product page for full specifications, pricing, and purchase.
Run a manual test with sample data before switching to production triggers. This catches credential misconfigurations and API endpoint issues before they affect real workflows.
Frequently Asked Questions
How does it differ from Lost Deal Re-Activation Agent?
Different timing and purpose. WLIA analyzes WHY deals were won or lost at the moment of closure — 6-factor causal analysis with timeline reconstruction. LDRA monitors lost deals 30-90 days later for condition changes that create re-engagement opportunities. WLIA produces an intelligence brief for strategic learning; LDRA produces a re-opening email for deal recovery. Together they close the loop: WLIA explains deal outcomes, LDRA acts on those insights.
What are the six win/loss factors?
product_fit — how well the product matched the prospect's requirements and use case. sales_execution — quality of the sales process, responsiveness, demo quality, follow-up cadence. competitive_dynamics — impact of competitors on the deal outcome. pricing_value — price point perception relative to perceived value and budget. champion_engagement — strength and involvement of internal champions and sponsors. timing_market — external timing factors, budget cycles, market conditions.
What is the factor classification system?
Each of the 6 factors is classified as MAJOR (primary driver of the deal outcome), CONTRIBUTING (played a meaningful role but was not the primary driver), or NOT A FACTOR (did not materially influence the outcome). Classifications are evidence-based — the Analyst cites specific activities, emails, meetings, and deal events from the HubSpot timeline to support each classification.
Why does the Analyst use Opus instead of Sonnet?
The Analyst performs deep causal reasoning: reconstructing deal timelines from activity data, identifying causal relationships between sales actions and deal outcomes, classifying 6 factors with evidence citations, and generating strategic lessons and recommendations. This requires the reasoning depth of Opus 4.6. The Researcher and Formatter use Sonnet 4.6 because research gathering and output formatting are classification-level tasks.
Does it analyze both won and lost deals?
Yes. The webhook fires on both Closed Won and Closed Lost stages (configurable via DEAL_STAGES_CLOSED_WON and DEAL_STAGES_CLOSED_LOST). Won deals reveal what worked and should be replicated. Lost deals reveal what failed and should be corrected. Both produce intelligence briefs with the same 6-factor taxonomy.
When does it use web_search?
Only when INCLUDE_COMPETITOR_RESEARCH=true AND the deal has a competitor field populated. The Researcher uses Anthropic web_search to gather company background, competitive landscape, and recent news to enrich the Analyst's context. When disabled or when no competitor is identified, the pipeline skips web research and analyzes based on HubSpot data alone. This keeps costs at the lower end ($0.30/deal) for deals without competitor research.
How does it relate to Deal Competitor Tracker?
Complementary products covering different timing in the deal lifecycle. DCT generates battle cards with talk tracks and objection handlers during active deals when a competitor is identified. WLIA analyzes competitive dynamics as one of 6 win/loss factors after the deal closes. DCT is tactical preparation; WLIA is strategic post-mortem. Together they bookend competitive intelligence across the entire deal lifecycle.
What outputs are generated?
Two outputs per deal: (1) Notion intelligence brief page with executive summary, reconstructed deal timeline, per-factor analysis with MAJOR/CONTRIBUTING/NOT A FACTOR badges, lessons learned, and actionable recommendations. (2) HubSpot deal note with compact win/loss summary highlighting key factors and a link to the full Notion brief.
Is there a refund policy?
All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.