How Deal Competitor Tracker Automates Competitive Intelligence
The Problem
Your sales team has 47 deals in the proposal stage. Twelve have had no contact in 5+ days. Three have gone completely dark. Which ones are at risk, and which ones just have a slow procurement process? A rep answering this question manually checks Pipedrive, Notion, and Slack, cross-references email history, and makes a judgment call on each deal. Several minutes per deal adds up to 30–60 minutes per cycle of triage before any follow-up happens.
The cost is not just time — it is revenue leakage. Deals slip because signals were missed. Pipeline reviews rely on data that was accurate two days ago. Scoring criteria drift between team members, and the CRM becomes a lagging indicator rather than an operational tool. Deal Competitor Tracker automates the competitive intelligence and deal intelligence workflow from data extraction through analysis to structured output, with zero manual CRM entry.
Teams typically spend 30–60 minutes per cycle on the manual version of this workflow. Deal Competitor Tracker reduces that to seconds per execution, with consistent output quality and zero CRM data entry.
What This Blueprint Does
Four Agents. Six Dimensions. Per-Deal Battle Cards.
The Deal Competitor Tracker pipeline runs four agents in sequence: Fetcher pulls deal context from Pipedrive, Researcher and Analyst build the battle card, and Formatter delivers the output to Notion, Slack, and Pipedrive. Here is what happens at each stage and why it matters.
- Fetcher (Pipedrive Trigger + Code): runs when the Pipedrive Trigger fires on the competitor field being set or updated on a deal, or when the manual Webhook is called for on-demand analysis.
- Researcher (Tier 2 Classification + web_search): the analysis model with web_search researches the named competitor across 6 dimensions: pricing comparison, feature gaps, market positioning, recent momentum, known weaknesses, and switching barriers.
- Analyst (Tier 1 Reasoning): the primary reasoning model generates a detailed battle card: executive summary, per-dimension analysis with CTL scores (1-10), aggregate CTL classification (HIGH/MEDIUM/LOW), 3-5 talk tracks tied to research evidence, objection handlers with acknowledge-bridge-differentiate pattern, win themes, and competitive landmines.
- Formatter (Tier 2 Classification): the analysis model generates three outputs: Notion battle card page (executive summary callout, per-dimension sections with CTL badges, talk tracks, toggle-block objection handlers, win themes, landmines), Slack Block Kit alert (aggregate CTL + top risks + primary talk track), and Pipedrive deal note (compact battle card summary with Notion link).
When the pipeline completes, you get structured output that is ready to act on. The blueprint bundle includes everything needed to deploy, configure, and customize the workflow:
- ITP-tested 24-node n8n workflow — import and deploy
- Pipedrive Trigger fires per-event when competitor field is set/updated on a deal
- Manual Webhook for on-demand competitor analysis with deal ID
- 6-dimension competitive research via the analysis model + web_search (pricing, features, positioning, momentum, weaknesses, switching barriers)
- Detailed battle cards via the primary reasoning model with CTL scoring (1-10), talk tracks, objection handlers, win themes, and landmines
- Notion battle card page with executive summary, per-dimension CTL sections, and toggle-block objection handlers
- Slack Block Kit alert with aggregate CTL score and top competitive risks
- Pipedrive deal note with compact battle card summary and Notion link
- Configurable: competitor field name, pricing comparison toggle, Notion database, Slack channel
- DUAL-MODEL architecture: the analysis model for research + formatting, the primary reasoning model for battle card reasoning
- ITP 20 records, 14/14 milestones, $0.48/deal measured
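The manual Webhook trigger can be exercised with a short script. A minimal sketch, assuming a hypothetical webhook URL (the real path is defined in the imported workflow JSON) and a payload of deal ID plus competitor name:

```python
import json
from urllib import request

# Hypothetical URL -- the actual webhook path comes from the imported workflow JSON.
N8N_WEBHOOK_URL = "https://your-n8n.example.com/webhook/deal-competitor-tracker"

def build_trigger_payload(deal_id: int, competitor: str) -> dict:
    """Build the on-demand analysis payload: deal ID plus competitor name."""
    if not competitor.strip():
        raise ValueError("competitor name must be non-empty")
    return {"deal_id": deal_id, "competitor": competitor.strip()}

def trigger_analysis(deal_id: int, competitor: str) -> None:
    """POST the payload to the manual Webhook node to start a battle card run."""
    body = json.dumps(build_trigger_payload(deal_id, competitor)).encode()
    req = request.Request(
        N8N_WEBHOOK_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    request.urlopen(req)  # the pipeline runs asynchronously after this returns
```

Treat `deal_id` and `competitor` as placeholder field names to adjust after import; the Webhook node in the workflow JSON defines the exact contract.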
Scoring thresholds, output destinations, and CRM field mappings are configurable in the system prompts — no workflow JSON edits required. This means Deal Competitor Tracker adapts to your specific process, terminology, and integration requirements without forking the entire workflow.
Every agent prompt is a standalone text file. Customize scoring thresholds, qualification criteria, and output formatting without touching the workflow JSON.
How the Pipeline Works
Understanding how the pipeline works helps you customize it for your environment and troubleshoot issues when they arise. Here is a step-by-step walkthrough of the Deal Competitor Tracker execution flow.
Step 1: Fetcher
Tier: Pipedrive Trigger + Code
The pipeline starts here: either the Pipedrive Trigger fires when the competitor field is set or updated on a deal, or the manual Webhook is called for on-demand analysis. Config Loader reads COMPETITOR_FIELD_NAME, INCLUDE_PRICING_COMPARISON, NOTION_DATABASE_ID, and SLACK_CHANNEL. Fetcher extracts deal context: deal ID, title, value, stage, organization, contact, and competitor name from the configured custom field.
This stage ensures all downstream agents receive clean, validated input. If this step returns incomplete data, every downstream agent works with a degraded picture.
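A guard along these lines makes that "degraded picture" failure mode explicit. A sketch, assuming the field names shown; the actual schema is defined by the Fetcher Code node in the workflow JSON.

```python
# Assumed field names -- the Fetcher Code node in the workflow defines the real schema.
REQUIRED_FIELDS = ("deal_id", "title", "value", "stage", "org", "contact", "competitor")

def validate_deal_context(ctx: dict) -> list:
    """Return the missing or empty fields so the caller can fail fast
    instead of passing a degraded picture to downstream agents."""
    missing = []
    for field in REQUIRED_FIELDS:
        value = ctx.get(field)
        if value is None or (isinstance(value, str) and not value.strip()):
            missing.append(field)
    return missing
```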
Step 2: Researcher
Tier: Tier 2 Classification + web_search
The analysis model with web_search researches the named competitor across 6 dimensions: pricing comparison, feature gaps, market positioning, recent momentum, known weaknesses, and switching barriers. Each dimension gets preliminary CTL scoring (1-10) with source URLs. The INCLUDE_PRICING_COMPARISON toggle excludes the pricing dimension when set to false.
Why this step matters: every claim on the battle card traces back to this research. Per-dimension scores come with source URLs, so the Analyst works from verifiable evidence rather than a generic competitor summary.
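The dimension set and the pricing toggle reduce to a few lines. A sketch using the dimension keys from the FAQ (`pricing_comparison`, `feature_gap`, `market_positioning`, `recent_momentum`, `known_weaknesses`, `switching_barriers`); the real behavior lives in the Researcher system prompt and the Config Loader.

```python
ALL_DIMENSIONS = [
    "pricing_comparison", "feature_gap", "market_positioning",
    "recent_momentum", "known_weaknesses", "switching_barriers",
]

def active_dimensions(include_pricing_comparison: bool) -> list:
    """With INCLUDE_PRICING_COMPARISON set to false, pricing is skipped
    and only the remaining 5 dimensions are researched and scored."""
    if include_pricing_comparison:
        return list(ALL_DIMENSIONS)
    return [d for d in ALL_DIMENSIONS if d != "pricing_comparison"]
```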
Step 3: Analyst
Tier: Tier 1 Reasoning
The primary reasoning model generates a detailed battle card: executive summary, per-dimension analysis with CTL scores (1-10), aggregate CTL classification (HIGH/MEDIUM/LOW), 3-5 talk tracks tied to research evidence, objection handlers with the acknowledge-bridge-differentiate pattern, win themes, and competitive landmines, all contextualized to the specific deal stage and value.
This is where the pipeline applies judgment — not just data retrieval, but analysis.
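The CTL bands the battle card uses (HIGH 8-10, MEDIUM 5-7, LOW 1-4) reduce to a small classifier. A sketch, assuming the aggregate is a simple mean of the six dimension scores; the exact aggregation rule is defined in the Analyst system prompt.

```python
def classify_ctl(score: float) -> str:
    """Map a 1-10 CTL score to the HIGH/MEDIUM/LOW bands used on the battle card."""
    if score >= 8:
        return "HIGH"
    if score >= 5:
        return "MEDIUM"
    return "LOW"

def aggregate_ctl(dimension_scores: list) -> str:
    """Assumed aggregation: mean of the per-dimension scores, then band lookup."""
    return classify_ctl(sum(dimension_scores) / len(dimension_scores))
```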
Step 4: Formatter
Tier: Tier 2 Classification
This is the final deliverable: what lands in Notion, Slack, and Pipedrive. The analysis model generates three outputs: Notion battle card page (executive summary callout, per-dimension sections with CTL badges, talk tracks, toggle-block objection handlers, win themes, landmines), Slack Block Kit alert (aggregate CTL + top risks + primary talk track), and Pipedrive deal note (compact battle card summary with Notion link).
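For reference, the Slack alert follows the standard Block Kit JSON shape. A simplified sketch of the kind of payload the Formatter assembles; the actual block layout is controlled by the Formatter system prompt.

```python
def build_slack_alert(deal_title: str, competitor: str, aggregate_ctl: str,
                      top_risks: list, primary_talk_track: str) -> dict:
    """Assemble a Slack Block Kit payload: aggregate CTL, top risks, primary talk track."""
    risk_lines = "\n".join(f"- {risk}" for risk in top_risks)
    return {
        "blocks": [
            {"type": "header",
             "text": {"type": "plain_text",
                      "text": f"Competitor alert: {competitor} on {deal_title}"}},
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": f"*Aggregate CTL:* {aggregate_ctl}\n*Top risks:*\n{risk_lines}"}},
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": f"*Primary talk track:* {primary_talk_track}"}},
        ]
    }
```

A payload like this is what the Slack Bot Token (chat:write scope) posts to the configured SLACK_CHANNEL.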
The entire pipeline executes without manual intervention. From trigger to output, every decision point follows a documented path. Every execution produces a traceable audit trail.
All nodes have been validated during Independent Test Protocol (ITP) testing on n8n v2.7.5. The error handling matrix in the bundle documents the recovery path for each failure mode.
This blueprint runs on your own n8n instance with your own API keys. Your CRM data never leaves your infrastructure.
Why we designed it this way
We never grep for API keys in the filesystem. We never search shell history for tokens. Every credential lives in n8n's encrypted credential store, accessed by credential name — not by value. If a credential is missing, the blueprint tells you which credential name to create. It never tells you to paste a key into a code node.
— ForgeWorkflows Engineering
Cost Breakdown
Every metric is ITP-measured. The Deal Competitor Tracker generates battle cards with talk tracks and objection handlers when a competitor is identified on a deal — 6-dimension analysis via the analysis model + web_search, battle card reasoning via the primary reasoning model at $0.48/deal.
The primary operating cost for Deal Competitor Tracker is the per-execution LLM inference cost. Based on Independent Test Protocol (ITP) testing, the measured cost is: Cost per Deal: $0.48/deal (ITP-measured). This figure includes all API calls across all agents in the pipeline — not just the primary reasoning step, but every classification, scoring, and output generation call.
To put this in context, consider the manual alternative. A skilled team member performing the same work manually costs $50–75/hour for a sales ops analyst at a fully loaded rate (salary, benefits, tools, overhead). If the manual version of this workflow takes 30–60 minutes per cycle, the per-execution cost in human labor is significant. The blueprint executes the same pipeline for a fraction of that cost, with consistent quality and zero fatigue degradation.
Infrastructure costs are separate from per-execution LLM costs. You will need an n8n instance (self-hosted or cloud) and active accounts for the integrated services. At 20 deals/month, LLM costs total roughly $9.60/month; Pipedrive, Notion, and Slack typically fit within their included plan tiers, though your total depends on usage volume and plan levels.
Quality assurance: Blueprint Quality Standard (BQS) audit result is 12/12 PASS. ITP result is 20 records, 14/14 milestones PASS. These are not marketing claims — they are test results from structured inspection protocols that you can review in the product documentation.
All cost and performance figures are ITP-measured — tested against real data fixtures on n8n v2.7.5 in March 2026. See the product page for full test methodology.
Monthly projection: if you run this blueprint 100 times per month, multiply the per-execution cost by 100 and add your infrastructure costs. Most teams find the total is less than one hour of manual labor per month.
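The projection above is simple arithmetic on the ITP-measured per-deal figure:

```python
COST_PER_DEAL = 0.48  # ITP-measured; includes every LLM call across all agents

def monthly_llm_cost(deals_per_month: int) -> float:
    """Per-execution LLM cost scaled to a month; n8n hosting is extra."""
    return round(deals_per_month * COST_PER_DEAL, 2)
```

At the 20-deal volume used in the infrastructure estimate above this gives $9.60/month; at 100 deals it is $48.00/month before hosting.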
What's in the Bundle
7 files — workflow JSON, 3 system prompts, TDD, and complete documentation.
When you purchase Deal Competitor Tracker, you receive a complete deployment bundle. This is not a SaaS subscription or a hosted service — it is a set of files that you own and run on your own infrastructure. Here is what is included:
- CHANGELOG.md — Version history
- README.md — Setup and configuration guide
- deal_competitor_tracker_v1.0.0.json — n8n workflow (main pipeline)
- docs/TDD.md — Technical Design Document
- system_prompts/analyst_system_prompt.md — Analyst system prompt
- system_prompts/formatter_system_prompt.md — Formatter system prompt
- system_prompts/researcher_system_prompt.md — Researcher system prompt
Start with the README.md. It walks through the deployment process step by step, from importing the workflow JSON into n8n to configuring credentials and running your first test execution. The dependency matrix lists every required service, API key, and estimated cost so you know exactly what you need before you start.
Every file in the bundle is designed to be read, understood, and modified. There is no obfuscated code, no compiled binaries, and no phone-home telemetry. You get the source, you own the source, and you control the execution environment.
Who This Is For
Deal Competitor Tracker is built for Sales, RevOps, and Strategy teams that need to automate a specific workflow without building from scratch. If your team matches the following profile, this blueprint is designed for you:
- You operate in a sales, RevOps, or strategy function and handle the workflow this blueprint automates on a recurring basis
- You have (or are willing to set up) an n8n instance — self-hosted or cloud
- You have active accounts and API credentials for the required integrations: an Anthropic API key, a Pipedrive account (pipedriveApi credential, plus a custom competitor field on deals), a Notion workspace (integration token via httpHeaderAuth with Bearer prefix), and a Slack workspace (Bot Token via httpHeaderAuth with Bearer prefix and chat:write scope)
- You are comfortable importing a workflow JSON and configuring API keys (the README guides you, but basic technical comfort is expected)
What this blueprint does NOT do:
- Does not monitor competitor pricing trends over time — that is what Competitive Pricing Intelligence does
- Does not diagnose stalled deals — that is what Deal Stall Diagnoser does
- Does not score deal health signals — that is what Deal Intelligence Agent does
- Does not modify your Pipedrive deal fields or competitor data — the only Pipedrive write-back is the battle card deal note; all other output goes to Notion and Slack
- Does not batch multiple competitors per deal — generates one battle card per competitor per deal event
- Does not filter competitors by CTL score — all competitors receive full battle cards regardless of threat level
Review the dependency matrix and prerequisites before purchasing. If you are unsure whether your environment meets the requirements, contact support@forgeworkflows.com before buying.
All sales are final after download. Review the full dependency matrix, prerequisites, and integration requirements on the product page before purchasing. Questions? Contact support@forgeworkflows.com.
Edge cases to know about
Every pipeline has boundaries. These are intentional design decisions, not oversights — understanding them helps you deploy with the right expectations and plan for edge cases in your environment.
Does not monitor competitor pricing trends over time — that is what Competitive Pricing Intelligence does
This is intentional. Trend monitoring requires scheduled, multi-competitor runs with historical data, while this pipeline is event-driven and per-deal: one battle card per competitor per deal event. Competitive Pricing Intelligence is built for the weekly trend-monitoring workflow.
Does not diagnose stalled deals — that is what Deal Stall Diagnoser does
We scoped this boundary after ITP testing revealed inconsistent results when the pipeline attempted this. The agents handle what they handle well — extending beyond this scope requires custom prompt engineering specific to your data shape.
Does not score deal health signals — that is what Deal Intelligence Agent does
This keeps the pipeline focused on a single workflow. Adding this capability would introduce branching logic that varies by organization, and the tradeoff between complexity and reliability was not worth it for a reusable blueprint. Fork the workflow JSON if your use case demands it.
Review the error handling matrix in the bundle for the full list of documented failure modes and recovery paths.
Getting Started
Deployment follows a structured sequence. The Deal Competitor Tracker bundle is designed for the following tools: n8n, Anthropic API, Pipedrive, Notion, Slack. Here is the recommended deployment path:
- Step 1: Import workflow and configure credentials. Import deal_competitor_tracker_v1.0.0.json into n8n. Configure Pipedrive pipedriveApi credential, Notion httpHeaderAuth credential (Bearer token), Slack httpHeaderAuth credential (Bearer token with chat:write scope), and Anthropic API key following the README.
- Step 2: Set up Pipedrive custom field and configure variables. Create a "competitor" text custom field on Pipedrive deals (or use your existing competitor field). Set COMPETITOR_FIELD_NAME to match. Configure NOTION_DATABASE_ID and SLACK_CHANNEL for output destinations. Optionally set INCLUDE_PRICING_COMPARISON to false to exclude pricing analysis.
- Step 3: Activate and verify. Enable the workflow in n8n. Set a competitor name on a test deal in Pipedrive to trigger battle card generation. Verify the Notion battle card page appears with 6-dimension CTL scoring, the Slack channel receives the CTL alert, and a deal note appears on the Pipedrive deal.
Before running the pipeline on live data, execute a manual test run with sample input. This validates that all credentials are configured correctly, all API endpoints are reachable, and the output format matches your expectations. The README includes test data examples for this purpose.
Once the test run passes, you can configure the trigger for production use (scheduled, webhook, or event-driven — depending on the blueprint design). Monitor the first few production runs to confirm the pipeline handles real-world data as expected, then let it run.
For technical background on how ForgeWorkflows blueprints are built and tested, see the Blueprint Quality Standard (BQS) methodology and the Inspection and Test Plan (ITP) framework. These documents describe the quality gates every blueprint passes before listing.
Ready to deploy? View the Deal Competitor Tracker product page for full specifications, pricing, and purchase.
Run a manual test with sample data before switching to production triggers. This catches credential misconfigurations and API endpoint issues before they affect real workflows.
Frequently Asked Questions
How does it differ from Competitive Pricing Intelligence?
Complementary products covering different competitive intelligence needs. CPI monitors competitor pricing trends weekly across multiple competitors — pricing signals, significance scoring, strategic recommendations. DCT generates per-deal battle cards for a specific competitor — 6-dimension CTL scoring, talk tracks, objection handlers, win themes. CPI feeds your pricing/strategy team with market-wide intelligence; DCT feeds your sales reps with tactical per-deal competitive preparation.
What are the six battle card dimensions?
pricing_comparison — how competitor pricing compares to yours. feature_gap — features competitor has or lacks relative to deal requirements. market_positioning — target segments, messaging, analyst coverage. recent_momentum — funding rounds, customer wins, product launches, hiring velocity from last 90 days. known_weaknesses — customer complaints from review sites, churn signals, documented product gaps. switching_barriers — data migration complexity, contract lock-in patterns, integration dependencies.
What is the CTL score?
Competitive Threat Level (CTL) is scored 1-10 per dimension and as an aggregate. HIGH (8-10) indicates a strong competitive threat requiring immediate preparation. MEDIUM (5-7) indicates moderate competition with exploitable weaknesses. LOW (1-4) indicates minimal threat. CTL is informational — all competitors receive full battle cards regardless of score.
Why does the Analyst use Opus instead of Sonnet?
The Analyst generates detailed battle cards requiring strategic reasoning: synthesizing 6 dimensions of research into actionable talk tracks, anticipating objections based on competitor strengths, formulating win themes, and identifying competitive landmines. This requires the reasoning depth of Opus 4.6. The Researcher uses Sonnet 4.6 because per-dimension research is a classification task, not a strategy task.
What triggers a battle card?
Two trigger modes: (1) Pipedrive Trigger fires automatically when the competitor custom field on a deal is set or updated. This means battle cards are generated the moment a sales rep identifies a competitor. (2) Manual Webhook for on-demand analysis — send a POST request with deal ID and competitor name for instant battle card generation. The ITP test results in the bundle show measured performance across edge cases, not just happy-path data.
Can I exclude pricing comparison from the analysis?
Yes. Set INCLUDE_PRICING_COMPARISON to false in the Config Loader. When disabled, the pricing_comparison dimension returns null and only 5 dimensions are scored. Useful when competitor pricing data is sensitive, unreliable, or not relevant to the deal.
What outputs are generated?
Three outputs for every competitor: (1) Notion battle card page with executive summary, per-dimension CTL sections, talk tracks, toggle-block objection handlers, win themes, and landmines. (2) Slack Block Kit alert with aggregate CTL, top risks, and primary talk track. (3) Pipedrive deal note with compact summary and link to the full Notion battle card. Check the dependency matrix in the bundle for exact version requirements and credential setup steps.
Does it use web scraping?
It uses web search rather than page scraping. The Researcher uses the Anthropic web_search tool to investigate competitor pricing pages, feature directories, review sites (G2, Capterra), news, and migration documentation. Research quality depends on competitor web presence and Anthropic web_search coverage. Competitors with limited online presence produce lower-confidence CTL scores.
Is there a refund policy?
All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.
What should I do if the pipeline dead-letters a record?
Check the dead letter output for the failure reason — the error context includes which agent failed and why. Common causes: missing input fields, API rate limits, or malformed data. Fix the underlying issue and reprocess. The error handling matrix in the bundle documents every failure mode and its recovery path.