Product Guide · Mar 16, 2026 · 14 min read

How Deal Competitor Tracker Automates Competitive Intelligence

The Problem

AI battle cards for every competitive deal — triggered from your CRM. That single sentence captures a workflow gap that costs sales, RevOps, and strategy teams hours every week. The manual process Deal Competitor Tracker automates is familiar to anyone who has worked in a revenue organization: someone pulls data from Pipedrive, Notion, and Slack, copies it into a spreadsheet or CRM, applies a mental checklist, writes a summary, and routes it to the next person in the chain. Repeat for every record. Every day.

Three problems make this unsustainable at scale. First, the process does not scale. As volume grows, the human bottleneck becomes the constraint. Whether it is inbound leads, deal updates, or meeting prep, a person can only process a finite number of records before quality degrades. Second, the process is inconsistent. Different team members apply different criteria, use different formats, and make different judgment calls. There is no single standard of quality, and the output varies from person to person and day to day. Third, the process is slow. By the time a manual review is complete, the window for action may have already closed. Deals move, contacts change roles, and buying signals decay.

These are not theoretical concerns. They are the operational reality for sales, RevOps, and strategy teams handling competitive intelligence and deal intelligence workflows. Every hour spent on manual data processing is an hour not spent on the work that actually moves the needle: building relationships, closing deals, and driving strategy.

This is the gap Deal Competitor Tracker fills.

INFO

Teams typically spend 30-60 minutes per cycle on the manual version of this workflow. Deal Competitor Tracker reduces that to seconds per execution, with consistent output quality every time.

What This Blueprint Does

Four Agents. Six Dimensions. Per-Deal Battle Cards.

Deal Competitor Tracker is a 24-node n8n workflow with four specialized agents. Each agent handles a distinct phase of the pipeline, and the handoff between agents is deterministic — no ambiguous routing, no dropped records. The blueprint is designed so that each agent does one thing well, and the overall pipeline produces a consistent, auditable output on every run.

Here is what each agent does:

  • Fetcher (Pipedrive Trigger + Code): The Pipedrive Trigger fires when the competitor field is set or updated on a deal; a manual Webhook covers on-demand analysis.
  • Researcher (Tier 2 Classification + web_search): The analysis model uses web_search to research the named competitor across 6 dimensions: pricing comparison, feature gaps, market positioning, recent momentum, known weaknesses, and switching barriers.
  • Analyst (Tier 1 Reasoning): The primary reasoning model generates a comprehensive battle card: executive summary, per-dimension analysis with CTL scores (1-10), aggregate CTL classification (HIGH/MEDIUM/LOW), 3-5 talk tracks tied to research evidence, objection handlers following an acknowledge-bridge-differentiate pattern, win themes, and competitive landmines.
  • Formatter (Tier 2 Classification): The analysis model generates three outputs: a Notion battle card page (executive summary callout, per-dimension sections with CTL badges, talk tracks, toggle-block objection handlers, win themes, landmines), a Slack Block Kit alert (aggregate CTL + top risks + primary talk track), and a Pipedrive deal note (compact battle card summary with Notion link).

When the pipeline completes, you get structured output that is ready to act on. The blueprint bundle includes everything needed to deploy, configure, and customize the workflow. Specifically, you receive:

  • Production-ready 24-node n8n workflow — import and deploy
  • Pipedrive Trigger fires per-event when competitor field is set/updated on a deal
  • Manual Webhook for on-demand competitor analysis with deal ID
  • 6-dimension competitive research via the analysis model + web_search (pricing, features, positioning, momentum, weaknesses, switching barriers)
  • Comprehensive battle cards via the primary reasoning model with CTL scoring (1-10), talk tracks, objection handlers, win themes, and landmines
  • Notion battle card page with executive summary, per-dimension CTL sections, and toggle-block objection handlers
  • Slack Block Kit alert with aggregate CTL score and top competitive risks
  • Pipedrive deal note with compact battle card summary and Notion link
  • Configurable: competitor field name, pricing comparison toggle, Notion database, Slack channel
  • DUAL-MODEL architecture: the analysis model for research + formatting, the primary reasoning model for battle card reasoning
  • ITP-tested: 20 records processed, 14/14 milestones passed, $0.48/deal measured

Every component is designed to be modified. The agent prompts are plain text files you can edit. The workflow nodes can be rearranged or extended. The scoring criteria, output formats, and routing logic are all exposed as configurable parameters — not buried in application code. This means Deal Competitor Tracker adapts to your specific process, terminology, and integration requirements without forking the entire workflow.

TIP

Every agent prompt in the bundle is a standalone text file. You can customize scoring criteria, output formats, and routing logic without modifying the workflow JSON itself.

How the Pipeline Works

Understanding how the pipeline works helps you customize it for your environment and troubleshoot issues when they arise. Here is a step-by-step walkthrough of the Deal Competitor Tracker execution flow.

Step 1: Fetcher

Tier: Pipedrive Trigger + Code

The Pipedrive Trigger fires when the competitor field is set or updated on a deal; a manual Webhook covers on-demand analysis. The Config Loader reads COMPETITOR_FIELD_NAME, INCLUDE_PRICING_COMPARISON, NOTION_DATABASE_ID, and SLACK_CHANNEL. The Fetcher then extracts deal context: deal ID, title, value, stage, organization, contact, and competitor name from the configured custom field.

This stage is critical because it ensures that downstream agents receive structured, validated input. Each agent in the pipeline trusts the output contract of the previous agent. If Fetcher identifies an issue — a missing field, a low-confidence score, or an unexpected input format — the pipeline handles it explicitly rather than passing garbage downstream. This is the difference between a prototype and a production-grade workflow: every handoff is defined, every edge case is documented.
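The Fetcher's fail-fast contract can be sketched as a small function. This is an illustrative sketch, not the shipped Code node: the payload shape and the flat custom-field key are assumptions (Pipedrive actually delivers custom fields under hashed keys), but it shows the "never pass garbage downstream" behavior described above.

```python
def extract_deal_context(event: dict, competitor_field_key: str) -> dict:
    """Pull the fields downstream agents need from a Pipedrive-style
    webhook payload. Raises instead of passing garbage downstream."""
    deal = event.get("current") or {}
    competitor = deal.get(competitor_field_key)
    if not competitor:
        # Explicit halt: no competitor named, no battle card to build.
        raise ValueError("competitor field is empty; halting pipeline")
    return {
        "deal_id": deal["id"],
        "title": deal.get("title", ""),
        "value": deal.get("value", 0),
        "stage_id": deal.get("stage_id"),
        "org_name": deal.get("org_name", ""),
        "competitor": competitor,
    }
```

Downstream agents can then trust that every record they receive has a deal ID and a non-empty competitor name.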

Step 2: Researcher

Tier: Tier 2 Classification + web_search

The analysis model uses web_search to research the named competitor across 6 dimensions: pricing comparison, feature gaps, market positioning, recent momentum, known weaknesses, and switching barriers. Each dimension gets a preliminary CTL score (1-10) with source URLs. The INCLUDE_PRICING_COMPARISON toggle excludes the pricing dimension when set to false.
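The pricing toggle amounts to a filter over the dimension list. A minimal sketch (the shipped workflow applies this inside the Researcher's prompt assembly, so the exact mechanism may differ):

```python
# The six battle card dimensions, using the identifiers from the FAQ.
ALL_DIMENSIONS = [
    "pricing_comparison",
    "feature_gap",
    "market_positioning",
    "recent_momentum",
    "known_weaknesses",
    "switching_barriers",
]

def active_dimensions(include_pricing: bool) -> list[str]:
    """Return the dimensions the Researcher should score.
    With pricing excluded, only five dimensions remain."""
    if include_pricing:
        return list(ALL_DIMENSIONS)
    return [d for d in ALL_DIMENSIONS if d != "pricing_comparison"]
```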


Step 3: Analyst

Tier: Tier 1 Reasoning

The primary reasoning model generates a comprehensive battle card: executive summary, per-dimension analysis with CTL scores (1-10), aggregate CTL classification (HIGH/MEDIUM/LOW), 3-5 talk tracks tied to research evidence, objection handlers following the acknowledge-bridge-differentiate pattern, win themes, and competitive landmines. All of it is contextualized to the specific deal stage and value.
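The CTL bands (HIGH 8-10, MEDIUM 5-7, LOW 1-4, per the FAQ) map onto a simple classifier. The mean-based aggregation below is an assumption for illustration; the shipped Analyst prompt may weight dimensions differently:

```python
def classify_ctl(score: float) -> str:
    """Map a 1-10 CTL score onto the blueprint's threat bands."""
    if score >= 8:
        return "HIGH"
    if score >= 5:
        return "MEDIUM"
    return "LOW"

def aggregate_ctl(dimension_scores: dict[str, int]) -> tuple[float, str]:
    """One possible aggregation: the mean of per-dimension scores,
    skipping dimensions that were toggled off (scored as None)."""
    scores = [s for s in dimension_scores.values() if s is not None]
    avg = round(sum(scores) / len(scores), 1)
    return avg, classify_ctl(avg)
```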


Step 4: Formatter

Tier: Tier 2 Classification

The analysis model generates three outputs: a Notion battle card page (executive summary callout, per-dimension sections with CTL badges, talk tracks, toggle-block objection handlers, win themes, landmines), a Slack Block Kit alert (aggregate CTL + top risks + primary talk track), and a Pipedrive deal note (compact battle card summary with Notion link).
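A minimal sketch of the Slack alert payload, assembled with standard Block Kit block types. The exact layout in the bundle may differ; this only shows the shape of a valid payload carrying the aggregate CTL, top risks, and primary talk track:

```python
def build_ctl_alert(competitor: str, ctl_label: str, ctl_score: float,
                    top_risks: list[str], talk_track: str,
                    notion_url: str) -> dict:
    """Build a Slack Block Kit payload for the CTL alert."""
    risk_lines = "\n".join(f"• {r}" for r in top_risks)
    return {
        "blocks": [
            {"type": "header",
             "text": {"type": "plain_text",
                      "text": f"Battle card: {competitor}"}},
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": f"*Aggregate CTL:* {ctl_label} "
                              f"({ctl_score}/10)\n{risk_lines}"}},
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": f"*Primary talk track:* {talk_track}"}},
            {"type": "context",
             "elements": [{"type": "mrkdwn",
                           "text": f"<{notion_url}|Full battle card in Notion>"}]},
        ]
    }
```

The dict serializes directly to the JSON body of a `chat.postMessage` call.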


The entire pipeline executes without manual intervention. From trigger to output, every decision point is deterministic: if a condition is met, the next agent fires; if not, the record is handled according to a documented fallback path. There are no silent failures. Every execution produces a traceable audit trail that you can review, export, or feed into your own reporting tools.

This architecture follows the ForgeWorkflows principle of tested, measured, documented automation. Every node in the pipeline has been validated during ITP (Inspection and Test Plan) testing, and the error handling matrix in the bundle documents the recovery path for each failure mode.

INFO

Tier references indicate the reasoning complexity assigned to each agent. Higher tiers use more capable models for tasks that require nuanced judgment, while lower tiers use efficient models for classification and routing tasks. This tiered approach optimizes both quality and cost.

Cost Breakdown

Every metric is ITP-measured. The Deal Competitor Tracker generates battle cards with talk tracks and objection handlers when a competitor is identified on a deal — 6-dimension analysis via the analysis model + web_search, battle card reasoning via the primary reasoning model at $0.48/deal.

The primary operating cost for Deal Competitor Tracker is the per-execution LLM inference cost. Based on ITP testing, the measured cost is: Cost per Deal: $0.48/deal (ITP-measured). This figure includes all API calls across all agents in the pipeline — not just the primary reasoning step, but every classification, scoring, and output generation call.

To put this in context, consider the manual alternative. A skilled team member performing the same work manually costs $50–75/hour at a fully loaded rate (salary, benefits, tools, overhead). If the manual version of this workflow takes 30–60 minutes per cycle, that is $25–75 per execution in human labor. The blueprint executes the same pipeline for a fraction of that cost, with consistent quality and zero fatigue degradation.

Infrastructure costs are separate from per-execution LLM costs. You will need an n8n instance (self-hosted or cloud) and active accounts for the integrated services. At 20 deals/month, estimated LLM spend is $9.60/month; Pipedrive, Notion, and Slack typically fit within their included plan tiers. Actual totals depend on your usage volume and plans.

Quality assurance: BQS audit result is 12/12 PASS. ITP result is 20 records, 14/14 milestones PASS. These are not marketing claims — they are test results from structured inspection protocols that you can review in the product documentation.

TIP

Monthly projection: if you run this blueprint 100 times per month, multiply the per-execution cost by 100 and add your infrastructure costs. Most teams find the total is less than one hour of manual labor per month.
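The projection is simple arithmetic; a sketch:

```python
def monthly_cost(runs_per_month: int,
                 cost_per_run: float = 0.48,
                 infra_per_month: float = 0.0) -> float:
    """Per-execution LLM cost times volume, plus fixed infrastructure."""
    return round(runs_per_month * cost_per_run + infra_per_month, 2)
```

`monthly_cost(20)` reproduces the $9.60/month figure quoted above; `monthly_cost(100)` gives the TIP's 100-run projection ($48.00) before infrastructure.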

What's in the Bundle

7 files — workflow JSON, 3 system prompts, TDD, and complete documentation.

When you purchase Deal Competitor Tracker, you receive a complete deployment bundle. This is not a SaaS subscription or a hosted service — it is a set of files that you own and run on your own infrastructure. Here is what is included:

  • deal_competitor_tracker_v1.0.0.json — The 24-node n8n workflow
  • README.md — 10-minute setup guide with Pipedrive, Notion, Slack, and Anthropic configuration
  • docs/TDD.md — Technical Design Document with 6-dimension CTL taxonomy and DUAL-MODEL pattern
  • system_prompts/researcher_system_prompt.md — Researcher prompt (6-dimension competitive research, web_search CoT, preliminary CTL scoring)
  • system_prompts/analyst_system_prompt.md — Analyst prompt (battle card generation, CTL rubric, talk tracks, objection handlers)
  • system_prompts/formatter_system_prompt.md — Formatter prompt (Notion battle card blocks, Slack Block Kit alert, Pipedrive deal note)
  • CHANGELOG.md — Version history

Start with the README.md. It walks through the deployment process step by step, from importing the workflow JSON into n8n to configuring credentials and running your first test execution. The dependency matrix lists every required service, API key, and estimated cost so you know exactly what you need before you start.

Every file in the bundle is designed to be read, understood, and modified. There is no obfuscated code, no compiled binaries, and no phone-home telemetry. You get the source, you own the source, and you control the execution environment.

Who This Is For

Deal Competitor Tracker is built for Sales, Revops, Strategy teams that need to automate a specific workflow without building from scratch. If your team matches the following profile, this blueprint is designed for you:

  • You operate in a sales, RevOps, or strategy function and handle the workflow this blueprint automates on a recurring basis
  • You have (or are willing to set up) an n8n instance — self-hosted or cloud
  • You have active accounts for the required integrations: Pipedrive account (pipedriveApi credential, custom competitor field on deals), Notion workspace (integration token with Bearer prefix), Slack workspace (Bot Token with chat:write scope), Anthropic API key
  • You have API credentials available: Anthropic API, Pipedrive (pipedriveApi), Notion (httpHeaderAuth, Bearer prefix), Slack (httpHeaderAuth, Bearer prefix, chat:write scope)
  • You are comfortable importing a workflow JSON and configuring API keys (the README guides you, but basic technical comfort is expected)

What this blueprint does NOT do:

  • Does not monitor competitor pricing trends over time — that is what Competitive Pricing Intelligence does
  • Does not diagnose stalled deals — that is what Deal Stall Diagnoser does
  • Does not score deal health signals — that is what Deal Intelligence Agent does
  • Does not modify your Pipedrive deals or competitor data — read-only competitive intelligence with Notion, Slack, and deal note output
  • Does not batch multiple competitors per deal — generates one battle card per competitor per deal event
  • Does not filter competitors by CTL score — all competitors receive full battle cards regardless of threat level

Review the dependency matrix and prerequisites before purchasing. If you are unsure whether your environment meets the requirements, contact support@forgeworkflows.com before buying.

NOTE

All sales are final after download. Review the full dependency matrix, prerequisites, and integration requirements on the product page before purchasing. Questions? Contact support@forgeworkflows.com.

Getting Started

Deployment follows a structured sequence. The Deal Competitor Tracker bundle is designed for the following tools: n8n, Anthropic API, Pipedrive, Notion, Slack. Here is the recommended deployment path:

  1. Step 1: Import workflow and configure credentials. Import deal_competitor_tracker_v1.0.0.json into n8n. Configure Pipedrive pipedriveApi credential, Notion httpHeaderAuth credential (Bearer token), Slack httpHeaderAuth credential (Bearer token with chat:write scope), and Anthropic API key following the README.
  2. Step 2: Set up Pipedrive custom field and configure variables. Create a "competitor" text custom field on Pipedrive deals (or use your existing competitor field). Set COMPETITOR_FIELD_NAME to match. Configure NOTION_DATABASE_ID and SLACK_CHANNEL for output destinations. Optionally set INCLUDE_PRICING_COMPARISON to false to exclude pricing analysis.
  3. Step 3: Activate and verify. Enable the workflow in n8n. Set a competitor name on a test deal in Pipedrive to trigger battle card generation. Verify the Notion battle card page appears with 6-dimension CTL scoring, the Slack channel receives the CTL alert, and a deal note appears on the Pipedrive deal.

Before running the pipeline on live data, execute a manual test run with sample input. This validates that all credentials are configured correctly, all API endpoints are reachable, and the output format matches your expectations. The README includes test data examples for this purpose.

Once the test run passes, you can enable the production triggers (for this blueprint, the Pipedrive Trigger for event-driven runs plus the manual Webhook for on-demand analysis). Monitor the first few production runs to confirm the pipeline handles real-world data as expected, then let it run.

For technical background on how ForgeWorkflows blueprints are built and tested, see the Blueprint Quality Standard (BQS) methodology and the Inspection and Test Plan (ITP) framework. These documents describe the quality gates every blueprint passes before listing.

Ready to deploy? View the Deal Competitor Tracker product page for full specifications, pricing, and purchase.

TIP

Run a manual test with sample data before switching to production triggers. This catches credential misconfigurations and API endpoint issues before they affect real workflows.

Frequently Asked Questions

How does it differ from Competitive Pricing Intelligence?

Complementary products covering different competitive intelligence needs. CPI monitors competitor pricing trends weekly across multiple competitors — pricing signals, significance scoring, strategic recommendations. DCT generates per-deal battle cards for a specific competitor — 6-dimension CTL scoring, talk tracks, objection handlers, win themes. CPI feeds your pricing/strategy team with market-wide intelligence; DCT feeds your sales reps with tactical per-deal competitive preparation.

What are the six battle card dimensions?

  • pricing_comparison — how competitor pricing compares to yours
  • feature_gap — features the competitor has or lacks relative to deal requirements
  • market_positioning — target segments, messaging, analyst coverage
  • recent_momentum — funding rounds, customer wins, product launches, hiring velocity from the last 90 days
  • known_weaknesses — customer complaints from review sites, churn signals, documented product gaps
  • switching_barriers — data migration complexity, contract lock-in patterns, integration dependencies

What is the CTL score?

Competitive Threat Level (CTL) is scored 1-10 per dimension and as an aggregate. HIGH (8-10) indicates a strong competitive threat requiring immediate preparation. MEDIUM (5-7) indicates moderate competition with exploitable weaknesses. LOW (1-4) indicates minimal threat. CTL is informational — all competitors receive full battle cards regardless of score.

Why does the Analyst use Opus instead of Sonnet?

The Analyst generates comprehensive battle cards requiring strategic reasoning: synthesizing 6 dimensions of research into actionable talk tracks, anticipating objections based on competitor strengths, formulating win themes, and identifying competitive landmines. This requires the reasoning depth of Opus 4.6. The Researcher uses Sonnet 4.6 because per-dimension research is a classification task, not a strategy task.

What triggers a battle card?

Two trigger modes: (1) Pipedrive Trigger fires automatically when the competitor custom field on a deal is set or updated. This means battle cards are generated the moment a sales rep identifies a competitor. (2) Manual Webhook for on-demand analysis — send a POST request with deal ID and competitor name for instant battle card generation.
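An on-demand trigger call might look like the following sketch. The payload field names (`deal_id`, `competitor`) and the webhook URL are assumptions for illustration; check the README for the exact webhook contract:

```python
import json
import urllib.request

def build_trigger_request(webhook_url: str, deal_id: int,
                          competitor: str) -> urllib.request.Request:
    """Construct the POST request for on-demand battle card generation.
    Send it with urllib.request.urlopen(req)."""
    body = json.dumps({"deal_id": deal_id, "competitor": competitor}).encode()
    return urllib.request.Request(
        webhook_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical usage against your n8n instance:
# req = build_trigger_request(
#     "https://n8n.example.com/webhook/deal-competitor", 42, "RivalSoft")
# urllib.request.urlopen(req)
```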

Can I exclude pricing comparison from the analysis?

Yes. Set INCLUDE_PRICING_COMPARISON to false in the Config Loader. When disabled, the pricing_comparison dimension returns null and only 5 dimensions are scored. Useful when competitor pricing data is sensitive, unreliable, or not relevant to the deal.

What outputs are generated?

Three outputs for every competitor: (1) Notion battle card page with executive summary, per-dimension CTL sections, talk tracks, toggle-block objection handlers, win themes, and landmines. (2) Slack Block Kit alert with aggregate CTL, top risks, and primary talk track. (3) Pipedrive deal note with compact summary and link to the full Notion battle card.

Does it use web scraping?

Yes. The Researcher uses the Anthropic web_search tool to investigate competitor pricing pages, feature directories, review sites (G2, Capterra), news coverage, and migration documentation. Research quality depends on the competitor's web presence and web_search coverage. Competitors with a limited online presence produce lower-confidence CTL scores.

Is there a refund policy?

All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.

Get Deal Competitor Tracker

$199

View Blueprint
