How Buying Signal Detector Identifies Active Buyer Windows
The Problem
Your sales team has 47 deals in the proposal stage. 12 have not had contact in 5+ days. Three have gone completely dark. Which ones are at risk — and which ones just have a slow procurement process? A rep answering this question manually checks NewsAPI, checks Slack, cross-references email history, and makes a judgment call on each deal. At 15 minutes per deal, triaging even a handful takes 30–60 minutes per cycle before any follow-up happens.
The cost is not just time — it is revenue leakage. Deals slip because signals were missed. Pipeline reviews rely on data that was accurate two days ago. Scoring criteria drift between team members, and the CRM becomes a lagging indicator rather than an operational tool. Buying Signal Detector automates the buying-signal and deal-intelligence workflow from data extraction through analysis to structured output, with no manual data entry.
Teams typically spend 30–60 minutes per cycle on the manual version of this workflow. Buying Signal Detector reduces that to seconds per execution, with consistent output quality and no manual data entry.
What This Blueprint Does
One Analyst. Daily Signal Sweep. Ranked Slack Digest.
The Buying Signal Detector pipeline runs 4 agents in sequence: Signal Fetcher pulls data from NewsAPI at the front, and Slack Delivery posts the finished digest at the end. Here is what happens at each stage and why it matters.
- Signal Fetcher (Code Only): Every weekday at 07:00, the scheduler triggers three parallel NewsAPI queries — funding rounds, leadership changes, and growth signals (hiring surges, expansion announcements).
- The Analyst (Tier 1 Reasoning): The primary reasoning model evaluates each signal against your configured ICP profile — industry match, company size, geography, persona keywords.
- The Formatter (Code Only): Converts the ranked signals into a Slack Block Kit digest.
- Slack Delivery (HTTP): Posts the formatted digest to your configured Slack sales channel via Bot Token.
When the pipeline completes, you get structured output that is ready to act on. The blueprint bundle includes everything needed to deploy, configure, and customize the workflow:
- ITP-tested 23-node n8n workflow + 3-node scheduler — import and deploy
- Daily NewsAPI sweep across 3 signal categories: funding, leadership, growth
- ICP-scored signals — configure your target industries, company size, geography, persona keywords
- Signal strength classification: WEAK / MODERATE / STRONG per signal
- Day heat scoring: HOT / WARM / COLD — zero noise on quiet days
- 7-day URL deduplication — no repeat signals
- Ranked Slack Block Kit digest delivered to your sales channel every weekday
- $0.093/run blended average — $2.05/month at 22 weekday runs
- ITP test results with 20 fixtures and 14/14 milestones (100% heat accuracy)
Scoring thresholds, qualification criteria, and output formatting are configurable in the system prompts — no workflow JSON edits required. This means Buying Signal Detector adapts to your specific process, terminology, and integration requirements without forking the entire workflow.
Every agent prompt is a standalone text file. Customize scoring thresholds, qualification criteria, and output formatting without touching the workflow JSON.
How the Pipeline Works
Understanding how the pipeline works helps you customize it for your environment and troubleshoot issues when they arise. Here is a step-by-step walkthrough of the Buying Signal Detector execution flow.
Step 1: Signal Fetcher
Tier: Code Only
The pipeline starts here. Every weekday at 07:00, the scheduler triggers three parallel NewsAPI queries — funding rounds, leadership changes, and growth signals (hiring surges, expansion announcements). Articles from the last 24 hours are collected, deduplicated against a 7-day URL cache, and assembled into a unified signal set. Zero LLM cost.
This stage ensures all downstream agents receive clean, validated input. If this step returns incomplete data, every downstream agent works with a degraded picture.
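To make the stage concrete, here is a minimal sketch of the three-category sweep, assuming plain Node.js fetch calls; the query strings are illustrative assumptions, and the shipped Code node in the bundle is the authoritative version:

```javascript
// Minimal sketch of the three parallel NewsAPI queries. Query strings are
// illustrative assumptions; the actual queries ship in the bundle's Code node.
const CATEGORIES = {
  funding: '"series a" OR "series b" OR "series c" OR acquisition OR IPO',
  leadership: '"new cto" OR "new vp" OR "executive departure" OR "board appointment"',
  growth: '"hiring surge" OR "office expansion" OR "product launch" OR partnership',
};

// Articles from the last 24 hours only.
const since = new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString();

const perCategory = await Promise.all(
  Object.entries(CATEGORIES).map(async ([category, q]) => {
    const url =
      'https://newsapi.org/v2/everything' +
      `?q=${encodeURIComponent(q)}&from=${since}&sortBy=publishedAt&pageSize=50`;
    const res = await fetch(url, { headers: { 'X-Api-Key': process.env.NEWSAPI_KEY } });
    const body = await res.json();
    return (body.articles || []).map((a) => ({
      category,
      title: a.title,
      url: a.url,
      publishedAt: a.publishedAt,
    }));
  })
);

// Unified signal set; 7-day URL deduplication happens in the next node.
const signals = perCategory.flat();
```

Running the three queries in parallel keeps the stage fast, and tagging each article with its category up front means no downstream agent has to re-infer why an article was fetched.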
Step 2: The Analyst
Tier: Tier 1 Reasoning
The primary reasoning model evaluates each signal against your configured ICP profile — industry match, company size, geography, persona keywords. It assigns ICP relevance scores and classifies signal strength as WEAK, MODERATE, or STRONG, then scores the overall day heat: HOT (high-relevance signals detected), WARM (moderate signals), or COLD (nothing actionable). Chain-of-thought enforced.
Why this step matters: This is where the pipeline applies judgment — not just data retrieval, but analysis.
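The Analyst's exact output contract is defined in system_prompt_analyst.txt in the bundle; the shape below is an assumed illustration of what downstream nodes consume, with hypothetical field names and an invented example company:

```javascript
// Assumed illustration of the Analyst's structured output. Field names and the
// example company are hypothetical; the authoritative schema lives in the bundle.
const exampleAnalystOutput = {
  day_heat: 'HOT', // HOT | WARM | COLD
  signals: [
    {
      company: 'Acme Robotics',   // hypothetical
      category: 'funding',        // funding | leadership | growth
      strength: 'STRONG',         // WEAK | MODERATE | STRONG
      icp_score: 87,              // relevance against the configured ICP profile
      rationale: 'Series B in a target industry; headcount within ICP range.',
      url: 'https://example.com/acme-series-b',
    },
  ],
};
```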
Step 3: The Formatter
Tier: Code Only
Converts the ranked signals into a Slack Block Kit digest. HOT signals surface first with company name, signal type, strength badge, and ICP score. Day heat badge appears at the top. COLD days produce a minimal empty digest — zero noise. Zero LLM cost.
Every field in the output is structured for the next agent to consume without parsing.
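As a rough sketch, the Formatter's job reduces to ranking signals and emitting standard Slack Block Kit JSON. The layout below is simplified and assumes the Analyst output shape shown earlier; the shipped Formatter produces a richer digest:

```javascript
// Simplified Block Kit assembly. The shipped Formatter adds strength badges and
// heat badge styling; this shows only the core structure.
function buildDigest(analysis) {
  const blocks = [
    {
      type: 'header',
      text: { type: 'plain_text', text: `Buying Signals | Day Heat: ${analysis.day_heat}` },
    },
  ];

  // Highest ICP scores surface first.
  const ranked = [...analysis.signals].sort((a, b) => b.icp_score - a.icp_score);
  for (const s of ranked) {
    blocks.push({
      type: 'section',
      text: {
        type: 'mrkdwn',
        text: `*${s.company}* (${s.category}) | ${s.strength} | ICP ${s.icp_score}\n<${s.url}|Source>`,
      },
    });
  }
  return { blocks };
}
```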
Step 4: Slack Delivery
Tier: HTTP
This is the final deliverable — what lands in your inbox or dashboard. Posts the formatted digest to your configured Slack sales channel via Bot Token. One message per day. Non-blocking — if Slack fails, the workflow still completes and returns the digest in the webhook response.
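A minimal sketch of the non-blocking delivery call, assuming a bot token with the chat:write scope and the standard chat.postMessage Slack Web API method:

```javascript
// Sketch of the non-blocking Slack post. Assumes a bot token with chat:write.
async function postDigest(digest, channelId, botToken) {
  try {
    const res = await fetch('https://slack.com/api/chat.postMessage', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${botToken}`,
        'Content-Type': 'application/json; charset=utf-8',
      },
      body: JSON.stringify({
        channel: channelId,
        blocks: digest.blocks,
        text: 'Daily buying signal digest', // plain-text fallback for notifications
      }),
    });
    const body = await res.json();
    if (!body.ok) console.warn(`Slack post failed: ${body.error}`);
  } catch (err) {
    // Non-blocking by design: a Slack outage must not fail the run; the digest
    // is still returned in the webhook response.
    console.warn('Slack unreachable; digest returned in webhook response only.', err);
  }
}
```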
The entire pipeline executes without manual intervention. From trigger to output, every decision point follows a documented path. Every execution produces a traceable audit trail.
All nodes have been validated during Independent Test Protocol (ITP) testing on n8n v2.7.5. The error handling matrix in the bundle documents the recovery path for each failure mode.
This blueprint runs on your own n8n instance with your own API keys. Your data never leaves your infrastructure.
Why we designed it this way
A ghost contact with 524 days inactive crashed the pipeline because output exceeded the token limit. Every field was null, the model tried to explain why each was missing, and the response ballooned past the buffer. Fix: always set max_tokens to 2x expected output and validate response completeness.
— ForgeWorkflows Engineering
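In practice that lesson looks roughly like the snippet below, a sketch against the Anthropic Messages API; EXPECTED_OUTPUT_TOKENS and the model id are assumed per-deployment values, not bundle constants:

```javascript
// Sketch of the fix described above: 2x max_tokens headroom plus a completeness
// check. EXPECTED_OUTPUT_TOKENS and the model id are assumed values.
const EXPECTED_OUTPUT_TOKENS = 1500;

async function runAnalyst(prompt) {
  const res = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': process.env.ANTHROPIC_API_KEY,
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json',
    },
    body: JSON.stringify({
      model: 'claude-sonnet-4-5',
      max_tokens: EXPECTED_OUTPUT_TOKENS * 2, // 2x expected output, per the lesson
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const msg = await res.json();

  // A stop_reason of 'max_tokens' means the model hit the cap and truncated.
  if (msg.stop_reason === 'max_tokens') {
    throw new Error('Analyst response truncated; raise max_tokens or shrink the input.');
  }
  return msg;
}
```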
Cost Breakdown
Every metric is ITP-measured. The Buying Signal Detector sweeps NewsAPI daily and delivers ranked signals at $0.093/run with a single Tier 1 reasoning call.
The primary operating cost for Buying Signal Detector is the per-execution LLM inference cost. Based on Independent Test Protocol (ITP) testing, the measured cost is $0.093/run blended average, $2.05/month at 22 runs, and $0.00 on empty signal days. This figure includes all API calls across all agents in the pipeline — not just the primary reasoning step, but every classification, scoring, and output generation call.
To put this in context, consider the manual alternative. A skilled team member performing the same work manually costs $50–75/hour for a sales ops analyst at a fully loaded rate (salary, benefits, tools, overhead). If the manual version of this workflow takes 30–60 minutes per cycle, the per-execution cost in human labor is significant. The blueprint executes the same pipeline for a fraction of that cost, with consistent quality and zero fatigue degradation.
Infrastructure costs are separate from per-execution LLM costs. You will need an n8n instance (self-hosted or cloud) and active accounts for the integrated services. The estimated monthly infrastructure cost is $2–3/month, depending on your usage volume and plan tiers.
Quality assurance: Blueprint Quality Standard (BQS) audit result is 12/12 PASS. ITP result is 20/20 accuracy | 14/14 milestones PASS | 100% heat accuracy (HOT 4/4, WARM 7/7, COLD 7/7). These are not marketing claims — they are test results from structured inspection protocols that you can review in the product documentation.
All cost and performance figures are ITP-measured — tested against real data fixtures on n8n v2.7.5 in March 2026. See the product page for full test methodology.
Monthly projection: if you run this blueprint 100 times per month, multiply the per-execution cost by 100 and add your infrastructure costs (100 × $0.093 ≈ $9.30, plus $2–3 infrastructure, for roughly $11–12/month). Most teams find the total is less than one hour of manual labor per month.
What's in the Bundle
10 files — main workflow, scheduler, system prompt, rubrics, and complete documentation.
When you purchase Buying Signal Detector, you receive a complete deployment bundle. This is not a SaaS subscription or a hosted service — it is a set of files that you own and run on your own infrastructure. Here is what is included:
- CHANGELOG.md — Version history
- README.md — Setup and configuration guide
- blueprint_dependency_matrix.md — Third-party service dependencies
- bsd_scheduler_v1_0_0.json — Scheduler workflow
- buying_signal_detector_v1_0_0.json — n8n workflow (main pipeline)
- icp_profile_guide.md — ICP profile guide
- newsapi_setup_guide.md — NewsAPI setup guide
- signal_strength_rubric.md — Signal strength rubric
- system_prompt_analyst.txt — Analyst system prompt
Start with the README.md. It walks through the deployment process step by step, from importing the workflow JSON into n8n to configuring credentials and running your first test execution. The dependency matrix lists every required service, API key, and estimated cost so you know exactly what you need before you start.
Every file in the bundle is designed to be read, understood, and modified. There is no obfuscated code, no compiled binaries, and no phone-home telemetry. You get the source, you own the source, and you control the execution environment.
Who This Is For
Buying Signal Detector is built for sales and RevOps teams that need to automate a specific workflow without building from scratch. If your team matches the following profile, this blueprint is designed for you:
- You operate in a sales or RevOps function and handle the workflow this blueprint automates on a recurring basis
- You have (or are willing to set up) an n8n instance — self-hosted or cloud
- You have active accounts for the required integrations: NewsAPI account, Slack workspace
- You have API credentials available: Anthropic API, NewsAPI key, Slack Bot Token
- You are comfortable importing a workflow JSON and configuring API keys (the README guides you, but basic technical comfort is expected)
This is NOT for you if:
- You need social media, job board, or proprietary database monitoring — the pipeline watches NewsAPI and configurable RSS feeds only
- You need CRM record updates — the pipeline delivers a Slack digest, not CRM writes
- You need historical trend analysis — each run is a point-in-time 24-hour sweep
- You need deduplication that survives workflow restarts — workflow static data resets on reactivation
Review the dependency matrix and prerequisites before purchasing. If you are unsure whether your environment meets the requirements, contact support@forgeworkflows.com before buying.
All sales are final after download. Review the full dependency matrix, prerequisites, and integration requirements on the product page before purchasing. Questions? Contact support@forgeworkflows.com.
Edge cases to know about
Every pipeline has boundaries. These are intentional design decisions, not oversights — understanding them helps you deploy with the right expectations and plan for edge cases in your environment.
Does not monitor social media, job boards, or proprietary databases — NewsAPI and configurable RSS feeds only
This is an intentional scope decision. NewsAPI and RSS provide stable, structured article feeds; social platforms, job boards, and proprietary databases each require source-specific access and parsing. Fork the Signal Fetcher stage if your use case demands additional sources.
Does not integrate with CRMs — delivers a Slack digest, not CRM record updates
We scoped this boundary after ITP testing revealed inconsistent results when the pipeline attempted CRM writes. The agents handle news-signal analysis well — extending into CRM updates requires custom prompt engineering specific to your data shape.
Does not provide historical trend analysis — each run is a point-in-time 24-hour sweep
This keeps the pipeline focused on a single workflow. Adding this capability would introduce branching logic that varies by organization, and the tradeoff between complexity and reliability was not worth it for a reusable blueprint. Fork the workflow JSON if your use case demands it.
Review the error handling matrix in the bundle for the full list of documented failure modes and recovery paths.
Getting Started
Deployment follows a structured sequence. The Buying Signal Detector bundle is designed for the following tools: n8n, Anthropic API, NewsAPI, Slack. Here is the recommended deployment path:
- Step 1: Import workflows and configure credentials. Import buying_signal_detector_v1_0_0.json and bsd_scheduler_v1_0_0.json into n8n. Configure NewsAPI key (HTTP Header Auth, X-Api-Key), Anthropic API key, and Slack Bot Token (HTTP Header Auth, Authorization: Bearer xoxb-...).
- Step 2: Configure ICP profile and Slack channel. Open the Config Loader node in the main workflow. Set your ICP profile (industries, company size, geography, persona keywords). Set your Slack channel ID. Update the scheduler webhook URL to match your main workflow.
- Step 3: Activate and verify. Enable both workflows in n8n. Send a test POST with _is_itp: true to verify end-to-end (see the sketch after this list). Check that a Slack digest appears with signal rankings and day heat badge. Then let the daily schedule take over.
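The verification POST can come from anywhere that can reach your n8n webhook. Here is a sketch using Node.js fetch, with a placeholder webhook URL you should replace with your own:

```javascript
// Smoke test for Step 3. The webhook URL below is a placeholder; use the URL
// shown on your main workflow's Webhook node.
const res = await fetch('https://YOUR-N8N-HOST/webhook/buying-signal-detector', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ _is_itp: true }),
});
console.log(res.status, await res.json()); // expect the digest in the response body
```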
Before running the pipeline on live data, execute a manual test run with sample input. This validates that all credentials are configured correctly, all API endpoints are reachable, and the output format matches your expectations. The README includes test data examples for this purpose.
Once the test run passes, you can configure the trigger for production use (scheduled, webhook, or event-driven — depending on the blueprint design). Monitor the first few production runs to confirm the pipeline handles real-world data as expected, then let it run.
For technical background on how ForgeWorkflows blueprints are built and tested, see the Blueprint Quality Standard (BQS) methodology and the Inspection and Test Plan (ITP) framework. These documents describe the quality gates every blueprint passes before listing.
Ready to deploy? View the Buying Signal Detector product page for full specifications, pricing, and purchase.
Run a manual test with sample data before switching to production triggers. This catches credential misconfigurations and API endpoint issues before they affect real workflows.
Frequently Asked Questions
How does it differ from Job Change Intent Scorer?
Distinct products with zero overlap. JCIS monitors individual contact job changes for SDR re-engagement via web search and updates Pipedrive. BSD monitors company-level buying signals (funding, leadership, growth) from news sources for AE account prioritization and delivers a daily Slack digest. Different data sources, different outputs, different buyers.
What signal types does it detect?
Three categories: (1) Funding — Series A/B/C rounds, acquisitions, IPO filings. (2) Leadership — new CTO/VP hires, executive departures, board appointments. (3) Growth — hiring surges, office expansions, product launches, partnership announcements.
How does ICP scoring work?
You configure your ICP profile in the Config Loader node: target industries, company size range, geographies, persona keywords, excluded companies, and signal boost keywords. The Analyst evaluates each signal against this profile and assigns a relevance score. Signals matching multiple ICP criteria score higher. The README walks through configuration in under 10 minutes, including test data for validation.
How much does each run cost?
ITP-measured: $0.093/run blended average across 20 test fixtures. HOT days with many signals cost slightly more. COLD days with zero signals skip the LLM call entirely — $0.00 cost. At 22 weekday runs per month, total cost is approximately $2.05/month.
What happens on days with no signals?
The Empty Check node detects zero signals after deduplication and skips the LLM call. A minimal COLD digest is posted to Slack noting no actionable signals. Cost: $0.00. Zero noise.
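The branch logic is simple enough to sketch. This is an assumed, simplified version of what an Empty Check Code node does in n8n, not the bundle's exact node:

```javascript
// Simplified Empty Check. In an n8n Code node, deduplicated articles arrive
// via $input.all(); an empty set short-circuits straight to the COLD digest.
const items = $input.all();

if (items.length === 0) {
  // Skip the LLM entirely: emit a minimal COLD marker for the Formatter.
  return [{ json: { day_heat: 'COLD', signals: [] } }];
}
return items;
```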
Why is there a separate scheduler workflow?
n8n cannot have both a Schedule Trigger and a respondToWebhook node in the same workflow. The split-workflow pattern separates scheduling (3-node cron workflow) from processing (23-node main workflow). The scheduler calls the main workflow via HTTP Request on the cron schedule.
Does deduplication persist across restarts?
Deduplication uses n8n workflow static data with a 7-day URL TTL. Static data resets when the workflow is deactivated/reactivated or when n8n restarts. After a restart, you may see previously processed articles re-sent for one run. Persistent external storage is planned for v1.1.
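For reference, here is a sketch of how a 7-day TTL dedup over workflow static data can look in an n8n Code node; the key names are assumptions, and the same reset caveat applies:

```javascript
// URL dedup against n8n workflow static data with a 7-day TTL. Because static
// data lives with the active workflow, it resets on deactivation or restart.
const staticData = $getWorkflowStaticData('global');
staticData.seenUrls = staticData.seenUrls || {}; // url -> first-seen timestamp

const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;
const now = Date.now();

// Expire cache entries older than the TTL.
for (const [url, firstSeen] of Object.entries(staticData.seenUrls)) {
  if (now - firstSeen > SEVEN_DAYS_MS) delete staticData.seenUrls[url];
}

// Pass through only unseen articles, then record them.
const fresh = $input.all().filter((item) => !staticData.seenUrls[item.json.url]);
for (const item of fresh) staticData.seenUrls[item.json.url] = now;
return fresh;
```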
Is there a refund policy?
All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.
What should I do if the pipeline dead-letters a record?
Check the dead letter output for the failure reason — the error context includes which agent failed and why. Common causes: missing input fields, API rate limits, or malformed data. Fix the underlying issue and reprocess. The error handling matrix in the bundle documents every failure mode and its recovery path.
Related Blueprints
Autonomous SDR Blueprint
32-node agentic swarm that researches, qualifies, writes, and syncs — so your SDR team focuses on closing.
Outbound Prospecting Agent
Apollo-sourced leads, AI-qualified and personally emailed — zero manual prospecting.
Job Change Intent Scorer
Your champions change jobs. Be the first call they take.
Inbound Lead Qualifier
Qualify inbound form leads with a 3-agent ILQ scoring pipeline — web research, 4-criteria scoring, and automatic Pipedrive routing.