How Deal Sentiment Monitor Automates Sales Intelligence
The Problem
Your sales team has 47 deals in the proposal stage. Twelve have had no contact in 5+ days. Three have gone completely dark. Which ones are at risk — and which just have a slow procurement process? A rep answering this question manually checks Slack and Pipedrive, cross-references email history, and makes a judgment call on each deal. At a few minutes per deal, that adds up to 30–60 minutes of triage per cycle before any follow-up happens.
The cost is not just time — it is revenue leakage. Deals slip because signals were missed. Pipeline reviews rely on data that was accurate two days ago. Scoring criteria drift between team members, and the CRM becomes a lagging indicator rather than an operational tool. Deal Sentiment Monitor automates the sales intelligence and deal management workflow from data extraction through analysis to structured output, with zero manual CRM entry.
Deal Sentiment Monitor reduces that 30–60 minute manual cycle to seconds per execution, with consistent scoring and zero CRM data entry.
What This Blueprint Does
Three Agents. Five Sentiment Categories. Per-Message Deal Intelligence.
The Deal Sentiment Monitor pipeline runs three agents in sequence: Extractor pulls data from Slack and Pipedrive, Analyst scores sentiment, and Syncer delivers the output. Here is what happens at each stage and why it matters.
- Extractor (Webhook + Code): Slack Event API Webhook fires on message.channels events in real time.
- Analyst (Tier 2 Classification): the analysis model performs 5-category sentiment scoring with Chain-of-Thought reasoning: message_summary and tone_signals are generated BEFORE sentiment_category and score.
- Syncer (Code + Conditional Route): Updates Pipedrive fw_deal_sentiment custom field with running average formula: (current × 0.7) + (new_score × 0.3).
When the pipeline completes, you get structured output that is ready to act on. The blueprint bundle includes everything needed to deploy, configure, and customize the workflow:
- ITP-tested 30-node n8n workflow — import and deploy
- Slack Event API Webhook for real-time message.channels event processing
- 5-category sentiment taxonomy: positive_momentum, neutral_stable, cautious_hesitant, frustrated_negative, urgent_escalation
- Chain-of-Thought scoring: message_summary + tone_signals BEFORE sentiment_category + score
- Running average formula: (current × 0.7) + (new_score × 0.3) for smooth trend tracking
- Conditional alerting: Pipedrive note + Slack thread reply for negative shifts
- Pre-LLM filters: bot filter + short message filter eliminate noise at $0 cost
- Channel-to-deal mapping via configurable DEAL_CHANNEL_PREFIX convention
- SINGLE-MODEL: the analysis model handles sentiment classification — no separate primary reasoning model needed
- Configurable: DEAL_CHANNEL_PREFIX, SENTIMENT_SHIFT_THRESHOLD, ALERT_VIA_SLACK, MIN_MESSAGE_LENGTH
- ITP 20/20 records, 16/16 milestones, $0.005–$0.011/message measured
Scoring thresholds, output destinations, and CRM field mappings are configurable in the system prompts — no workflow JSON edits required. This means Deal Sentiment Monitor adapts to your specific process, terminology, and integration requirements without forking the entire workflow.
Every agent prompt is a standalone text file. Customize scoring thresholds, qualification criteria, and output formatting without touching the workflow JSON.
How the Pipeline Works
Understanding how the pipeline works helps you customize it for your environment and troubleshoot issues when they arise. Here is a step-by-step walkthrough of the Deal Sentiment Monitor execution flow.
Step 1: Extractor
Tier: Webhook + Code
The pipeline starts here. Slack Event API Webhook fires on message.channels events in real time. Extractor parses the raw event payload, extracts message text, sender, channel, and timestamp. Pre-LLM filters run first: bot filter removes automated messages, short message filter eliminates messages under MIN_MESSAGE_LENGTH characters. Channel-to-deal mapping uses DEAL_CHANNEL_PREFIX convention to find the corresponding Pipedrive deal and fetch the current fw_deal_sentiment running average. Only human messages with substance pass through to the Analyst.
This stage ensures all downstream agents receive clean, validated input. If this step returns incomplete data, every downstream agent works with a degraded picture.
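The pre-LLM filter logic can be sketched as a small predicate. This is a minimal sketch assuming a simplified Slack event shape: `bot_id`, `subtype`, and `text` are standard Slack event payload fields, while the threshold default is illustrative (the bundle makes MIN_MESSAGE_LENGTH configurable):

```javascript
// Minimal sketch of the Extractor's pre-LLM filters.
// The threshold value here is illustrative, not the bundle's default.
const MIN_MESSAGE_LENGTH = 20;

function passesPreLlmFilters(event) {
  // Bot filter: drop messages posted by bots or app integrations.
  if (event.bot_id || event.subtype === "bot_message") return false;
  // Short-message filter: drop low-signal messages ("ok", "thanks", emoji).
  const text = (event.text || "").trim();
  return text.length >= MIN_MESSAGE_LENGTH;
}
```

Filtering before the LLM call is what keeps noisy messages at $0: they never reach the Analyst.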
Step 2: Analyst
Tier: Tier 2 Classification
The analysis model performs 5-category sentiment scoring with Chain-of-Thought reasoning: message_summary and tone_signals are generated BEFORE sentiment_category and score. Five categories: positive_momentum (8–10) for buying signals and enthusiastic engagement, neutral_stable (6–7) for normal business communication, cautious_hesitant (4–5) for hedging language and qualification concerns, frustrated_negative (2–3) for complaints and competitor mentions, and urgent_escalation (1) for threats to leave and deal-breaking language.
Why this step matters: generating the summary and tone signals first grounds the score in observable evidence, so the result is a prioritized action queue, not just a data dump.
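The Analyst's structured output can be checked against the taxonomy's score ranges. A sketch, with assumed field names that mirror the documented Chain-of-Thought ordering (summary and tone signals before category and score):

```javascript
// Illustrative shape of the Analyst's output. Field names are assumptions;
// the ordering (reasoning fields before the score) follows the CoT design.
const exampleAnalysis = {
  message_summary: "Buyer asks to loop in procurement and confirms budget.",
  tone_signals: ["forward motion", "budget confirmed"],
  sentiment_category: "positive_momentum",
  score: 9,
};

// Score ranges per category, as documented in the taxonomy.
const CATEGORY_RANGES = {
  positive_momentum: [8, 10],
  neutral_stable: [6, 7],
  cautious_hesitant: [4, 5],
  frustrated_negative: [2, 3],
  urgent_escalation: [1, 1],
};

function scoreInRange(category, score) {
  const range = CATEGORY_RANGES[category];
  return !!range && score >= range[0] && score <= range[1];
}
```

A validation step like this catches a model that emits a category/score pair outside the documented ranges before it reaches the CRM.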
Step 3: Syncer
Tier: Code + Conditional Route
This is the final deliverable — what lands in your inbox or dashboard. Updates Pipedrive fw_deal_sentiment custom field with running average formula: (current × 0.7) + (new_score × 0.3). Always writes the updated sentiment score. Conditional routing: if sentiment is negative or a significant shift is detected (exceeding SENTIMENT_SHIFT_THRESHOLD), creates a Pipedrive note with sentiment context and sends a Slack thread reply alerting the team. Normal positive/neutral scores update silently.
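The Syncer's update-and-route logic reduces to a few lines. A sketch, assuming an illustrative threshold value (the bundle exposes SENTIMENT_SHIFT_THRESHOLD as a setting) and treating scores of 3 or below as the negative categories:

```javascript
// Sketch of the Syncer's logic: 70/30 weighted running average, plus the
// two alert conditions (negative score, or shift beyond the threshold).
const SENTIMENT_SHIFT_THRESHOLD = 2; // illustrative default

function updateSentiment(currentAvg, newScore) {
  const updated = currentAvg * 0.7 + newScore * 0.3;
  const shift = Math.abs(updated - currentAvg);
  // frustrated_negative (2-3) and urgent_escalation (1) trigger alerts.
  const negative = newScore <= 3;
  return {
    updatedAvg: Number(updated.toFixed(2)),
    alert: negative || shift > SENTIMENT_SHIFT_THRESHOLD,
  };
}
```

A healthy deal absorbing one negative message still alerts (negative category), while a positive message on the same deal updates the field silently.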
The entire pipeline executes without manual intervention. From trigger to output, every decision point follows a documented path. Every execution produces a traceable audit trail.
All nodes have been validated during Independent Test Protocol (ITP) testing on n8n v2.7.5. The error handling matrix in the bundle documents the recovery path for each failure mode.
This blueprint runs on your own n8n instance with your own API keys. Your CRM data never leaves your infrastructure.
Why we designed it this way
We spent a week getting the classification model to output exactly 3 sentences. Polite instructions like "please write 3 sentences" got ignored. LLMs do not treat polite requests the same as system constraints. The fix was emphatic constraint language with enforcement: "OUTPUT MUST CONTAIN EXACTLY 3 SENTENCES. If output contains more or fewer than 3 sentences, the response is INVALID."
— ForgeWorkflows Engineering
Cost Breakdown
Every metric is ITP-measured. Deal Sentiment Monitor scores the emotional tone of every Slack deal channel message in real time: single-model 5-category sentiment classification, running average trend tracking, and conditional alerting at $0.005–$0.011/message.
The primary operating cost for Deal Sentiment Monitor is the per-execution LLM inference cost. Based on Independent Test Protocol (ITP) testing, the measured cost is: Cost per Message: $0.005–$0.011/message (ITP-measured). This figure includes all API calls across all agents in the pipeline — not just the primary reasoning step, but every classification, scoring, and output generation call.
To put this in context, consider the manual alternative. A skilled team member performing the same work manually costs $50–75/hour for a sales ops analyst at a fully loaded rate (salary, benefits, tools, overhead). If the manual version of this workflow takes 30–60 minutes per cycle, the per-execution cost in human labor is significant. The blueprint executes the same pipeline for a fraction of that cost, with consistent quality and zero fatigue degradation.
Infrastructure costs are separate from per-execution LLM costs. You will need an n8n instance (self-hosted or cloud) and active accounts for the integrated services. The estimated monthly infrastructure cost is $5–$11/month (1,000 messages/month), depending on your usage volume and plan tiers.
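As a sanity check, the per-message LLM range scales to the quoted 1,000 messages/month volume as follows. A sketch; `monthlyLlmCost` is a hypothetical helper, not part of the bundle:

```javascript
// Back-of-envelope LLM cost projection using the ITP-measured
// per-message range at 1,000 messages/month.
function monthlyLlmCost(messagesPerMonth, costPerMessage) {
  return messagesPerMonth * costPerMessage;
}

const lowEnd = monthlyLlmCost(1000, 0.005);  // about $5/month
const highEnd = monthlyLlmCost(1000, 0.011); // about $11/month
```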
Quality assurance: Blueprint Quality Standard (BQS) audit result is 12/12 PASS. ITP result is 20/20 records, 16/16 milestones PASS. These are not marketing claims — they are test results from structured inspection protocols that you can review in the product documentation.
All cost and performance figures are ITP-measured — tested against real data fixtures on n8n v2.7.5 in March 2026. See the product page for full test methodology.
Monthly projection: if you run this blueprint 100 times per month, multiply the per-execution cost by 100 and add your infrastructure costs. Most teams find the total is less than one hour of manual labor per month.
What's in the Bundle
7 files — workflow JSON, system prompt, configuration guides, and complete documentation.
When you purchase Deal Sentiment Monitor, you receive a complete deployment bundle. This is not a SaaS subscription or a hosted service — it is a set of files that you own and run on your own infrastructure. Here is what is included:
- CHANGELOG.md — Version history
- README.md — Setup and configuration guide
- deal_sentiment_monitor_v1_0_0.json — n8n workflow (main pipeline)
- docs/TDD.md — Technical Design Document
- system_prompts/analyst_system_prompt.md — Analyst system prompt
Start with the README.md. It walks through the deployment process step by step, from importing the workflow JSON into n8n to configuring credentials and running your first test execution. The dependency matrix lists every required service, API key, and estimated cost so you know exactly what you need before you start.
Every file in the bundle is designed to be read, understood, and modified. There is no obfuscated code, no compiled binaries, and no phone-home telemetry. You get the source, you own the source, and you control the execution environment.
Who This Is For
Deal Sentiment Monitor is built for Sales and RevOps teams that need to automate a specific workflow without building from scratch. If your team matches the following profile, this blueprint is designed for you:
- You operate in a sales or revops function and handle the workflow this blueprint automates on a recurring basis
- You have (or are willing to set up) an n8n instance — self-hosted or cloud
- You have active accounts for the required integrations: Slack workspace (Bot Token with channels:history, channels:read, chat:write scopes), Pipedrive account (fw_deal_sentiment custom field), Anthropic API key (~$0.005–$0.011/message)
- You have API credentials available: Anthropic API, Slack (Event API + Bot Token, httpHeaderAuth Bearer), Pipedrive API (pipedriveApi)
- You are comfortable importing a workflow JSON and configuring API keys (the README guides you, but basic technical comfort is expected)
What this blueprint does NOT do:
- Does not classify activity types — that is what Slack-to-CRM Activity Logger does
- Does not diagnose stalled deals — that is what Deal Stall Diagnoser does
- Does not perform deep deal health analysis — that is what Deal Intelligence Agent does
- Does not monitor private/direct messages — only public channel messages via Event API
- Does not create Pipedrive deals — updates existing deal sentiment field and creates notes
- Does not scrape external websites — all data from Slack events and Pipedrive API
All sales are final after download. Review the full dependency matrix, prerequisites, and integration requirements on the product page before purchasing. If you are unsure whether your environment meets the requirements, contact support@forgeworkflows.com before buying.
Edge cases to know about
Every pipeline has boundaries. These are intentional design decisions, not oversights — understanding them helps you deploy with the right expectations and plan for edge cases in your environment.
Does not classify activity types — that is what Slack-to-CRM Activity Logger does
This is an intentional scope boundary. Activity classification needs a different taxonomy and different CRM write paths than sentiment scoring, so it ships as a separate blueprint. The two workflows can run on the same channels if you need activity logging alongside sentiment monitoring.
Does not diagnose stalled deals — that is what Deal Stall Diagnoser does
We scoped this boundary after ITP testing revealed inconsistent results when the pipeline attempted this. The agents handle what they handle well — extending beyond this scope requires custom prompt engineering specific to your data shape.
Does not perform deep deal health analysis — that is what Deal Intelligence Agent does
This keeps the pipeline focused on a single workflow. Adding this capability would introduce branching logic that varies by organization, and the tradeoff between complexity and reliability was not worth it for a reusable blueprint. Fork the workflow JSON if your use case demands it.
Review the error handling matrix in the bundle for the full list of documented failure modes and recovery paths.
Getting Started
Deployment follows a structured sequence. The Deal Sentiment Monitor bundle is designed for the following tools: n8n, Anthropic API, Slack, Pipedrive. Here is the recommended deployment path:
- Step 1: Import workflow and configure credentials. Import deal_sentiment_monitor_v1_0_0.json into n8n. Configure Slack Bot Token (httpHeaderAuth with Bearer prefix, channels:history + channels:read + chat:write scopes), Pipedrive API key, and Anthropic API key following the README.
- Step 2: Create Pipedrive custom field and configure channel mapping. Create the fw_deal_sentiment custom field in Pipedrive (type: double/number). Set DEAL_CHANNEL_PREFIX to match your Slack channel naming convention. Configure SENTIMENT_SHIFT_THRESHOLD and ALERT_VIA_SLACK preferences.
- Step 3: Activate and verify. Enable the workflow in n8n. Send a test message in a mapped Slack deal channel. Verify the sentiment score is computed, the Pipedrive custom field is updated with the running average, and alerts fire for negative sentiment.
Before running the pipeline on live data, execute a manual test run with sample input. This validates that all credentials are configured correctly, all API endpoints are reachable, and the output format matches your expectations. The README includes test data examples for this purpose.
Once the test run passes, you can configure the trigger for production use (scheduled, webhook, or event-driven — depending on the blueprint design). Monitor the first few production runs to confirm the pipeline handles real-world data as expected, then let it run.
For technical background on how ForgeWorkflows blueprints are built and tested, see the Blueprint Quality Standard (BQS) methodology and the Inspection and Test Plan (ITP) framework. These documents describe the quality gates every blueprint passes before listing.
Ready to deploy? View the Deal Sentiment Monitor product page for full specifications, pricing, and purchase.
Run a manual test with sample data before switching to production triggers. This catches credential misconfigurations and API endpoint issues before they affect real workflows.
Frequently Asked Questions
How does it differ from Slack-to-CRM Activity Logger?
Different output and purpose. SCAL logs ALL Slack deal channel messages to Pipedrive as activities with AI classification (intent category, stakeholder, CRM action). DSM scores the emotional TONE of messages and alerts on negative sentiment shifts. SCAL tracks what happened; DSM detects how people feel about it. Together they provide both activity tracking and emotional intelligence on deal channels.
What are the five sentiment categories?
positive_momentum (8–10) — buying signals, enthusiastic engagement, deal moving forward. neutral_stable (6–7) — normal business communication, no strong emotional signals. cautious_hesitant (4–5) — hedging language, delayed responses, qualification concerns. frustrated_negative (2–3) — complaints, dissatisfaction, competitor mentions, support issues. urgent_escalation (1) — threats to leave, escalation to management, deal-breaking language.
How does the running average work?
The Syncer updates the Pipedrive fw_deal_sentiment custom field using a weighted running average: (current_score × 0.7) + (new_score × 0.3). This smooths out single-message noise while still responding to sentiment shifts. A single negative message nudges the average down; sustained negativity drives it down further. The 70/30 weighting means recent history matters more than any individual message.
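A worked trajectory makes the smoothing concrete. A minimal sketch of how a healthy average responds to repeated negative messages:

```javascript
// 70/30 weighted running average: one bad message nudges the score,
// sustained negativity drags it down.
const runningAvg = (current, next) => current * 0.7 + next * 0.3;

let avg = 7.0;            // healthy deal (neutral_stable)
avg = runningAvg(avg, 2); // one frustrated_negative message: ~5.5
avg = runningAvg(avg, 2); // a second one: ~4.45
```

One negative message drops a 7.0 average into the cautious range; a second drops it further, which is exactly the "sustained negativity" behavior described above.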
When does it send alerts?
Alerts fire on two conditions: (1) the new sentiment score is in the frustrated_negative or urgent_escalation range, or (2) the sentiment shift exceeds SENTIMENT_SHIFT_THRESHOLD (configurable). When triggered, the Syncer creates a Pipedrive note with the message context, sentiment analysis, and score, plus sends a Slack thread reply on the original message alerting the team. Normal positive/neutral scores update the field silently.
Why only Sonnet instead of Opus?
Sentiment classification is a structured classification task that Sonnet 4.6 handles with high accuracy in ITP testing. The Chain-of-Thought prompt (message_summary + tone_signals BEFORE scoring) provides sufficient reasoning structure. Opus would add cost without measurable quality improvement for this task type. SINGLE-MODEL architecture keeps cost at $0.005–$0.011/message — the lowest per-record LLM cost in the ForgeWorkflows catalog.
How does channel-to-deal mapping work?
Uses a configurable DEAL_CHANNEL_PREFIX convention. Channels named with the prefix (e.g., #deal-acme-corp) map to their corresponding Pipedrive deals via prefix matching and deal search. The Extractor strips the prefix, searches Pipedrive for matching deals, and fetches the current fw_deal_sentiment running average for the matched deal.
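The mapping convention can be sketched as follows. The prefix value and the hyphen-to-space normalization are illustrative assumptions; the bundle's Extractor performs the actual Pipedrive deal search:

```javascript
// Sketch of channel-to-deal mapping via naming convention.
const DEAL_CHANNEL_PREFIX = "deal-"; // illustrative; configurable in the bundle

function channelToDealTerm(channelName) {
  // Only channels following the naming convention map to deals.
  if (!channelName.startsWith(DEAL_CHANNEL_PREFIX)) return null;
  // Strip the prefix; the remainder becomes the deal search term.
  return channelName.slice(DEAL_CHANNEL_PREFIX.length).replace(/-/g, " ");
}
```

So #deal-acme-corp resolves to a search for "acme corp", while messages in #general are ignored entirely.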
How does it relate to Deal Stall Diagnoser?
Complementary products covering different signals. DSM detects sentiment shifts that may PREDICT stalls before they appear in activity metrics — real-time per-message monitoring. DSD diagnoses WHY deals are stalling based on activity patterns and stage duration on a daily schedule. Different trigger (real-time vs scheduled), different signal (emotional tone vs activity absence). Together they provide both predictive sentiment signals and diagnostic stall analysis.
How does it relate to Deal Intelligence Agent?
Complementary products covering different scope. DIA provides full deal analysis with risk scoring across multiple dimensions on demand. DSM monitors real-time message sentiment as a leading indicator of deal health. Different timing (on-demand vs real-time per-message), different scope (full deal intelligence vs sentiment signal). Together they combine deep analysis with continuous sentiment monitoring.
Does it use web scraping?
No. All data comes from two sources: Slack Event API (message payload) and Pipedrive API (deal lookup, custom field update, notes). No web_search, no external data sources, no scraping. This makes the pipeline fast and reliable.
Is there a refund policy?
All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.
What should I do if the pipeline dead-letters a record?
Check the dead letter output for the failure reason — the error context includes which agent failed and why. Common causes: missing input fields, API rate limits, or malformed data. Fix the underlying issue and reprocess. The error handling matrix in the bundle documents every failure mode and its recovery path.