Product Guide · Mar 19, 2026 · 13 min read

How Proposal Follow-Up Automator Automates Deal Recovery

The Problem

Daily proposal follow-up with 4-category stall classification, 3-tier urgency scoring, and personalized Gmail drafts for stuck deals. That single sentence captures a workflow gap that costs sales and revops teams hours every week. The manual process Proposal Follow-Up Automator replaces is familiar to anyone who has worked in a revenue organization: someone pulls data from Pipedrive, Gmail, and Slack, copies it into a spreadsheet or CRM, applies a mental checklist, writes a summary, and routes it to the next person in the chain. Repeat for every record. Every day.

Three problems make this unsustainable at scale. First, the process does not scale. As volume grows, the human bottleneck becomes the constraint. Whether it is inbound leads, deal updates, or meeting prep, a person can only process a finite number of records before quality degrades. Second, the process is inconsistent. Different team members apply different criteria, use different formats, and make different judgment calls. There is no single standard of quality, and the output varies from person to person and day to day. Third, the process is slow. By the time a manual review is complete, the window for action may have already closed. Deals move, contacts change roles, and buying signals decay.

These are not theoretical concerns. They are the operational reality for sales and revops teams handling deal recovery and follow-up automation workflows. Every hour spent on manual data processing is an hour not spent on the work that actually moves the needle: building relationships, closing deals, and driving strategy.

This is the gap Proposal Follow-Up Automator fills.

INFO

Teams typically spend 30-60 minutes per cycle on the manual version of this workflow. Proposal Follow-Up Automator reduces that to seconds per execution, with consistent output quality every time.

What This Blueprint Does

Five Agents. Daily Stall Classification. Personalized Follow-Up Drafts.

Proposal Follow-Up Automator is a 31-node n8n workflow with 5 specialized agents. Each agent handles a distinct phase of the pipeline, and the handoff between agents is deterministic — no ambiguous routing, no dropped records. The blueprint is designed so that each agent does one thing well, and the overall pipeline produces a consistent, auditable output on every run.

Here is what each agent does:

  • The Fetcher (Code-only): Retrieves all active deals at the configured proposal stage from Pipedrive API — deal value, contact details, activity history, stage entry date, and last communication timestamp.
  • The Classifier (Tier 1 Reasoning): Classifies each stalled deal into one of 4 stall reason categories: decision_delay (stakeholder alignment stalled), budget_review (procurement or finance hold), champion_ghosting (primary contact unresponsive), or competitor_evaluation (active competitive comparison).
  • The Scorer (Code-only): Computes urgency tier for each stalled deal: CRITICAL (≥7 days stalled), WARNING (4-6 days), or MONITOR (3 days).
  • The Writer (Tier 3 Creative): Generates personalized follow-up email drafts tailored to each stall reason category.
  • The Dispatcher (Code-only): Creates Gmail draft emails for each stalled deal (draft mode by default, configurable to auto-send) and posts a daily Slack digest with stall counts by category, urgency distribution, and top 5 deals requiring immediate attention.

When the pipeline completes, you get structured output that is ready to act on. The blueprint bundle includes everything needed to deploy, configure, and customize the workflow. Specifically, you receive:

  • Production-ready n8n workflow (28 nodes + 3-node scheduler)
  • 4-category stall reason classification (decision_delay, budget_review, champion_ghosting, competitor_evaluation)
  • 3-tier urgency scoring (CRITICAL ≥7d, WARNING 4-6d, MONITOR 3d)
  • Personalized follow-up email drafts tailored to each stall reason
  • Configurable follow-up limit to prevent over-contact
  • Gmail draft creation (or auto-send) with deal context
  • Daily Slack digest with stall counts and top 5 urgent deals
  • Configurable: proposal stage name, stale threshold, follow-up limit, draft mode
  • Full technical documentation and system prompts

Every component is designed to be modified. The agent prompts are plain text files you can edit. The workflow nodes can be rearranged or extended. The scoring criteria, output formats, and routing logic are all exposed as configurable parameters — not buried in application code. This means Proposal Follow-Up Automator adapts to your specific process, terminology, and integration requirements without forking the entire workflow.

TIP

Every agent prompt in the bundle is a standalone text file. You can customize scoring criteria, output formats, and routing logic without modifying the workflow JSON itself.

How the Pipeline Works

Understanding how the pipeline works helps you customize it for your environment and troubleshoot issues when they arise. Here is a step-by-step walkthrough of the Proposal Follow-Up Automator execution flow.

Step 1: The Fetcher

Tier: Code-only

Retrieves all active deals at the configured proposal stage from Pipedrive API — deal value, contact details, activity history, stage entry date, and last communication timestamp. Calculates days-since-last-activity for stall detection threshold comparison.
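The stall calculation itself is simple date arithmetic. Here is a hedged sketch of how an n8n Code node might compute it — field names like `last_activity_date` follow Pipedrive conventions but are assumptions, not the bundle's exact contract:

```javascript
// Illustrative sketch of the days-since-last-activity calculation.
// Field names are assumptions modeled on Pipedrive's API conventions.
function daysSinceLastActivity(deal, now = new Date()) {
  // Fall back to the stage entry date if no activity has been logged yet.
  const last = new Date(deal.last_activity_date || deal.stage_entry_date);
  const msPerDay = 24 * 60 * 60 * 1000;
  return Math.floor((now - last) / msPerDay);
}

const deal = { last_activity_date: "2026-03-12", stage_entry_date: "2026-03-05" };
const stalledDays = daysSinceLastActivity(deal, new Date("2026-03-19"));
// 7 days since last activity exceeds the default 3-day stale threshold
```

The resulting day count is what the downstream Scorer compares against `STALE_THRESHOLD_DAYS`.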

This stage is critical because it ensures that downstream agents receive structured, validated input. Each agent in the pipeline trusts the output contract of the previous agent. If The Fetcher identifies an issue — a missing field, a low-confidence score, or an unexpected input format — the pipeline handles it explicitly rather than passing garbage downstream. This is the difference between a prototype and a production-grade workflow: every handoff is defined, every edge case is documented.

Step 2: The Classifier

Tier: Tier 1 Reasoning

Classifies each stalled deal into one of 4 stall reason categories: decision_delay (stakeholder alignment stalled), budget_review (procurement or finance hold), champion_ghosting (primary contact unresponsive), or competitor_evaluation (active competitive comparison). Uses deal activity patterns, email response gaps, and stage duration signals.
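For illustration, a classified record handed downstream might look like the following. The field names and signal labels here are assumptions, not the bundle's documented contract — only the 4 category values come from the blueprint itself:

```javascript
// Hypothetical shape of a classifier output record (field names assumed).
const classified = {
  deal_id: 1042,
  stall_reason: "champion_ghosting", // one of the 4 documented categories
  confidence: 0.82,                  // surfaced for transparency
  signals: ["no_reply_14d", "single_threaded"]
};

// Guarding the handoff: reject anything outside the known categories
// rather than passing garbage downstream.
const VALID_REASONS = ["decision_delay", "budget_review",
                       "champion_ghosting", "competitor_evaluation"];
if (!VALID_REASONS.includes(classified.stall_reason)) {
  throw new Error(`Unknown stall reason: ${classified.stall_reason}`);
}
```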

Step 3: The Scorer

Tier: Code-only

Computes urgency tier for each stalled deal: CRITICAL (≥7 days stalled), WARNING (4-6 days), or MONITOR (3 days). Factors in deal value, stall reason category, and number of prior follow-up attempts. Enforces configurable follow-up limit to prevent over-contact.
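Because this agent is Code-only, the tiering reduces to deterministic threshold checks. A minimal sketch using the documented defaults (helper names are illustrative, not the bundle's actual code):

```javascript
// Urgency tiering with the documented default thresholds.
function urgencyTier(daysStalled) {
  if (daysStalled >= 7) return "CRITICAL";
  if (daysStalled >= 4) return "WARNING";
  return "MONITOR"; // 3 days = the default stale threshold
}

// Enforce the configurable cap on automated follow-ups per deal.
function shouldFollowUp(deal, followUpLimit = 3) {
  return (deal.follow_up_count || 0) < followUpLimit;
}
```

Deals past the follow-up limit are still counted in the Slack digest; they simply stop receiving automated drafts.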

Step 4: The Writer

Tier: Tier 3 Creative

Generates personalized follow-up email drafts tailored to each stall reason category. Decision delay gets stakeholder alignment hooks, budget review gets ROI reinforcement, champion ghosting gets multi-thread outreach, competitor evaluation gets differentiation talking points. Each draft includes the deal context and specific next-step recommendations.

Step 5: The Dispatcher

Tier: Code-only

Creates Gmail draft emails for each stalled deal (draft mode by default, configurable to auto-send) and posts a daily Slack digest with stall counts by category, urgency distribution, and top 5 deals requiring immediate attention.
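As a rough illustration of the digest assembly (the real workflow posts via Slack's API; the field names here are assumptions):

```javascript
// Sketch of how the daily digest payload could be aggregated.
function buildDigest(deals) {
  // Stall counts by category.
  const byReason = {};
  for (const d of deals) {
    byReason[d.stall_reason] = (byReason[d.stall_reason] || 0) + 1;
  }
  // Top 5 deals: longest stalled first, highest value as tiebreaker.
  const top5 = [...deals]
    .sort((a, b) => b.days_stalled - a.days_stalled || b.value - a.value)
    .slice(0, 5);
  return { counts: byReason, top_deals: top5.map(d => d.title) };
}
```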

The entire pipeline executes without manual intervention. From trigger to output, every decision point is deterministic: if a condition is met, the next agent fires; if not, the record is handled according to a documented fallback path. There are no silent failures. Every execution produces a traceable audit trail that you can review, export, or feed into your own reporting tools.

This architecture follows the ForgeWorkflows principle of tested, measured, documented automation. Every node in the pipeline has been validated during ITP (Inspection and Test Plan) testing, and the error handling matrix in the bundle documents the recovery path for each failure mode.

INFO

Tier references indicate the reasoning complexity assigned to each agent. Higher tiers use more capable models for tasks that require nuanced judgment, while lower tiers use efficient models for classification and routing tasks. This tiered approach optimizes both quality and cost.

Cost Breakdown

Daily proposal follow-up with 4-category stall classification, 3-tier urgency scoring, and personalized email drafts delivered via Gmail and Slack.

The primary operating cost for Proposal Follow-Up Automator is the per-execution LLM inference cost. Based on ITP testing, the measured cost is: Cost per Run: $0.015 per deal. This figure includes all API calls across all agents in the pipeline — not just the primary reasoning step, but every classification, scoring, and output generation call.

To put this in context, consider the manual alternative. A skilled team member performing the same work manually costs $50–75/hour at a fully loaded rate (salary, benefits, tools, overhead). If the manual version of this workflow takes 30–60 minutes per cycle, that is $25–75 per execution in human labor. The blueprint executes the same pipeline for a fraction of that cost, with consistent quality and zero fatigue degradation.

Infrastructure costs are separate from per-execution LLM costs. You will need an n8n instance (self-hosted or cloud) and active accounts for the integrated services. At ~$0.015 per deal, the LLM cost works out to roughly $10–15/month for 30+ deals per day; infrastructure costs on top of that depend on your usage volume and plan tiers.
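The monthly projection is straightforward arithmetic. A worked version using the measured per-deal figure and an assumed volume of 30 deals per day:

```javascript
// Monthly LLM cost projection from the measured per-deal figure.
// The 30 deals/day volume is an assumption for illustration.
const costPerDeal = 0.015; // measured ITP figure, $ per deal
const dealsPerDay = 30;
const daysPerMonth = 30;

const monthlyLlmCost = costPerDeal * dealsPerDay * daysPerMonth;
console.log(`~$${monthlyLlmCost.toFixed(2)}/month`); // lands within the quoted $10-15 range
```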

Quality assurance: BQS audit result is 12/12 PASS. ITP result is 20/20 records, all milestones PASS. These are not marketing claims — they are test results from structured inspection protocols that you can review in the product documentation.

TIP

Monthly projection: if you run this blueprint 100 times per month, multiply the per-execution cost by 100 and add your infrastructure costs. Most teams find the total is less than one hour of manual labor per month.

What's in the Bundle

7 files. Main workflow + scheduler + prompts + docs.

When you purchase Proposal Follow-Up Automator, you receive a complete deployment bundle. This is not a SaaS subscription or a hosted service — it is a set of files that you own and run on your own infrastructure. Here is what is included:

  • CHANGELOG.md — Version history
  • README.md — Setup and configuration guide
  • docs/TDD.md — Technical Design Document
  • proposal_follow_up_automator_v1_0_0.json — n8n workflow (main pipeline)
  • system_prompts/analyst_system_prompt.md — Analyst system prompt
  • system_prompts/writer_system_prompt.md — Writer system prompt
  • workflow/pfa_scheduler_v1_0_0.json — Scheduler workflow

Start with the README.md. It walks through the deployment process step by step, from importing the workflow JSON into n8n to configuring credentials and running your first test execution. The dependency matrix lists every required service, API key, and estimated cost so you know exactly what you need before you start.

Every file in the bundle is designed to be read, understood, and modified. There is no obfuscated code, no compiled binaries, and no phone-home telemetry. You get the source, you own the source, and you control the execution environment.

Who This Is For

Proposal Follow-Up Automator is built for sales and revops teams that need to automate a specific workflow without building from scratch. If your team matches the following profile, this blueprint is designed for you:

  • You operate in a sales or revops function and handle the workflow this blueprint automates on a recurring basis
  • You have (or are willing to set up) an n8n instance — self-hosted or cloud
  • You have active accounts for the required integrations: Pipedrive CRM with active pipeline, Anthropic API key, Gmail account, Slack workspace (Bot Token with chat:write)
  • You have API credentials available: Anthropic API, Pipedrive (API Token), Gmail (OAuth2), Slack (Bot Token, httpHeaderAuth Bearer, chat:write)
  • You are comfortable importing a workflow JSON and configuring API keys (the README guides you, but basic technical comfort is expected)

What this blueprint does NOT do:

  • Does not modify deal stages or values — read-only analysis with email drafts as output
  • Does not replace human judgment on when to send — draft mode lets reps review before sending
  • Does not predict deal outcomes — it classifies current stall reasons and generates follow-up content
  • Does not monitor real-time deal changes — daily batch analysis at configured schedule
  • Does not handle initial outreach — designed for deals already at proposal stage that have stalled

Review the dependency matrix and prerequisites before purchasing. If you are unsure whether your environment meets the requirements, contact support@forgeworkflows.com before buying.

NOTE

All sales are final after download. Review the full dependency matrix, prerequisites, and integration requirements on the product page before purchasing. Questions? Contact support@forgeworkflows.com.

Getting Started

Deployment follows a structured sequence. The Proposal Follow-Up Automator bundle is designed for the following tools: n8n, Anthropic API, Pipedrive, Gmail, Slack. Here is the recommended deployment path:

  1. Step 1: Import workflows and configure credentials. Import both workflow JSON files into n8n (main + scheduler). Configure Pipedrive API Token, Gmail OAuth2, Slack Bot Token (httpHeaderAuth with Bearer prefix, chat:write scope), and Anthropic API key following the README.
  2. Step 2: Configure proposal stage and thresholds. Set PROPOSAL_STAGE_NAME (your Pipedrive stage name for proposals), STALE_THRESHOLD_DAYS (default 3), FOLLOW_UP_LIMIT (default 3), GMAIL_DRAFT_MODE (default true), and SLACK_CHANNEL in the scheduler Build Payload node.
  3. Step 3: Activate scheduler and verify. Update the webhook URL in the scheduler to match your main workflow webhook path. Activate both workflows. Send a test POST with _is_itp: true and sample deal data. Verify Gmail drafts are created and the Slack digest appears.
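For reference, the configuration values from Step 2 might be expressed in the scheduler's Build Payload node as a plain object like this — the stage label and channel name are placeholders you replace with your own:

```javascript
// Configuration keys from Step 2 with their documented defaults.
// PROPOSAL_STAGE_NAME and SLACK_CHANNEL values are placeholders.
const config = {
  PROPOSAL_STAGE_NAME: "Proposal Sent", // your Pipedrive proposal stage label
  STALE_THRESHOLD_DAYS: 3,              // inactivity days before a deal counts as stalled
  FOLLOW_UP_LIMIT: 3,                   // max automated follow-ups per deal
  GMAIL_DRAFT_MODE: true,               // true = create drafts only; false = auto-send
  SLACK_CHANNEL: "#deal-recovery"       // digest destination (placeholder)
};
```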

Before running the pipeline on live data, execute a manual test run with sample input. This validates that all credentials are configured correctly, all API endpoints are reachable, and the output format matches your expectations. The README includes test data examples for this purpose.

Once the test run passes, you can configure the trigger for production use (scheduled, webhook, or event-driven — depending on the blueprint design). Monitor the first few production runs to confirm the pipeline handles real-world data as expected, then let it run.

For technical background on how ForgeWorkflows blueprints are built and tested, see the Blueprint Quality Standard (BQS) methodology and the Inspection and Test Plan (ITP) framework. These documents describe the quality gates every blueprint passes before listing.

Ready to deploy? View the Proposal Follow-Up Automator product page for full specifications, pricing, and purchase.

TIP

Run a manual test with sample data before switching to production triggers. This catches credential misconfigurations and API endpoint issues before they affect real workflows.

Frequently Asked Questions

How does it classify stall reasons?

The Classifier analyzes deal activity patterns — email response gaps, stage duration relative to pipeline benchmarks, and activity sequence signals. Each pattern maps to one of 4 stall categories. Classification confidence is included in the output for transparency.

Can I customize the stall threshold?

Yes. STALE_THRESHOLD_DAYS (default 3) controls when a deal is considered stalled. FOLLOW_UP_LIMIT (default 3) caps the number of automated follow-ups per deal. Both are configurable in the scheduler payload.

Does it send emails automatically?

By default, it creates Gmail drafts for human review. Set GMAIL_DRAFT_MODE to false to enable auto-send. The daily Slack digest always fires regardless of draft mode.

Is there a refund policy?

All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.

Get Proposal Follow-Up Automator

$199

View Blueprint
