How Apollo List Quality Scorer Automates Outbound Prospecting
The Problem
Your team has a fresh prospect list — an Apollo export, a purchased list, or event attendee data. Some records carry verified emails and complete firmographics; others are missing titles or companies, or hide behind catch-all domains. Which ones deserve a sequence slot — and which will bounce or never convert? A rep answering this question manually checks Apollo, cross-references Google Sheets and email history, and makes a judgment call on each record. That is 30–60 minutes of triage per batch before any outreach happens.
The cost is not just time — it is wasted sequence slots and missed revenue. Low-quality records slip into sequences because signals were missed, and scoring criteria drift between team members, so list quality becomes a guess rather than a measurement. Apollo List Quality Scorer automates the outbound prospecting and data quality workflow from data extraction through analysis to structured output, with zero manual data entry.
Teams typically spend 30–60 minutes per cycle on the manual version of this workflow. Apollo List Quality Scorer reduces that to seconds per execution, with consistent output quality and zero manual data entry.
What This Blueprint Does
Four Agents. Five Quality Signals. Scored Lists Before Sequencing.
The Apollo List Quality Scorer pipeline runs 4 agents in sequence. Fetcher pulls data from Apollo and Google Sheets, and Formatter delivers the output. Here is what happens at each stage and why it matters.
- Fetcher (Webhook + Code): Webhook accepts JSON array or CSV payload containing prospect lists.
- Researcher (Tier 2 Classification): Sonnet + web_search activates only for thin Apollo data — prospects where Apollo returned incomplete company or contact information.
- Analyst (Tier 2 Classification): Sonnet scores each prospect across 5 weighted LQS criteria: icp_fit (30%), data_completeness (20%), deliverability_signals (20%), enrichment_quality (15%), recency (15%).
- Formatter (Code + 3-way Route): Routes based on LQS composite score.
When the pipeline completes, you get structured output that is ready to act on. The blueprint bundle includes everything needed to deploy, configure, and customize the workflow:
- ITP-tested 27-node n8n workflow — import and deploy
- Webhook input accepts JSON array or CSV payload for batch processing
- Apollo.io People Enrichment API per prospect — email verification, company data, tech stack
- LQS 5-criteria weighted scoring: icp_fit (30%), data_completeness (20%), deliverability_signals (20%), enrichment_quality (15%), recency (15%)
- Per-criteria scoring (0–10) with evidence-based assessment and reasoning summary
- 3-way routing: LQS ≥ 7 pursue, 4–6 enrich, < 4 remove
- Thin data detection: missing fields trigger Researcher web_search enrichment automatically
- Annotated Google Sheets output with 16 columns (prospect data, scores, action tags)
- Dual-Sonnet architecture: $0.024/prospect all-in — no Opus required
- ITP test results with 20 records, 14/14 milestones, LQS range 0.45–9.37
Scoring thresholds, output destinations, and CRM field mappings are configurable in the system prompts — no workflow JSON edits required. This means Apollo List Quality Scorer adapts to your specific process, terminology, and integration requirements without forking the entire workflow.
Every agent prompt is a standalone text file. Customize scoring thresholds, qualification criteria, and output formatting without touching the workflow JSON.
How the Pipeline Works
Understanding how the pipeline works helps you customize it for your environment and troubleshoot issues when they arise. Here is a step-by-step walkthrough of the Apollo List Quality Scorer execution flow.
Step 1: Fetcher
Tier: Webhook + Code
The pipeline starts here. Webhook accepts JSON array or CSV payload containing prospect lists. Fetcher parses the input, normalizes field names, and calls Apollo.io People Enrichment API per prospect — email verification status, company data, technology stack, headcount, industry, and LinkedIn URL. Thin data detection flags prospects missing key fields for downstream web_search enrichment.
This stage ensures all downstream agents receive clean, validated input. If this step returns incomplete data, every downstream agent works with a degraded picture.
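A rough sketch of what that normalization step does, in Python. The field names and the "thin data" rule here are illustrative assumptions — the blueprint's exact internal schema may differ.

```python
import csv
import io
import json

# Assumed key fields for thin-data detection (illustrative, not the
# blueprint's exact schema).
REQUIRED_FIELDS = ["email", "company", "title"]

def parse_payload(body: str) -> list[dict]:
    """Accept either a JSON array or CSV text with a header row."""
    body = body.strip()
    if body.startswith("["):
        return json.loads(body)
    return list(csv.DictReader(io.StringIO(body)))

def is_thin(prospect: dict) -> bool:
    """Flag prospects missing key fields for Researcher enrichment."""
    return any(not prospect.get(f) for f in REQUIRED_FIELDS)

csv_body = "email,company,title\nana@acme.com,Acme,\n"
prospects = parse_payload(csv_body)
print(prospects[0], is_thin(prospects[0]))  # missing title -> flagged as thin
```

In the actual workflow this logic lives in an n8n Code node, but the shape of the decision is the same: normalize first, then flag gaps before enrichment.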
Step 2: Researcher
Tier: Tier 2 Classification
Sonnet + web_search activates only for thin Apollo data — prospects where Apollo returned incomplete company or contact information. Enriches missing fields via web search: company website, headcount range, industry classification, recent news. Rich Apollo records skip this step entirely, keeping cost at $0 for well-populated prospects.
Why this step matters: selective enrichment means you only pay for web_search on prospects that need it, and every record reaches the Analyst with enough data to score fairly.
Step 3: Analyst
Tier: Tier 2 Classification
Sonnet scores each prospect across 5 weighted LQS criteria: icp_fit (30%), data_completeness (20%), deliverability_signals (20%), enrichment_quality (15%), recency (15%). Per-criteria scoring 0–10 with evidence. Weighted composite LQS drives 3-way routing: LQS ≥ 7 pursue, 4–6 enrich, < 4 remove. Each score includes a reasoning summary.
Every field in the output is structured for the next agent to consume without parsing.
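The weighted composite and 3-way routing can be sketched as follows. The weights and thresholds are the documented defaults from this page; the function names are illustrative.

```python
# LQS weights as documented: icp_fit 30%, data_completeness 20%,
# deliverability_signals 20%, enrichment_quality 15%, recency 15%.
WEIGHTS = {
    "icp_fit": 0.30,
    "data_completeness": 0.20,
    "deliverability_signals": 0.20,
    "enrichment_quality": 0.15,
    "recency": 0.15,
}

def composite_lqs(scores: dict[str, float]) -> float:
    """Weighted average of per-criteria scores (each 0-10)."""
    return round(sum(scores[k] * w for k, w in WEIGHTS.items()), 2)

def route(lqs: float) -> str:
    """3-way action tag: >= 7 pursue, 4-6 enrich, < 4 remove."""
    if lqs >= 7:
        return "pursue"
    if lqs >= 4:
        return "enrich"
    return "remove"

scores = {
    "icp_fit": 9, "data_completeness": 8, "deliverability_signals": 7,
    "enrichment_quality": 6, "recency": 5,
}
lqs = composite_lqs(scores)  # 2.7 + 1.6 + 1.4 + 0.9 + 0.75 = 7.35
print(lqs, route(lqs))       # 7.35 pursue
```

Note that a strong ICP fit alone cannot rescue a record with poor deliverability: at 30% weight, icp_fit caps out at 3.0 of the composite, which is why the routing uses the weighted blend rather than any single criterion.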
Step 4: Formatter
Tier: Code + 3-way Route
This is the final deliverable — what lands in your inbox or dashboard. Routes based on LQS composite score. Writes annotated Google Sheets output with 16 columns: prospect data, per-criteria scores, LQS composite, action tag (pursue/enrich/remove), and reasoning summary. Summary stats returned via webhook response: total processed, pursue/enrich/remove counts, average LQS, and estimated cost.
The entire pipeline executes without manual intervention. From trigger to output, every decision point follows a documented path. Every execution produces a traceable audit trail.
All nodes have been validated during Independent Test Protocol (ITP) testing on n8n v2.7.5. The error handling matrix in the bundle documents the recovery path for each failure mode.
This blueprint runs on your own n8n instance with your own API keys. Your CRM data never leaves your infrastructure.
Why we designed it this way
We built 100 blueprints in 5 weeks. A RevOps team building one from scratch — scoping requirements, configuring nodes, writing prompts, testing edge cases, documenting error handling — that is 40–80 hours. The factory model works because patterns transfer. Blueprint 47 reuses structural patterns proven in blueprints 1–46.
— ForgeWorkflows Engineering
Cost Breakdown
Every metric is ITP-measured. The Apollo List Quality Scorer validates prospect lists before sequencing — enriching via Apollo.io API, scoring across 5 weighted quality criteria with evidence, and routing to pursue/enrich/remove at $0.024/prospect.
The primary operating cost for Apollo List Quality Scorer is the per-execution LLM inference cost. Based on Independent Test Protocol (ITP) testing, the measured cost is: Cost per Prospect: $0.024/prospect (ITP-measured average). This figure includes all API calls across all agents in the pipeline — not just the primary reasoning step, but every classification, scoring, and output generation call.
To put this in context, consider the manual alternative. A skilled team member performing the same work manually costs $50–75/hour for a sales ops analyst at a fully loaded rate (salary, benefits, tools, overhead). If the manual version of this workflow takes 30–60 minutes per cycle, the per-execution cost in human labor is significant. The blueprint executes the same pipeline for a fraction of that cost, with consistent quality and zero fatigue degradation.
Infrastructure costs are separate from per-execution LLM costs. You will need an n8n instance (self-hosted or cloud) and active accounts for the integrated services. The estimated monthly infrastructure cost is $24/month (1,000 prospects/month), depending on your usage volume and plan tiers.
Quality assurance: Blueprint Quality Standard (BQS) audit result is 12/12 PASS. ITP result is 20 records, 14/14 milestones PASS, LQS range 0.45–9.37. These are not marketing claims — they are test results from structured inspection protocols that you can review in the product documentation.
All cost and performance figures are ITP-measured — tested against real data fixtures on n8n v2.7.5 in March 2026. See the product page for full test methodology.
Monthly projection: if you run this blueprint 100 times per month, multiply the per-execution cost by 100 and add your infrastructure costs. Most teams find the total is less than one hour of manual labor per month.
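That projection is simple arithmetic. A sketch using the figures quoted on this page — $0.024/prospect in LLM cost and an estimated $24/month in infrastructure (your actual infrastructure cost depends on plan tiers):

```python
# ITP-measured per-prospect LLM cost and the estimated monthly
# infrastructure figure quoted above. Volumes are illustrative.
COST_PER_PROSPECT = 0.024
INFRA_PER_MONTH = 24.00

def monthly_cost(prospects_per_month: int) -> float:
    """Total monthly cost: per-prospect LLM spend plus infrastructure."""
    return round(prospects_per_month * COST_PER_PROSPECT + INFRA_PER_MONTH, 2)

print(monthly_cost(1_000))  # 48.0 -> $24 LLM + $24 infrastructure
```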
What's in the Bundle
9 files — workflow JSON, system prompts, scoring guides, and complete documentation.
When you purchase Apollo List Quality Scorer, you receive a complete deployment bundle. This is not a SaaS subscription or a hosted service — it is a set of files that you own and run on your own infrastructure. Here is what is included:
- CHANGELOG.md — Version history
- README.md — Setup and configuration guide
- apollo_list_quality_scorer_v1.0.0.json — n8n workflow (main pipeline)
- docs/TDD.md — Technical Design Document
- system_prompts/analyst_system_prompt.md — Analyst system prompt
- system_prompts/researcher_system_prompt.md — Researcher system prompt
Start with the README.md. It walks through the deployment process step by step, from importing the workflow JSON into n8n to configuring credentials and running your first test execution. The dependency matrix lists every required service, API key, and estimated cost so you know exactly what you need before you start.
Every file in the bundle is designed to be read, understood, and modified. There is no obfuscated code, no compiled binaries, and no phone-home telemetry. You get the source, you own the source, and you control the execution environment.
Who This Is For
Apollo List Quality Scorer is built for Sales teams that need to automate a specific workflow without building from scratch. If your team matches the following profile, this blueprint is designed for you:
- You operate in a sales function and handle the workflow this blueprint automates on a recurring basis
- You have (or are willing to set up) an n8n instance — self-hosted or cloud
- You have active accounts for the required integrations: Apollo.io account (People Enrichment API), Google Workspace (Sheets access)
- You have API credentials available: Anthropic API, Apollo.io (httpHeaderAuth), Google Sheets (OAuth2)
- You are comfortable importing a workflow JSON and configuring API keys (the README guides you, but basic technical comfort is expected)
This is NOT for you if:
- You need to build prospect lists from scratch — that is what Outbound Prospecting Agent does
- You expect it to send outreach emails — it scores and annotates lists for your sequencer
- You need CRM records created — it outputs to Google Sheets for flexible downstream use
- You want deal-stage monitoring — that is what Deal Intelligence Agent does
- You want to score existing CRM contacts — that is what Contact Re-Engagement Scorer does
- You need independent email verification — it relies on Apollo.io verification status as an input signal
Review the full dependency matrix, prerequisites, and integration requirements on the product page before purchasing — all sales are final after download. If you are unsure whether your environment meets the requirements, contact support@forgeworkflows.com before buying.
Edge cases to know about
Every pipeline has boundaries. These are intentional design decisions, not oversights — understanding them helps you deploy with the right expectations and plan for edge cases in your environment.
Does not build prospect lists from scratch — that is what Outbound Prospecting Agent does
This is intentional scope. List building is a different workflow with different inputs: Outbound Prospecting Agent sources prospects via Apollo search, while this blueprint starts from a list you already have — an Apollo export, a purchased list, or event attendee data.
Does not send outreach emails — scores and annotates lists for your sequencer
We scoped this boundary after ITP testing revealed inconsistent results when the pipeline attempted this. The agents handle what they handle well — extending beyond this scope requires custom prompt engineering specific to your data shape.
Does not create CRM records — outputs to Google Sheets for flexible downstream use
This keeps the pipeline focused on a single workflow. Adding this capability would introduce branching logic that varies by organization, and the tradeoff between complexity and reliability was not worth it for a reusable blueprint. Fork the workflow JSON if your use case demands it.
Review the error handling matrix in the bundle for the full list of documented failure modes and recovery paths.
Getting Started
Deployment follows a structured sequence. The Apollo List Quality Scorer bundle is designed for the following tools: n8n, Anthropic API, Apollo.io, Google Sheets. Here is the recommended deployment path:
- Step 1: Import workflow and configure credentials. Import apollo_list_quality_scorer_v1.0.0.json into n8n. Configure the Apollo.io API key (httpHeaderAuth), Google Sheets OAuth2 credential, and Anthropic API key following the setup guides.
- Step 2: Configure ICP and output settings. Customize the ICP definition in the Analyst system prompt to match your target profile. Create the output Google Sheet and configure the Sheet ID in the workflow. Review the LQS scoring guide for criteria definitions and threshold tuning.
- Step 3: Activate and verify. Enable the workflow in n8n. Send a test batch of 5–10 prospects via webhook. Verify Apollo enrichment runs, LQS scores are computed with per-criteria breakdown, action tags are assigned, and the Google Sheet is populated with all 16 columns.
Before running the pipeline on live data, execute a manual test run with sample input. This validates that all credentials are configured correctly, all API endpoints are reachable, and the output format matches your expectations. The README includes test data examples for this purpose.
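A minimal way to send that test batch, sketched in Python with the standard library. The webhook URL and sample field values are placeholders — substitute your own n8n endpoint and real sample data.

```python
import json
from urllib import request

# Placeholder URL -- replace with your n8n webhook endpoint.
WEBHOOK_URL = "https://your-n8n-host/webhook/apollo-lqs"

# Small test batch: one well-populated record, one thin record
# (missing title) to exercise the Researcher enrichment path.
batch = [
    {"email": "ana@acme.com", "company": "Acme", "title": "VP Sales"},
    {"email": "li@globex.io", "company": "Globex", "title": ""},
]

req = request.Request(
    WEBHOOK_URL,
    data=json.dumps(batch).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = request.urlopen(req)  # uncomment once the workflow is active
print(len(batch), "prospects queued for the test run")
```

After the run, check that both records appear in the output sheet and that the thin record carries an enrichment trail, per the verification steps above.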
Once the test run passes, you can configure the trigger for production use (scheduled, webhook, or event-driven — depending on the blueprint design). Monitor the first few production runs to confirm the pipeline handles real-world data as expected, then let it run.
For technical background on how ForgeWorkflows blueprints are built and tested, see the Blueprint Quality Standard (BQS) methodology and the Inspection and Test Plan (ITP) framework. These documents describe the quality gates every blueprint passes before listing.
Ready to deploy? View the Apollo List Quality Scorer product page for full specifications, pricing, and purchase.
Run a manual test with sample data before switching to production triggers. This catches credential misconfigurations and API endpoint issues before they affect real workflows.
Frequently Asked Questions
How does it differ from Outbound Prospecting Agent?
Complementary products for different stages. Outbound Prospecting Agent (OPA) builds prospect lists from scratch via Apollo search and sends personalized outreach via Gmail. Apollo List Quality Scorer (ALQS) validates and scores existing lists you already have — from Apollo exports, purchased lists, or event attendee data — before you load them into a sequencer.
What are the five LQS criteria?
ICP Fit (30%) — industry match, headcount range, geography alignment. Data Completeness (20%) — email presence and verification, phone, title, company coverage. Deliverability Signals (20%) — email verification status, catch-all detection, role-based flagging, domain reputation. Enrichment Quality (15%) — LinkedIn match confidence, website reachability, revenue data. Recency (15%) — profile last updated, company data freshness, activity signals.
What does the 3-way routing do?
LQS ≥ 7 tags the prospect as "pursue" — ready for sequencing. LQS 4–6 tags as "enrich" — worth keeping but needs additional data before outreach. LQS < 4 tags as "remove" — low quality, do not waste sequence slots. Action tags appear in the Google Sheets output column for easy filtering.
When does the Researcher activate?
Only for thin Apollo data — prospects where Apollo returned incomplete company or contact information. Rich Apollo records skip the Researcher entirely, keeping cost at $0 for those prospects. This selective enrichment pattern keeps the average cost at $0.024/prospect instead of paying for web_search on every record.
What input formats does it accept?
Webhook accepts two formats: JSON array (matching Apollo API export structure) and CSV payload (comma-separated with header row). The Fetcher normalizes both formats into a standard internal schema before enrichment. Maximum batch size depends on n8n memory — tested with up to 500 prospects per batch. The ITP test results in the bundle show measured performance across edge cases, not just happy-path data.
Why Sonnet instead of Opus for scoring?
LQS scoring is structured classification against a defined rubric — not open-ended reasoning. Sonnet 4.6 handles this with high accuracy at $3/$15 per million tokens vs Opus at $15/$75. Dual-Sonnet architecture keeps cost at $0.024/prospect. 1,000 prospects = $24 all-in.
What does the Google Sheets output look like?
16 columns: prospect name, email, title, company, industry, headcount, email verification status, 5 per-criteria scores (0–10), LQS composite, action tag (pursue/enrich/remove), reasoning summary, and data source (Apollo/web_search/both). One row per prospect, sorted by LQS descending.
Does it use web scraping?
Partially. The Researcher uses Anthropic web_search for thin Apollo data only — prospects where Apollo returned incomplete information. Rich Apollo records skip web_search entirely. This means scraping reliability only affects prospects with poor Apollo coverage, not the full list.
Is there a refund policy?
All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.
What should I do if the pipeline dead-letters a record?
Check the dead letter output for the failure reason — the error context includes which agent failed and why. Common causes: missing input fields, API rate limits, or malformed data. Fix the underlying issue and reprocess. The error handling matrix in the bundle documents every failure mode and its recovery path.
Related Blueprints
Outbound Prospecting Agent
Apollo-sourced leads, AI-qualified and personally emailed — zero manual prospecting.
Contact Re-Engagement Scorer
Re-engage stale contacts with AI-scored outreach timing.
Contact Intelligence Agent
Automated CRM enrichment that researches, scores, and writes back to Pipedrive — zero manual lookup.