How Feature Request Extractor Automates Feature Requests
The Problem
Every feature request in Slack becomes a structured Linear issue. Automatically. That single sentence captures a workflow gap that costs product and engineering teams hours every week. The manual process behind what Feature Request Extractor automates is familiar to anyone who has worked in a revenue organization: someone pulls data from Slack and Linear, copies it into a spreadsheet or CRM, applies a mental checklist, writes a summary, and routes it to the next person in the chain. Repeat for every record. Every day.
Three problems make this unsustainable at scale. First, the process does not scale. As volume grows, the human bottleneck becomes the constraint. Whether it is inbound leads, deal updates, or meeting prep, a person can only process a finite number of records before quality degrades. Second, the process is inconsistent. Different team members apply different criteria, use different formats, and make different judgment calls. There is no single standard of quality, and the output varies from person to person and day to day. Third, the process is slow. By the time a manual review is complete, the window for action may have already closed. Deals move, contacts change roles, and buying signals decay.
These are not theoretical concerns. They are the operational reality for product and engineering teams handling feature requests and product intelligence workflows. Every hour spent on manual data processing is an hour not spent on the work that actually moves the needle: building relationships, closing deals, and driving strategy.
This is the gap Feature Request Extractor fills.
Teams typically spend 30-60 minutes per cycle on the manual version of this workflow. Feature Request Extractor reduces that to seconds per execution, with consistent output quality every time.
What This Blueprint Does
One Classifier. Real-Time Triage. Zero Manual Work.
Feature Request Extractor is a 22-node n8n workflow with 4 specialized agents. Each agent handles a distinct phase of the pipeline, and the handoff between agents is deterministic — no ambiguous routing, no dropped records. The blueprint is designed so that each agent does one thing well, and the overall pipeline produces a consistent, auditable output on every run.
Here is what each agent does:
- Input Filter (Code Only): Receives the Slack Events API webhook for every message posted in the configured channel and drops bot messages and too-short messages before any LLM call.
- The Classifier (Tier 1 Reasoning): The primary reasoning model evaluates whether the message is a genuine feature request.
- Issue Creator (GraphQL): Creates a structured Linear issue via GraphQL issueCreate mutation with all classified fields — title, description, priority, product area label, and a source link back to the original Slack message.
- Notifier (HTTP): Adds a ✅ reaction to the original Slack message and posts a thread reply with the Linear issue link (e.g., "Feature request captured → ENG-142").
When the pipeline completes, you get structured output that is ready to act on. The blueprint bundle includes everything needed to deploy, configure, and customize the workflow. Specifically, you receive:
- Production-ready 22-node n8n workflow — import and deploy
- Real-time Slack monitoring — every message evaluated as it arrives
- Structured Linear issues with title, description, priority, product area, and source link
- Configurable product area taxonomy — adapt to your domain
- Confidence threshold tuning — control precision vs recall
- 24-hour team/label ID caching — zero redundant Linear API calls
- Slack ✅ reaction + thread reply confirmation on every captured request
- $0.046/FR detected, $0.023/non-FR exit, $0.029 blended per message
- ITP test results with 20 fixtures and 14/14 milestones
Every component is designed to be modified. The agent prompts are plain text files you can edit. The workflow nodes can be rearranged or extended. The scoring criteria, output formats, and routing logic are all exposed as configurable parameters — not buried in application code. This means Feature Request Extractor adapts to your specific process, terminology, and integration requirements without forking the entire workflow.
Every agent prompt in the bundle is a standalone text file. You can customize scoring criteria, output formats, and routing logic without modifying the workflow JSON itself.
How the Pipeline Works
Understanding how the pipeline works helps you customize it for your environment and troubleshoot issues when they arise. Here is a step-by-step walkthrough of the Feature Request Extractor execution flow.
Step 1: Input Filter
Tier: Code Only
Slack Events API fires a webhook on every message posted in the configured channel. Bot messages and messages shorter than the configurable minimum length are filtered immediately — no LLM calls wasted on noise. The remaining messages are forwarded to classification. Zero LLM cost.
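The filter logic can be sketched as a small Code-node function. This is a hypothetical helper, not the exact code in the bundle — `shouldClassify` and the default minimum length of 20 characters are assumptions for illustration; the field names (`text`, `bot_id`, `subtype`) are standard Slack message event fields.

```javascript
// Sketch of the input-filter stage (hypothetical; the bundled Code node
// may differ). Drops bot messages and short messages before any LLM call.
const MIN_LENGTH = 20; // assumed default; configurable in the workflow

function shouldClassify(event, minLength = MIN_LENGTH) {
  if (!event || typeof event.text !== "string") return false;
  // Slack marks bot traffic with bot_id and/or subtype "bot_message".
  if (event.bot_id || event.subtype === "bot_message") return false;
  return event.text.trim().length >= minLength;
}

console.log(shouldClassify({ text: "thanks!" })); // false — too short
console.log(shouldClassify({ text: "We need bulk CSV export for the admin dashboard" })); // true
```

Because this stage is pure code, tuning the minimum length or adding extra exclusions (e.g. messages that are only emoji) costs nothing per execution.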
This stage is critical because it ensures that downstream agents receive structured, validated input. Each agent in the pipeline trusts the output contract of the previous agent. If Input Filter identifies an issue — a missing field, a low-confidence score, or an unexpected input format — the pipeline handles it explicitly rather than passing garbage downstream. This is the difference between a prototype and a production-grade workflow: every handoff is defined, every edge case is documented.
Step 2: The Classifier
Tier: Tier 1 Reasoning
The primary reasoning model evaluates whether the message is a genuine feature request. If yes, it extracts a structured title (action-verb format), a description, a priority (1–4), and a product area from your configured taxonomy. The confidence threshold is configurable (default 0.7), chain-of-thought is enforced, and non-requests exit immediately.
Enforced chain-of-thought keeps every verdict auditable: the classification record includes the reasoning that produced it, and non-requests exit here before any downstream API calls are made.
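The threshold gate downstream of the Classifier can be sketched as follows. The field names (`is_feature_request`, `confidence`, etc.) mirror the fields described above but are assumptions — the exact JSON contract lives in the bundled system prompt.

```javascript
// Sketch of the gate applied to the Classifier's output (field names
// assumed for illustration; the bundled schema may differ).
const DEFAULT_THRESHOLD = 0.7;

function gateClassification(result, threshold = DEFAULT_THRESHOLD) {
  const required = ["is_feature_request", "title", "priority", "product_area", "confidence"];
  for (const field of required) {
    // Missing fields are surfaced explicitly, never passed downstream.
    if (!(field in result)) throw new Error(`Classifier output missing "${field}"`);
  }
  // Non-requests and low-confidence verdicts exit the pipeline here.
  return result.is_feature_request && result.confidence >= threshold;
}

const verdict = {
  is_feature_request: true,
  title: "Add bulk CSV export to admin dashboard",
  priority: 2,
  product_area: "Reporting",
  confidence: 0.91,
};
console.log(gateClassification(verdict)); // true — proceeds to Issue Creator
```

Raising the threshold trades recall for precision: fewer false-positive issues in Linear, at the cost of occasionally missing a vaguely worded request.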
Step 3: Issue Creator
Tier: GraphQL
Creates a structured Linear issue via GraphQL issueCreate mutation with all classified fields — title, description, priority, product area label, and a source link back to the original Slack message. Team and label IDs are cached for 24 hours via workflow static data — zero redundant API calls.
Caching team and label IDs in workflow static data means a steady-state run makes exactly one Linear API call: the issueCreate mutation itself.
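The caching behavior can be sketched like this. `staticData` stands in for n8n's workflow static data store, and `fetchTeamId` is a placeholder for the Linear GraphQL lookup — both names are assumptions for illustration. The `issueCreate` mutation shape follows Linear's public GraphQL API.

```javascript
// Sketch of the 24-hour team-ID cache (label IDs work the same way).
const TTL_MS = 24 * 60 * 60 * 1000;

async function getTeamId(staticData, fetchTeamId, now = Date.now()) {
  const cached = staticData.linearTeam;
  if (cached && now - cached.fetchedAt < TTL_MS) return cached.id; // cache hit: no API call
  const id = await fetchTeamId(); // one Linear API call on a miss
  staticData.linearTeam = { id, fetchedAt: now };
  return id;
}

// The issueCreate mutation then carries the classified fields:
const ISSUE_CREATE = `
  mutation IssueCreate($input: IssueCreateInput!) {
    issueCreate(input: $input) { success issue { identifier url } }
  }`;
```

Because static data survives between executions (while the workflow stays active), only the first run after activation pays the lookup cost.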
Step 4: Notifier
Tier: HTTP
Adds a ✅ reaction to the original Slack message and posts a thread reply with the Linear issue link (e.g., "Feature request captured → ENG-142"). The thread reply confirms triage happened — no requests silently disappear.
Closing the loop in Slack matters: the requester sees in-thread confirmation, and the channel retains a visible record of what was captured and where it went.
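The two Slack Web API calls the Notifier makes can be sketched as payload builders. The method names (`reactions.add`, `chat.postMessage`) are real Slack endpoints; the helper function and its argument shapes are assumptions for illustration.

```javascript
// Sketch of the Notifier's two Slack Web API payloads (hypothetical helper).
function buildNotifierCalls(channel, messageTs, issue) {
  return [
    {
      method: "reactions.add",
      body: { channel, timestamp: messageTs, name: "white_check_mark" }, // the ✅ reaction
    },
    {
      method: "chat.postMessage",
      body: {
        channel,
        thread_ts: messageTs, // reply in-thread, not in-channel
        text: `Feature request captured → ${issue.identifier}\n${issue.url}`,
      },
    },
  ];
}

const calls = buildNotifierCalls("C0123", "1712345678.000100", {
  identifier: "ENG-142",
  url: "https://linear.app/acme/issue/ENG-142",
});
console.log(calls[1].body.text); // thread-reply text including "ENG-142"
```

Using `thread_ts` keeps the confirmation attached to the original request instead of adding noise to the channel.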
The entire pipeline executes without manual intervention. From trigger to output, every decision point is deterministic: if a condition is met, the next agent fires; if not, the record is handled according to a documented fallback path. There are no silent failures. Every execution produces a traceable audit trail that you can review, export, or feed into your own reporting tools.
This architecture follows the ForgeWorkflows principle of tested, measured, documented automation. Every node in the pipeline has been validated during ITP (Inspection and Test Plan) testing, and the error handling matrix in the bundle documents the recovery path for each failure mode.
Tier references indicate the reasoning complexity assigned to each agent. Higher tiers use more capable models for tasks that require nuanced judgment, while lower tiers use efficient models for classification and routing tasks. This tiered approach optimizes both quality and cost.
Cost Breakdown
Every metric is ITP-measured. The Feature Request Extractor classifies Slack messages in real time and creates Linear issues at a blended $0.029 per message, with a single call to the primary reasoning model per classification.
The primary operating cost for Feature Request Extractor is the per-execution LLM inference cost. Based on ITP testing, the measured cost is: Cost per Message: $0.046/FR detected | $0.023/non-FR exit | $0.029 blended per message. This figure includes all API calls across all agents in the pipeline — not just the primary reasoning step, but every classification, scoring, and output generation call.
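The blended figure is consistent with the per-outcome costs. A quick check (the feature-request share below is inferred from the published numbers, not a figure from the ITP data):

```javascript
// Back-of-envelope check: $0.046 per FR detected, $0.023 per non-FR exit,
// and a $0.029 blend implies roughly 26% of classified messages are FRs.
const COST_FR = 0.046;
const COST_NON_FR = 0.023;

function blendedCost(frShare) {
  return frShare * COST_FR + (1 - frShare) * COST_NON_FR;
}

function impliedFrShare(blended) {
  return (blended - COST_NON_FR) / (COST_FR - COST_NON_FR);
}

console.log(impliedFrShare(0.029).toFixed(2)); // 0.26
console.log(blendedCost(0.26).toFixed(3));     // 0.029
```

If your channel has a different mix of requests to chatter, `blendedCost` gives a quick estimate of your actual per-message cost.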
To put this in context, consider the manual alternative. A skilled team member performing the same work manually costs $50–75/hour at a fully loaded rate (salary, benefits, tools, overhead). If the manual version of this workflow takes 30–60 minutes per cycle, that is $25–75 per execution in human labor. The blueprint executes the same pipeline for a fraction of that cost, with consistent quality and zero fatigue degradation.
Infrastructure costs are separate from per-execution LLM costs. You will need an n8n instance (self-hosted or cloud) and active accounts for the integrated services. The estimated monthly infrastructure cost is $3–5/month, depending on your usage volume and plan tiers.
Quality assurance: BQS audit result is 12/12 PASS. ITP result is 20/20 records | 14/14 milestones PASS | Classifier consistency [0.97, 0.97, 0.97] variance=0. These are not marketing claims — they are test results from structured inspection protocols that you can review in the product documentation.
Monthly projection: if you run this blueprint 100 times per month, multiply the per-execution cost by 100 and add your infrastructure costs. Most teams find the total is less than one hour of manual labor per month.
What's in the Bundle
9 files — workflow, system prompt, configuration guides, and complete documentation.
When you purchase Feature Request Extractor, you receive a complete deployment bundle. This is not a SaaS subscription or a hosted service — it is a set of files that you own and run on your own infrastructure. Here is what is included:
- feature_request_extractor_v1_0_0.json — The 22-node n8n workflow
- README.md — 10-minute setup guide with Slack Events API and Linear configuration
- system_prompt_classifier.txt — Classifier system prompt (CoT-enforced, action-verb titles, 4-level priority)
- product_area_guide.md — Product area configuration with example taxonomies (SaaS B2B, dev tools, consumer apps)
- linear_setup_guide.md — Linear personal API key, GraphQL endpoint, n8n credential setup
- confidence_threshold_guide.md — Confidence threshold tuning guide
- itp_results.md — ITP test results (20 fixtures, 14/14 milestones)
- blueprint_dependency_matrix.md — Prerequisites and cost estimates
- CHANGELOG.md — Version history
Start with the README.md. It walks through the deployment process step by step, from importing the workflow JSON into n8n to configuring credentials and running your first test execution. The dependency matrix lists every required service, API key, and estimated cost so you know exactly what you need before you start.
Every file in the bundle is designed to be read, understood, and modified. There is no obfuscated code, no compiled binaries, and no phone-home telemetry. You get the source, you own the source, and you control the execution environment.
Who This Is For
Feature Request Extractor is built for product and engineering teams that need to automate a specific workflow without building from scratch. If your team matches the following profile, this blueprint is designed for you:
- You operate in a product or engineering function and handle the workflow this blueprint automates on a recurring basis
- You have (or are willing to set up) an n8n instance — self-hosted or cloud
- You have active accounts for the required integrations: Slack workspace, Linear account
- You have API credentials available: Anthropic API, Slack Events API, Linear API
- You are comfortable importing a workflow JSON and configuring API keys (the README guides you, but basic technical comfort is expected)
This is NOT for you if you need any of the following — current limitations:
- Deduplication of feature requests across messages — each message is classified independently
- Monitoring of DMs, threads, or multiple channels — a single configured channel only
- Updates to existing Linear issues — the workflow creates new issues only
- Integration with Jira, Asana, or other project tools — Linear only
Review the full dependency matrix, prerequisites, and integration requirements on the product page before purchasing — all sales are final after download. If you are unsure whether your environment meets the requirements, contact support@forgeworkflows.com before buying.
Getting Started
Deployment follows a structured sequence. The Feature Request Extractor bundle is designed for the following tools: n8n, Anthropic API, Slack, Linear. Here is the recommended deployment path:
- Step 1: Import workflow and configure credentials. Import feature_request_extractor_v1_0_0.json into n8n. Configure Slack Bot Token (HTTP Header Auth), Anthropic API key, and Linear Personal API key (HTTP Header Auth, Authorization: Bearer lin_api_...).
- Step 2: Set up Slack Events API and configure taxonomy. Create a Slack app with Events API subscription for message.channels. Point the webhook URL to your n8n workflow. Configure your product area taxonomy and confidence threshold in the workflow nodes.
- Step 3: Activate and verify. Enable the workflow in n8n. Post a test feature request message in the configured Slack channel. Verify that a Linear issue is created with correct fields and a ✅ reaction + thread reply appear on the Slack message.
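One detail worth knowing before Step 2: when you save the request URL in your Slack app, Slack sends a `url_verification` handshake and expects the challenge echoed back. A minimal sketch of the logic your webhook endpoint must implement (the handler function here is hypothetical; n8n's webhook plus a small Code/Respond node covers this):

```javascript
// Sketch of the Events API entry point: answer the url_verification
// handshake, acknowledge everything, and forward message events onward.
function handleSlackEvent(payload) {
  if (payload.type === "url_verification") {
    // Slack expects the challenge echoed back to confirm the URL.
    return { statusCode: 200, body: { challenge: payload.challenge } };
  }
  if (payload.type === "event_callback" && payload.event?.type === "message") {
    return { statusCode: 200, body: { ok: true }, forward: payload.event };
  }
  return { statusCode: 200, body: { ok: true } }; // ack all other events
}

const handshake = handleSlackEvent({ type: "url_verification", challenge: "abc123" });
console.log(handshake.body.challenge); // "abc123"
```

Responding quickly with a 200 matters: Slack retries events it considers unacknowledged, which can cause duplicate classifications.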
Before running the pipeline on live data, execute a manual test run with sample input. This validates that all credentials are configured correctly, all API endpoints are reachable, and the output format matches your expectations. The README includes test data examples for this purpose.
Once the test run passes, you can configure the trigger for production use (scheduled, webhook, or event-driven — depending on the blueprint design). Monitor the first few production runs to confirm the pipeline handles real-world data as expected, then let it run.
For technical background on how ForgeWorkflows blueprints are built and tested, see the Blueprint Quality Standard (BQS) methodology and the Inspection and Test Plan (ITP) framework. These documents describe the quality gates every blueprint passes before listing.
Ready to deploy? View the Feature Request Extractor product page for full specifications, pricing, and purchase.
Run a manual test with sample data before switching to production triggers. This catches credential misconfigurations and API endpoint issues before they affect real workflows.
Frequently Asked Questions
How does it differ from Support Pattern Analyzer?
Distinct products with zero overlap. SPA runs weekly, pulls Freshdesk tickets, clusters support patterns, and delivers a digest to Notion + Slack for CS teams. FRE monitors Slack in real time, classifies individual messages as feature requests, and creates Linear issues for PM/product teams. Different sources, cadences, outputs, and buyers.
What counts as a feature request?
The Classifier evaluates each message for intent — explicit asks ("we need X"), implicit needs ("it would be great if..."), and enhancement suggestions. Bot messages, short messages, and general discussion are filtered before the LLM call. Confidence threshold (default 0.7) controls the boundary.
How is priority assigned?
The Classifier assigns priority 1–4 based on message content: urgency language, customer tier indicators, scope of the request, and alignment with common product patterns. Priority maps directly to Linear issue priority levels.
How does product area classification work?
You configure your product area taxonomy in the workflow — a list of areas with descriptions. The Classifier maps each feature request to the closest matching area. The guide includes example taxonomies for SaaS B2B, dev tools, and consumer apps.
How much does each message cost?
ITP-measured: $0.046 per feature request detected (full classification + issue creation), $0.023 per non-feature-request (early exit after classification). Blended average across all messages: $0.029. Bot messages and short messages are filtered before the LLM — $0.00 cost.
What if the same feature request is posted twice?
Each message is classified independently. The workflow does not deduplicate across messages — that is a Linear-side concern (search before creating). Duplicate detection across Slack messages is planned for v1.1.
Does caching persist across restarts?
Team/label ID caching uses n8n workflow static data with a 24-hour TTL. Static data resets when the workflow is deactivated/reactivated or when n8n restarts. After a restart, the first run re-fetches team and label IDs from Linear (one extra API call).
Is there a refund policy?
All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.
Related Blueprints
Customer Onboarding Intelligence Agent
Deal closes. AI builds the onboarding brief before CS picks up the phone.
Support Pattern Analyzer
AI reads your Freshdesk. Delivers a weekly support intelligence brief before standup.
Email Intent Classifier
AI reads inbound emails, scores buyer intent across 7 categories, and routes to Pipedrive — deals, activities, and notes created automatically.
Buying Signal Detector
Know which accounts just entered a buying window. Before your competitors do.