Product Guide · Mar 10, 2026 · 11 min read

How Feature Request Extractor Structures Slack Feedback

By Jonathan Stocco, Founder

The Problem

Your team runs this workflow every week: pull records from Slack and Linear, cross-reference them against a second source, apply judgment, format the output, and route it to three different stakeholders. Last Tuesday it took 30–60 minutes per cycle. This Tuesday the person who usually runs it is out sick, and nobody else knows the exact steps. The output varies by who runs it and when.

The core issue is data fragmentation. The information exists, but assembling it into actionable intelligence requires manual effort that does not scale with headcount. Feature Request Extractor closes that gap by automating the feature-request and product-intelligence workflow, from data extraction through structured output delivery.

INFO

Teams typically spend 30–60 minutes per cycle on the manual version of this workflow. Feature Request Extractor reduces that to seconds per execution, with consistent quality every time.

What This Blueprint Does

One Classifier. Real-Time Triage. Zero Manual Work.

The Feature Request Extractor pipeline runs 4 agents in sequence. The Input Filter receives messages from Slack, and the Notifier delivers the confirmation back to the channel. Here is what happens at each stage and why it matters.

  • Input Filter (Code Only): Slack Events API fires a webhook on every message posted in the configured channel.
  • The Classifier (Tier 1 Reasoning): The primary reasoning model evaluates whether the message is a genuine feature request.
  • Issue Creator (GraphQL): Creates a structured Linear issue via GraphQL issueCreate mutation with all classified fields — title, description, priority, product area label, and a source link back to the original Slack message.
  • Notifier (HTTP): Adds a ✅ reaction to the original Slack message and posts a thread reply with the Linear issue link (e.g., "Feature request captured → ENG-142").

When the pipeline completes, you get structured output that is ready to act on. The blueprint bundle includes everything needed to deploy, configure, and customize the workflow:

  • ITP-tested 22-node n8n workflow — import and deploy
  • Real-time Slack monitoring — every message evaluated as it arrives
  • Structured Linear issues with title, description, priority, product area, and source link
  • Configurable product area taxonomy — adapt to your domain
  • Confidence threshold tuning — control precision vs recall
  • 24-hour team/label ID caching — zero redundant Linear API calls
  • Slack ✅ reaction + thread reply confirmation on every captured request
  • $0.046/FR detected, $0.023/non-FR exit, $0.029 blended per message
  • ITP test results with 20 fixtures and 14/14 milestones

All scoring criteria, output formats, and routing rules are configurable in the system prompts — no workflow JSON edits required. This means Feature Request Extractor adapts to your specific process, terminology, and integration requirements without forking the entire workflow.

TIP

Every component in this pipeline is designed for customization. Modify system prompts to change scoring logic, output format, or routing rules — no code changes required.

How the Pipeline Works

Understanding how the pipeline works helps you customize it for your environment and troubleshoot issues when they arise. Here is a step-by-step walkthrough of the Feature Request Extractor execution flow.

Step 1: Input Filter

Tier: Code Only

The pipeline starts here. Slack Events API fires a webhook on every message posted in the configured channel. Bot messages and messages shorter than the configurable minimum length are filtered immediately — no LLM calls wasted on noise. The remaining messages are forwarded to classification. Zero LLM cost.

This stage ensures all downstream agents receive clean, validated input. If this step returns incomplete data, every downstream agent works with a degraded picture.
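
For illustration, here is a minimal sketch of the kind of filter logic this stage applies, written in TypeScript. The field names follow the Slack Events API message event; the minimum-length constant is an assumption, since the real threshold is configurable in the workflow.

```typescript
// Minimal sketch of the Input Filter logic. Field names follow the Slack
// Events API message event; MIN_LENGTH is an illustrative stand-in for the
// configurable minimum message length.
interface SlackMessageEvent {
  type: string;
  text?: string;
  user?: string;
  bot_id?: string;    // present when a bot posted the message
  subtype?: string;   // e.g. "bot_message", "channel_join"
  channel: string;
  ts: string;
}

const MIN_LENGTH = 20; // illustrative default; set to your configured minimum

function shouldClassify(event: SlackMessageEvent): boolean {
  if (event.bot_id || event.subtype === "bot_message") return false; // skip bot messages
  const text = (event.text ?? "").trim();
  if (text.length < MIN_LENGTH) return false;                        // skip short noise
  return true;                                                       // forward to the Classifier
}
```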

Step 2: The Classifier

Tier: Tier 1 Reasoning

The primary reasoning model evaluates whether the message is a genuine feature request. If it is, the Classifier extracts a structured title (action-verb format), a description, a priority (1–4), and a product area from your configured taxonomy. The confidence threshold is configurable (default 0.7), chain-of-thought is enforced, and non-requests exit immediately.

Why this step matters: This is where the pipeline applies judgment — not just data retrieval, but analysis.
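
The authoritative output contract ships in the bundle as schema_classification_result.json. As a rough sketch of the shape described above (these field names are illustrative, not the shipped schema):

```typescript
// Illustrative shape of a classification result, based on the fields described
// in this step. The shipped contract is schema_classification_result.json.
interface ClassificationResult {
  is_feature_request: boolean;
  confidence: number;          // 0–1; compared against the configurable threshold (default 0.7)
  title?: string;              // action-verb format, e.g. "Add CSV export to reports"
  description?: string;        // cleaned-up summary of the request
  priority?: 1 | 2 | 3 | 4;    // maps to Linear issue priority levels
  product_area?: string;       // one of your configured taxonomy labels
  reasoning?: string;          // chain-of-thought retained for auditability
}
```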

Step 3: Issue Creator

Tier: GraphQL

Creates a structured Linear issue via GraphQL issueCreate mutation with all classified fields — title, description, priority, product area label, and a source link back to the original Slack message. Team and label IDs are cached for 24 hours via workflow static data — zero redundant API calls.

Every field in the output is structured for the next agent to consume without parsing.
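
For reference, the issueCreate call looks roughly like the sketch below. The endpoint and mutation are Linear's public GraphQL API; the input values are examples, and the Bearer header format follows the credential setup described later in this guide.

```typescript
// Sketch of the Issue Creator's Linear call. Endpoint and mutation are Linear's
// public GraphQL API; input values and the helper's shape are illustrative.
const LINEAR_API = "https://api.linear.app/graphql";

const mutation = `
  mutation IssueCreate($input: IssueCreateInput!) {
    issueCreate(input: $input) {
      success
      issue { id identifier url }
    }
  }
`;

async function createIssue(apiKey: string, teamId: string, labelId: string) {
  const res = await fetch(LINEAR_API, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // per the credential setup in this guide
    },
    body: JSON.stringify({
      query: mutation,
      variables: {
        input: {
          teamId,                                   // cached for 24 hours by the workflow
          title: "Add CSV export to reports",       // from the Classifier
          description: "Requested in #product-feedback — source: <Slack permalink>",
          priority: 2,
          labelIds: [labelId],                      // product area label, also cached
        },
      },
    }),
  });
  return res.json();
}
```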

Step 4: Notifier

Tier: HTTP

This is the final deliverable — what lands in your inbox or dashboard. Adds a ✅ reaction to the original Slack message and posts a thread reply with the Linear issue link (e.g., "Feature request captured → ENG-142"). The thread reply confirms triage happened — no requests silently disappear.
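
The ✅ reaction and thread reply map naturally onto two standard Slack Web API calls — reactions.add and chat.postMessage with thread_ts. A minimal sketch follows; whether the workflow's HTTP nodes call them exactly this way is an implementation detail of the bundle, and the token, channel, and timestamp values here are placeholders.

```typescript
// Sketch of the Notifier's two Slack Web API calls. All values are placeholders.
async function confirmCapture(token: string, channel: string, ts: string, issueUrl: string, issueId: string) {
  const headers = {
    "Content-Type": "application/json; charset=utf-8",
    Authorization: `Bearer ${token}`,
  };

  // 1. Add the ✅ reaction to the original message
  await fetch("https://slack.com/api/reactions.add", {
    method: "POST",
    headers,
    body: JSON.stringify({ channel, timestamp: ts, name: "white_check_mark" }),
  });

  // 2. Post a thread reply with the Linear issue link
  await fetch("https://slack.com/api/chat.postMessage", {
    method: "POST",
    headers,
    body: JSON.stringify({
      channel,
      thread_ts: ts,
      text: `Feature request captured → <${issueUrl}|${issueId}>`,
    }),
  });
}
```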

The entire pipeline executes without manual intervention. From trigger to output, every decision point follows a documented path. Every execution produces a traceable audit trail.

All nodes have been validated during Independent Test Protocol (ITP) testing on n8n v2.7.5. The error handling matrix in the bundle documents the recovery path for each failure mode.

INFO

This blueprint executes in your own n8n environment using your own API credentials. Zero external data sharing.

Why we designed it this way

n8n cannot run a cron trigger and a webhook response in the same workflow. A single workflow file cannot have both a Schedule Trigger and a Webhook node as entry points. Every scheduled blueprint ships as two workflow files: one for the scheduled execution, one for the webhook callback.

— ForgeWorkflows Engineering

Cost Breakdown

Every metric is ITP-measured. The Feature Request Extractor classifies Slack messages in real time and creates Linear issues at $0.029/message blended, with a single call to the primary reasoning model per message.

The primary operating cost for Feature Request Extractor is the per-execution LLM inference cost. Based on Independent Test Protocol (ITP) testing, the measured cost per message is $0.046 per feature request detected, $0.023 per non-feature-request early exit, and $0.029 blended. This figure includes all API calls across all agents in the pipeline — not just the primary reasoning step, but every classification, scoring, and output generation call.
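
Read as a simple weighted average of the two per-message costs, the blended figure implies that roughly one in four fixture messages was classified as a feature request: 0.26 × $0.046 + 0.74 × $0.023 ≈ $0.029.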

To put this in context, consider the manual alternative. A skilled team member performing the same work manually costs $50–75/hour for an operations analyst at a fully loaded rate (salary, benefits, tools, overhead). If the manual version of this workflow takes 30–60 minutes per cycle, the per-execution cost in human labor works out to roughly $25–75. The blueprint executes the same pipeline for a fraction of that cost, with consistent quality and zero fatigue degradation.

Infrastructure costs are separate from per-execution LLM costs. You will need an n8n instance (self-hosted or cloud) and active accounts for the integrated services. The estimated monthly infrastructure cost is $3–5/month, depending on your usage volume and plan tiers.

Quality assurance: the Blueprint Quality Standard (BQS) audit result is 12/12 PASS. The ITP result is 20/20 records processed, 14/14 milestones passed, and classifier consistency of [0.97, 0.97, 0.97] (variance = 0). These are not marketing claims — they are test results from structured inspection protocols that you can review in the product documentation.

All cost and performance figures are ITP-measured — tested against real data fixtures on n8n v2.7.5 in March 2026. See the product page for full test methodology.

TIP

Monthly projection: if you run this blueprint 100 times per month, multiply the per-execution cost by 100 and add your infrastructure costs. Most teams find the total is less than one hour of manual labor per month.
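
As a worked example using the ITP-measured blended rate: 100 executions × $0.029 ≈ $2.90 in LLM spend, plus the $3–5/month infrastructure estimate, comes to roughly $6–8/month all-in.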

What's in the Bundle

9 files — workflow, classifier system prompt, JSON schemas, and complete documentation.

When you purchase Feature Request Extractor, you receive a complete deployment bundle. This is not a SaaS subscription or a hosted service — it is a set of files that you own and run on your own infrastructure. Here is what is included:

  • README.md — Setup and configuration guide
  • classifier_prompt.md — Classifier system prompt
  • error_handling_matrix.md — Error handling reference
  • feature_request_extractor_v1.json — n8n workflow (main pipeline)
  • schema_classification_result.json — Classification result schema
  • schema_extracted_message.json — Extracted message schema
  • schema_issue_input.json — Issue input schema
  • schema_issue_result.json — Issue result schema
  • schema_webhook_response.json — Webhook response schema

Start with the README.md. It walks through the deployment process step by step, from importing the workflow JSON into n8n to configuring credentials and running your first test execution. The dependency matrix lists every required service, API key, and estimated cost so you know exactly what you need before you start.

Every file in the bundle is designed to be read, understood, and modified. There is no obfuscated code, no compiled binaries, and no phone-home telemetry. You get the source, you own the source, and you control the execution environment.

Who This Is For

Feature Request Extractor is built for product and engineering teams that need to automate a specific workflow without building from scratch. If your team matches the following profile, this blueprint is designed for you:

  • You operate in a product or engineering function and handle the workflow this blueprint automates on a recurring basis
  • You have (or are willing to set up) an n8n instance — self-hosted or cloud
  • You have active accounts for the required integrations: Slack workspace, Linear account
  • You have API credentials available: Anthropic API, Slack Events API, Linear API
  • You are comfortable importing a workflow JSON and configuring API keys (the README guides you, but basic technical comfort is expected)

This is NOT for you if:

  • You need deduplication of feature requests across messages — each message is classified independently
  • You need to monitor DMs, threads, or multiple channels — this workflow watches a single configured channel only
  • You need to update existing Linear issues — this workflow creates new issues only
  • You need integration with Jira, Asana, or other project tools — Linear only

Review the dependency matrix and prerequisites before purchasing. If you are unsure whether your environment meets the requirements, contact support@forgeworkflows.com before buying.

NOTE

All sales are final after download. Review the full dependency matrix, prerequisites, and integration requirements on the product page before purchasing. Questions? Contact support@forgeworkflows.com.

Edge cases to know about

Every pipeline has boundaries. These are intentional design decisions, not oversights — understanding them helps you deploy with the right expectations and plan for edge cases in your environment.

Does not deduplicate feature requests across messages — each message is classified independently

This is intentional. Deduplication is a Linear-side concern — search existing issues before creating a new one, or catch duplicates during triage. Duplicate detection across Slack messages is planned for v1.1; fork the workflow JSON if you need it sooner.

Does not monitor DMs, threads, or multiple channels — single configured channel only

We scoped this boundary after ITP testing revealed inconsistent results when the pipeline attempted broader coverage. The pipeline handles the single-channel case well — extending beyond this scope requires custom prompt engineering specific to your data shape.

Does not update existing Linear issues — creates new issues only

This keeps the pipeline focused on a single workflow. Adding this capability would introduce branching logic that varies by organization, and the tradeoff between complexity and reliability was not worth it for a reusable blueprint. Fork the workflow JSON if your use case demands it.

INFO

The dead letter queue captures any records that fail processing. Check it after your first production run to validate data coverage.

Getting Started

Deployment follows a structured sequence. The Feature Request Extractor bundle is designed for the following tools: n8n, Anthropic API, Slack, Linear. Here is the recommended deployment path:

  1. Step 1: Import workflow and configure credentials. Import feature_request_extractor_v1.json into n8n. Configure Slack Bot Token (HTTP Header Auth), Anthropic API key, and Linear Personal API key (HTTP Header Auth, Authorization: Bearer lin_api_...).
  2. Step 2: Set up Slack Events API and configure taxonomy. Create a Slack app with Events API subscription for message.channels. Point the webhook URL to your n8n workflow. Configure your product area taxonomy and confidence threshold in the workflow nodes.
  3. Step 3: Activate and verify. Enable the workflow in n8n. Post a test feature request message in the configured Slack channel. Verify that a Linear issue is created with correct fields and a ✅ reaction + thread reply appear on the Slack message.

Before running the pipeline on live data, execute a manual test run with sample input. This validates that all credentials are configured correctly, all API endpoints are reachable, and the output format matches your expectations. The README includes test data examples for this purpose.
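
One way to run that dry test without posting in Slack is to POST a representative message event directly to the workflow's webhook URL. The sketch below is illustrative only: the URL path is a placeholder for whatever webhook URL n8n generates for your workflow, and the payload is a trimmed-down event_callback envelope rather than Slack's full request.

```typescript
// Sketch: post a sample Slack-style message event to the n8n webhook for a dry run.
// The URL path is a placeholder; the payload is a simplified event_callback envelope.
const WEBHOOK_URL = "https://your-n8n-host/webhook/feature-request-extractor"; // placeholder

await fetch(WEBHOOK_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    type: "event_callback",
    event: {
      type: "message",
      channel: "C0123456789",
      user: "U0123456789",
      ts: "1710000000.000100",
      text: "It would be great if we could export the usage report as CSV for our quarterly reviews.",
    },
  }),
});
```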

Once the test run passes, you can configure the trigger for production use (scheduled, webhook, or event-driven — depending on the blueprint design). Monitor the first few production runs to confirm the pipeline handles real-world data as expected, then let it run.

For technical background on how ForgeWorkflows blueprints are built and tested, see the Blueprint Quality Standard (BQS) methodology and the Inspection and Test Plan (ITP) framework. These documents describe the quality gates every blueprint passes before listing.

Ready to deploy? View the Feature Request Extractor product page for full specifications, pricing, and purchase.

TIP

Run a manual test with sample data before switching to production triggers. This catches credential misconfigurations and API endpoint issues before they affect real workflows.

Frequently Asked Questions

How does it differ from Support Pattern Analyzer?

Distinct products with zero overlap. SPA runs weekly, pulls Freshdesk tickets, clusters support patterns, and delivers a digest to Notion + Slack for CS teams. FRE monitors Slack in real time, classifies individual messages as feature requests, and creates Linear issues for PM/product teams. Different sources, cadences, outputs, and buyers.

What counts as a feature request?

The Classifier evaluates each message for intent — explicit asks ("we need X"), implicit needs ("it would be great if..."), and enhancement suggestions. Bot messages, short messages, and general discussion are filtered before the LLM call. The confidence threshold (default 0.7) controls the boundary.

How is priority assigned?

The Classifier assigns priority 1–4 based on message content: urgency language, customer tier indicators, scope of the request, and alignment with common product patterns. Priority maps directly to Linear issue priority levels.

How does product area classification work?

You configure your product area taxonomy in the workflow — a list of areas with descriptions. The Classifier maps each feature request to the closest matching area. The guide includes example taxonomies for SaaS B2B, dev tools, and consumer apps.

How much does each message cost?

ITP-measured: $0.046 per feature request detected (full classification + issue creation), $0.023 per non-feature-request (early exit after classification). Blended average across all messages: $0.029. Bot messages and short messages are filtered before the LLM — $0.00 cost. The ITP test results in the bundle show measured performance across edge cases, not just happy-path data.

What if the same feature request is posted twice?

Each message is classified independently. The workflow does not deduplicate across messages — that is a Linear-side concern (search before creating). Duplicate detection across Slack messages is planned for v1.1.

Does caching persist across restarts?

Team/label ID caching uses n8n workflow static data with a 24-hour TTL. Static data resets when the workflow is deactivated/reactivated or when n8n restarts. After a restart, the first run re-fetches team and label IDs from Linear (one extra API call).
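
For context, n8n Code nodes expose workflow static data through $getWorkflowStaticData. A TTL cache along the lines of the sketch below is a reasonable mental model for what the workflow does — the property names are illustrative, not the workflow's actual keys, and fetchLinearIds is a hypothetical helper standing in for the GraphQL lookups.

```typescript
// Illustrative 24-hour TTL cache using n8n workflow static data (Code node context).
// Property names are examples; fetchLinearIds is a hypothetical helper.
const staticData = $getWorkflowStaticData("global");
const DAY_MS = 24 * 60 * 60 * 1000;

let cache = staticData.linearIds;

if (!cache || Date.now() - cache.fetchedAt > DAY_MS) {
  // Cache miss or expired: re-fetch team and label IDs from Linear
  // (one extra API call), then store them with a fresh timestamp.
  const fresh = await fetchLinearIds();
  cache = { ...fresh, fetchedAt: Date.now() };
  staticData.linearIds = cache;
}

return [{ json: cache }];
```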

Is there a refund policy?

All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.

What should I do if the pipeline dead-letters a record?

Check the dead letter output for the failure reason — the error context includes which agent failed and why. Common causes: missing input fields, API rate limits, or malformed data. Fix the underlying issue and reprocess. The error handling matrix in the bundle documents every failure mode and its recovery path.

Get Feature Request Extractor

$199

View Blueprint
