How Linear Engineering Velocity Reporter Automates Engineering...
The Problem
Weekly AI-generated engineering velocity report from your Linear data — measures throughput trends, identifies bottlenecks, tracks contribution distribution, monitors bug/feature ratio, and flags cycle time outliers. That single sentence captures a workflow gap that costs engineering leadership teams hours every week. The manual process behind what Linear Engineering Velocity Reporter automates is familiar to anyone who has run an engineering organization: someone pulls data from Linear, copies it into a spreadsheet or slide deck, applies a mental checklist, writes a summary, and routes it to the next person in the chain. Repeat for every team. Every week.
Three problems make this unsustainable at scale. First, the process does not scale. As volume grows, the human bottleneck becomes the constraint. Whether it is issue triage, sprint retrospectives, or status reporting, a person can only process a finite number of records before quality degrades. Second, the process is inconsistent. Different team members apply different criteria, use different formats, and make different judgment calls. There is no single standard of quality, and the output varies from person to person and week to week. Third, the process is slow. By the time a manual review is complete, the window for action may have already closed. Sprints end, priorities shift, and the moment to intervene passes.
These are not theoretical concerns. They are the operational reality for engineering leadership teams handling engineering intelligence workflows. Every hour spent on manual data processing is an hour not spent on the work that actually moves the needle: shipping features, unblocking engineers, and driving strategy.
This is the gap Linear Engineering Velocity Reporter fills.
Teams typically spend 30-60 minutes per cycle on the manual version of this workflow. Linear Engineering Velocity Reporter reduces that to seconds per execution, with consistent output quality every time.
What This Blueprint Does
Four Agents. Weekly Velocity Report. Dual-Audience Insights.
Linear Engineering Velocity Reporter is a 24-node n8n workflow (plus a 3-node scheduler) with 4 specialized agents. Each agent handles a distinct phase of the pipeline, and the handoff between agents is deterministic — no ambiguous routing, no dropped records. The blueprint is designed so that each agent does one thing well, and the overall pipeline produces a consistent, auditable output on every run.
Here is what each agent does:
- The Fetcher (Code-only): Queries the Linear GraphQL API for completed issues across the lookback window — assignees, estimates, labels, state transitions, and completion timestamps.
- The Assembler (Code-only): Computes 5 Velocity Health Dimensions: throughput trend (week-over-week vs 4-week baseline), bottleneck identification (state accumulation analysis), contribution distribution (Gini coefficient), bug/feature ratio (label classification), and cycle time health (median, P90, outliers).
- The Analyst (Classification): Scores each dimension 1-10 (VHS), classifies overall health (HEALTHY/CONCERNING/CRITICAL), produces dual-audience output: executive summary for eng leadership + engineering recommendations with specific actions.
- The Formatter (Creative): Generates a Notion weekly velocity report with dimension breakdowns, trend data, and recommendations, plus a Slack Block Kit digest for the eng-leadership channel with VHS scores, executive summary, and top 3 actions.
When the pipeline completes, you get structured output that is ready to act on. The blueprint bundle includes everything needed to deploy, configure, and customize the workflow. Specifically, you receive:
- 24-node main workflow + 3-node scheduler
- Weekly engineering velocity report from Linear completed issue data
- 5-dimension Velocity Health Score (VHS): throughput trend, bottleneck identification, contribution distribution, bug/feature ratio, cycle time health
- VHS 1-10 per dimension with overall health classification (HEALTHY/CONCERNING/CRITICAL)
- Dual-audience output: executive summary for leadership + engineering recommendations for the team
- Throughput decline >20% triggers URGENT alert with root cause analysis
- Per-week throughput trend with 4-week rolling baseline comparison
- Contribution distribution via Gini coefficient — detects bus factor risk
- Bug/feature ratio monitoring with label-based classification
- Cycle time analysis with median, P90, and outlier detection (>3x median)
- Notion velocity report with dimension breakdowns and trend charts
- Slack Block Kit digest with VHS scores and top 3 recommendations
- Configurable: team IDs, velocity metric (issues/points), baseline weeks, alert threshold
- Full technical documentation + system prompts
Every component is designed to be modified. The agent prompts are plain text files you can edit. The workflow nodes can be rearranged or extended. The scoring criteria, output formats, and routing logic are all exposed as configurable parameters — not buried in application code. This means Linear Engineering Velocity Reporter adapts to your specific process, terminology, and integration requirements without forking the entire workflow.
Every agent prompt in the bundle is a standalone text file. You can customize scoring criteria, output formats, and routing logic without modifying the workflow JSON itself.
How the Pipeline Works
Understanding how the pipeline works helps you customize it for your environment and troubleshoot issues when they arise. Here is a step-by-step walkthrough of the Linear Engineering Velocity Reporter execution flow.
Step 1: The Fetcher
Tier: Code-only
Queries the Linear GraphQL API for completed issues across the lookback window — assignees, estimates, labels, state transitions, and completion timestamps. Groups issues by week for trend analysis.
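As a sketch of the week-grouping step, assuming issues arrive as objects with a `completedAt` ISO timestamp (a plausible shape for Linear issue data; the bundle's actual field handling may differ):

```javascript
// Bucket completed issues by the Monday that starts their ISO week.
// Field names here are assumptions for illustration.
function weekStart(dateStr) {
  const d = new Date(dateStr);
  const day = (d.getUTCDay() + 6) % 7;   // 0 = Monday, 6 = Sunday
  d.setUTCDate(d.getUTCDate() - day);    // rewind to Monday
  return d.toISOString().slice(0, 10);   // e.g. "2024-05-13"
}

function groupByWeek(issues) {
  const weeks = {};
  for (const issue of issues) {
    if (!issue.completedAt) continue;    // skip issues missing a timestamp
    const key = weekStart(issue.completedAt);
    (weeks[key] = weeks[key] || []).push(issue);
  }
  return weeks;
}
```

Keying each bucket by its Monday date gives downstream stages a stable, sortable identifier for each week.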
This stage is critical because it ensures that downstream agents receive structured, validated input. Each agent in the pipeline trusts the output contract of the previous agent. If The Fetcher identifies an issue — a missing field, a low-confidence score, or an unexpected input format — the pipeline handles it explicitly rather than passing garbage downstream. This is the difference between a prototype and a production-grade workflow: every handoff is defined, every edge case is documented.
Step 2: The Assembler
Tier: Code-only
Computes 5 Velocity Health Dimensions: throughput trend (week-over-week vs 4-week baseline), bottleneck identification (state accumulation analysis), contribution distribution (Gini coefficient), bug/feature ratio (label classification), and cycle time health (median, P90, outliers).
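The throughput-trend arithmetic described above, the latest week measured against a 4-week rolling baseline, can be sketched as follows (illustrative only, not the bundle's actual code):

```javascript
// Fractional change of the latest week vs. the rolling baseline average.
// `weeklyCounts` is ordered oldest-to-newest (issue counts or story points).
function throughputTrend(weeklyCounts, baselineWeeks = 4) {
  const current = weeklyCounts[weeklyCounts.length - 1];
  const baseline = weeklyCounts.slice(-baselineWeeks - 1, -1);
  const avg = baseline.reduce((a, b) => a + b, 0) / baseline.length;
  return (current - avg) / avg;          // -0.25 means a 25% decline
}

// A decline beyond the configured threshold (default 20%) is flagged URGENT.
function isUrgentDecline(trend, threshold = 0.2) {
  return trend < -threshold;
}
```

For example, `throughputTrend([10, 10, 10, 10, 8])` is -0.2: the team completed 8 items against a baseline average of 10.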
Step 3: The Analyst
Tier: Classification
Scores each dimension 1-10 (VHS), classifies overall health (HEALTHY/CONCERNING/CRITICAL), produces dual-audience output: executive summary for eng leadership + engineering recommendations with specific actions. Flags throughput decline >20% as URGENT alert.
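The mapping from per-dimension scores to an overall label might look like the sketch below. The cutoffs are assumptions for illustration; the bundle's documented criteria live in the Analyst system prompt.

```javascript
// Overall health from the five per-dimension VHS scores (each 1-10).
// Thresholds here are illustrative, not the bundle's documented values.
function classifyHealth(scores) {
  const avg = scores.reduce((a, b) => a + b, 0) / scores.length;
  const worst = Math.min(...scores);
  if (worst <= 3 || avg < 5) return "CRITICAL";     // any dimension failing badly
  if (worst <= 5 || avg < 7) return "CONCERNING";   // a weak spot worth watching
  return "HEALTHY";
}
```

Using the worst dimension as well as the average prevents one failing dimension from being averaged away by four strong ones.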
Step 4: The Formatter
Tier: Creative
Generates a Notion weekly velocity report with dimension breakdowns, trend data, and recommendations, plus a Slack Block Kit digest for the eng-leadership channel with VHS scores, executive summary, and top 3 actions.
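A minimal sketch of a Block Kit payload in the shape the Formatter produces (field contents are illustrative; the bundle's actual layout is defined by the Formatter system prompt):

```javascript
// Build a Slack Block Kit digest: header, VHS scores, top actions.
// Structure only; the real digest's wording and sections may differ.
function buildDigest(vhs, overall, actions) {
  return {
    blocks: [
      { type: "header", text: { type: "plain_text", text: `Velocity Report: ${overall}` } },
      {
        type: "section",
        text: {
          type: "mrkdwn",
          text: Object.entries(vhs).map(([k, v]) => `*${k}*: ${v}/10`).join("\n"),
        },
      },
      {
        type: "section",
        text: { type: "mrkdwn", text: actions.map((a, i) => `${i + 1}. ${a}`).join("\n") },
      },
    ],
  };
}
```

The returned object is what gets POSTed to `chat.postMessage` as the `blocks` payload.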
The entire pipeline executes without manual intervention. From trigger to output, every decision point is deterministic: if a condition is met, the next agent fires; if not, the record is handled according to a documented fallback path. There are no silent failures. Every execution produces a traceable audit trail that you can review, export, or feed into your own reporting tools.
This architecture follows the ForgeWorkflows principle of tested, measured, documented automation. Every node in the pipeline has been validated during ITP (Inspection and Test Plan) testing, and the error handling matrix in the bundle documents the recovery path for each failure mode.
Tier references indicate the reasoning complexity assigned to each agent. Code-only tiers run deterministic code with no LLM call, while Classification and Creative tiers use models matched to the judgment their tasks require. This tiered approach optimizes both quality and cost.
Cost Breakdown
Weekly 5-dimension engineering velocity analysis with dual-audience reporting (exec summary + eng recommendations) and dual-channel delivery (Notion velocity report + Slack digest with VHS scores).
The primary operating cost for Linear Engineering Velocity Reporter is the per-execution LLM inference cost. Based on ITP testing, the measured cost is: Cost per Run: ~$0.03-$0.10/run (weekly aggregate). This figure includes all API calls across all agents in the pipeline — not just the primary reasoning step, but every classification, scoring, and output generation call.
To put this in context, consider the manual alternative. A skilled team member performing the same work manually costs $50–75/hour at a fully loaded rate (salary, benefits, tools, overhead). If the manual version of this workflow takes 20–40 minutes per cycle, that is $17–50 per execution in human labor. The blueprint executes the same pipeline for a fraction of that cost, with consistent quality and zero fatigue degradation.
Infrastructure costs are separate from per-execution LLM costs. You will need an n8n instance (self-hosted or cloud) and active accounts for the integrated services. Beyond those, expect roughly $0.03-0.10 in LLM cost per weekly run plus your Linear subscription, with the total depending on your usage volume and plan tiers.
Quality assurance: BQS audit result is 12/12. ITP result is 8/8 variations, 14/14 milestones. These are not marketing claims — they are test results from structured inspection protocols that you can review in the product documentation.
Monthly projection: if you run this blueprint 100 times per month, multiply the per-execution cost by 100 and add your infrastructure costs. Most teams find the total is less than one hour of manual labor per month.
What's in the Bundle
7 files.
When you purchase Linear Engineering Velocity Reporter, you receive a complete deployment bundle. This is not a SaaS subscription or a hosted service — it is a set of files that you own and run on your own infrastructure. Here is what is included:
- linear_engineering_velocity_reporter_v1_0_0.json — Main workflow (24 nodes)
- levr_scheduler_v1_0_0.json — Scheduler workflow (3 nodes)
- README.md — 10-minute setup guide
- docs/TDD.md — Technical Design Document
- system_prompts/analyst_system_prompt.md — Analyst prompt reference
- system_prompts/formatter_system_prompt.md — Formatter prompt reference
- CHANGELOG.md — Version history
Start with the README.md. It walks through the deployment process step by step, from importing the workflow JSON into n8n to configuring credentials and running your first test execution. The dependency matrix lists every required service, API key, and estimated cost so you know exactly what you need before you start.
Every file in the bundle is designed to be read, understood, and modified. There is no obfuscated code, no compiled binaries, and no phone-home telemetry. You get the source, you own the source, and you control the execution environment.
Who This Is For
Linear Engineering Velocity Reporter is built for engineering leadership teams that need to automate a specific workflow without building from scratch. If your team matches the following profile, this blueprint is designed for you:
- You operate in an engineering or leadership function and handle the workflow this blueprint automates on a recurring basis
- You have (or are willing to set up) an n8n instance — self-hosted or cloud
- You have active accounts for the required integrations: Linear account with API access, Anthropic API key, Notion workspace, Slack workspace (Bot Token with chat:write)
- You have API credentials available: Anthropic API, Linear (httpHeaderAuth, Bearer prefix), Notion (httpHeaderAuth Bearer), Slack (Bot Token, httpHeaderAuth Bearer)
- You are comfortable importing a workflow JSON and configuring API keys (the README guides you, but basic technical comfort is expected)
What this blueprint does NOT do:
- Does not create Linear issues — use Feature Request Extractor (#20) for Slack-to-Linear issue creation
- Does not predict sprint risk — use Linear Sprint Risk Analyzer (#46) for forward-looking per-sprint risk analysis
- Does not audit backlog health — use Linear Backlog Grooming Intelligence (#53) for staleness and priority distribution audits
- Does not modify Linear data — this is a read-only analysis tool
- Does not work with non-Linear project management tools — this is Linear-specific
- Does not track individual developer performance — this is team-level velocity analysis
All sales are final after download, so review the full dependency matrix, prerequisites, and integration requirements on the product page before purchasing. If you are unsure whether your environment meets the requirements, contact support@forgeworkflows.com before buying.
Getting Started
Deployment follows a structured sequence. The Linear Engineering Velocity Reporter bundle is designed for the following tools: n8n, Anthropic API, Linear, Notion, Slack. Here is the recommended deployment path:
- Step 1: Import workflows and configure credentials. Import both workflow JSON files into n8n (main + scheduler). Configure Linear API key (httpHeaderAuth with Bearer prefix), Notion API token (httpHeaderAuth with Bearer prefix), Slack Bot Token (httpHeaderAuth with Bearer prefix, chat:write scope), and Anthropic API key following the README.
- Step 2: Configure velocity parameters. Set LINEAR_TEAM_IDS (team IDs to analyze), VELOCITY_METRIC (issues or points), BASELINE_WEEKS (default 4), DECLINE_ALERT_THRESHOLD (default 0.2 for 20%), NOTION_DATABASE_ID, and SLACK_CHANNEL in the scheduler Payload Builder node.
- Step 3: Activate scheduler and verify. Update the webhook URL in the scheduler Trigger Main Workflow node to match your main workflow webhook path. Activate both workflows. Send a test POST with _is_itp: true and sample velocity data. Verify the velocity report appears in Notion and the digest with VHS scores appears in Slack.
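A test invocation for Step 3 might post a body like the one below. The `_is_itp` flag, parameter names, and defaults come from the steps above; the team ID is a hypothetical placeholder, and the authoritative payload schema is in the README.

```javascript
// Illustrative webhook test payload; "TEAM-abc123" is a made-up placeholder.
const testPayload = {
  _is_itp: true,                          // marks the run as an ITP test
  LINEAR_TEAM_IDS: ["TEAM-abc123"],       // hypothetical team ID
  VELOCITY_METRIC: "issues",              // or "points"
  BASELINE_WEEKS: 4,
  DECLINE_ALERT_THRESHOLD: 0.2,           // 20% decline triggers URGENT
};
const body = JSON.stringify(testPayload);
```

POST `body` to the main workflow's webhook URL, then check Notion and Slack for the test outputs.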
Before running the pipeline on live data, execute a manual test run with sample input. This validates that all credentials are configured correctly, all API endpoints are reachable, and the output format matches your expectations. The README includes test data examples for this purpose.
Once the test run passes, you can configure the trigger for production use (scheduled, webhook, or event-driven — depending on the blueprint design). Monitor the first few production runs to confirm the pipeline handles real-world data as expected, then let it run.
For technical background on how ForgeWorkflows blueprints are built and tested, see the Blueprint Quality Standard (BQS) methodology and the Inspection and Test Plan (ITP) framework. These documents describe the quality gates every blueprint passes before listing.
Ready to deploy? View the Linear Engineering Velocity Reporter product page for full specifications, pricing, and purchase.
Run a manual test with sample data before switching to production triggers. This catches credential misconfigurations and API endpoint issues before they affect real workflows.
Frequently Asked Questions
What are the 5 Velocity Health Dimensions?
Throughput trend (completion rate vs 4-week baseline), bottleneck identification (states where issues accumulate), contribution distribution (Gini coefficient measuring work balance), bug/feature ratio (bug-labeled vs feature-labeled issues), and cycle time health (median created-to-completed time plus outlier detection). Each scored 1-10.
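The cycle-time portion of that answer can be sketched like this (a simplified percentile method; the bundle's exact math may differ):

```javascript
// Median, P90, and outlier detection (>3x median) over cycle times in hours.
// Uses a simple index-based percentile; a sketch, not the bundle's code.
function cycleTimeStats(hours) {
  const sorted = [...hours].sort((a, b) => a - b);
  const pick = (p) => sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
  const median = pick(0.5);
  return {
    median,
    p90: pick(0.9),
    outliers: hours.filter((h) => h > 3 * median),   // flagged as cycle time outliers
  };
}
```

A single 100-hour issue in a team whose median is 8 hours shows up in `outliers` without distorting the median itself.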
When does it trigger an alert?
When throughput (issues or story points completed) declines more than 20% compared to the 4-week rolling average. The threshold is configurable via DECLINE_ALERT_THRESHOLD. The alert includes root cause analysis and recommended actions in the Slack digest.
How does it differ from Linear Sprint Risk Analyzer?
LSRA (#46) analyzes the ACTIVE sprint cycle — predicting risk for the current sprint (forward-looking). This product measures historical velocity trends across multiple weeks — tracking team throughput, bottlenecks, and health over time (retrospective). LSRA is per-sprint; LEVR is the retrospective executive engineering report.
Can I track story points instead of issue count?
Yes. Set velocity_metric to "points" in the scheduler Payload Builder. The system will use story point estimates instead of issue count for throughput calculations, baseline comparisons, and contribution distribution.
What is the Gini coefficient?
A measure of inequality in work distribution. 0.0 means perfectly equal distribution across all contributors. 1.0 means a single person did everything. Above 0.45 signals bus factor risk — the team depends too heavily on one contributor. The Analyst scores this dimension and recommends distribution improvements.
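For reference, one common formula for the Gini coefficient over per-contributor issue counts (a sketch; the bundle computes this in the Assembler):

```javascript
// Gini coefficient: mean absolute difference between all pairs of
// contributor counts, normalized by twice the mean. 0 = perfectly equal;
// approaches 1 as one contributor dominates (for n contributors, the
// one-person-does-everything case yields (n - 1) / n).
function gini(counts) {
  const n = counts.length;
  const mean = counts.reduce((a, b) => a + b, 0) / n;
  if (mean === 0) return 0;               // no completed work at all
  let diffSum = 0;
  for (const a of counts) for (const b of counts) diffSum += Math.abs(a - b);
  return diffSum / (2 * n * n * mean);
}
```

With four contributors, `gini([5, 5, 5, 5])` is 0, while `gini([10, 0, 0, 0])` is 0.75, well above the 0.45 bus-factor threshold.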
Does it use web scraping?
No. All data comes from the Linear GraphQL API. No web scraping, no page parsing.
Is there a refund policy?
All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.
Related Blueprints
Linear Sprint Risk Analyzer
AI-powered weekly sprint risk analysis that scores velocity deviation, blocked chains, scope creep, and concentration risk from your Linear data — delivered as a Notion brief and Slack digest every Monday.
Linear Backlog Grooming Intelligence v1.0.0
Weekly AI backlog grooming intelligence that scores staleness, orphaned issues, duplicate clusters, blocked chains, and estimate gaps across your Linear backlog.
Feature Request Extractor
Every feature request in Slack becomes a structured Linear issue. Automatically.