Linear Backlog Grooming Intelligence v1.0.0
Weekly AI backlog grooming intelligence that scores staleness, orphaned issues, duplicate clusters, blocked chains, and estimate gaps across your Linear backlog. Delivers grooming briefs to Notion and priority digests to Slack.
Four Agents. Five Health Dimensions. Notion + Slack Delivery.
Step 1 — Fetcher
Code
Queries the Linear GraphQL API for all open issues in your team. Extracts issue metadata: state, assignee, estimate, labels, project, cycle, parent, and blocking relations. Paginated in pages of up to 250 issues.
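The Fetcher's paginated query can be sketched as a plain GraphQL request body. The field names follow Linear's public schema, but `OPEN_ISSUES_QUERY` and `buildIssuePage` are illustrative names, and the blocking-relations fields are omitted for brevity; verify the exact fields against your workspace.

```javascript
// Sketch of the Fetcher's paginated Linear GraphQL request body.
// Field names follow Linear's public schema; this is illustrative,
// not the blueprint's exact query.
const OPEN_ISSUES_QUERY = `
  query OpenIssues($teamId: String!, $after: String) {
    team(id: $teamId) {
      issues(first: 250, after: $after) {
        nodes {
          identifier
          title
          updatedAt
          estimate
          state { name type }
          assignee { name }
          labels { nodes { name } }
          project { name }
          cycle { number }
          parent { identifier }
        }
        pageInfo { hasNextPage endCursor }
      }
    }
  }`;

// Builds the POST body for https://api.linear.app/graphql; in n8n, an
// HTTP Request node would send this with the httpHeaderAuth credential.
function buildIssuePage(teamId, after = null) {
  return JSON.stringify({
    query: OPEN_ISSUES_QUERY,
    variables: { teamId, after },
  });
}
```

The `pageInfo.endCursor` value feeds back in as `after` until `hasNextPage` is false.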
Step 2 — Assembler
Code
Computes 5 Backlog Health Dimensions: staleness distribution (age vs threshold), orphan density (no project/cycle/labels), duplicate clusters (Jaccard title similarity), blocked chain depth (max dependency chain), estimate coverage gap (missing estimates). All math pre-computed before LLM.
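Three of the Assembler's pre-computed metrics can be sketched as pure functions over the fetched issues. This is a minimal illustration assuming issues shaped like the Fetcher output; `STALE_THRESHOLD_DAYS` mirrors the blueprint's config knob, but the exact formulas the blueprint ships may differ.

```javascript
// Illustrative sketch of three Assembler metrics; all math happens
// here, before any LLM call. STALE_THRESHOLD_DAYS mirrors the
// blueprint's config (default 30).
const STALE_THRESHOLD_DAYS = 30;

function backlogMetrics(issues, now = Date.now()) {
  const staleMs = STALE_THRESHOLD_DAYS * 24 * 60 * 60 * 1000;
  // Staleness distribution: issues not updated within the threshold.
  const stale = issues.filter(i => now - Date.parse(i.updatedAt) > staleMs);
  // Orphan density: no project, no cycle, and no labels.
  const orphans = issues.filter(
    i => !i.project && !i.cycle && (!i.labels || i.labels.length === 0)
  );
  // Estimate coverage gap: fraction of issues missing point estimates.
  const unestimated = issues.filter(i => i.estimate == null);
  const n = issues.length || 1; // avoid division by zero on empty backlogs
  return {
    staleRatio: stale.length / n,
    orphanDensity: orphans.length / n,
    estimateGap: unestimated.length / n,
  };
}
```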
Step 3 — Analyst
Tier 2 Classification
Scores each dimension 1-10 with evidence from pre-computed metrics. Computes Backlog Health Score (BHS) as the average. Classifies health: HEALTHY (8-10), NEEDS_GROOMING (5-7), CRITICAL (1-4). Generates top 5 grooming priorities with specific issue identifiers. Sonnet 4.6.
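The BHS roll-up described above is a simple average plus a band lookup. A minimal sketch, assuming the three bands from the text; how the blueprint rounds fractional averages at band boundaries is an assumption here.

```javascript
// BHS = average of the five 1-10 dimension scores, mapped onto the
// blueprint's bands: HEALTHY (8-10), NEEDS_GROOMING (5-7), CRITICAL (1-4).
// Boundary handling for fractional scores is an assumption.
function backlogHealth(scores) {
  const bhs = scores.reduce((a, b) => a + b, 0) / scores.length;
  const label = bhs >= 8 ? "HEALTHY" : bhs >= 5 ? "NEEDS_GROOMING" : "CRITICAL";
  return { bhs: Math.round(bhs * 10) / 10, label };
}
```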
Step 4 — Formatter
Tier 3 Creative
Generates two outputs: (1) Notion grooming brief with dimension breakdowns, BHS score, top 5 priority actions. (2) Slack Block Kit digest with BHS score, dimension highlights, priority actions, and context footer. Sonnet 4.6.
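The Slack output uses standard Block Kit payload shapes. A hedged sketch of the digest builder: `header`, `section`, and `context` are real Block Kit block types, but the layout the blueprint's Formatter actually generates is richer than this.

```javascript
// Minimal Block Kit digest sketch: BHS header, numbered priority
// actions, and a context footer. Illustrative, not the shipped layout.
function slackDigest(bhs, label, priorities) {
  return {
    blocks: [
      { type: "header",
        text: { type: "plain_text", text: `Backlog Health: ${bhs}/10 (${label})` } },
      { type: "section",
        text: { type: "mrkdwn",
          text: priorities.map((p, i) => `${i + 1}. ${p}`).join("\n") } },
      { type: "context",
        elements: [{ type: "mrkdwn",
          text: "Linear Backlog Grooming Intelligence · weekly run" }] },
    ],
  };
}
```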
What It Does NOT Do
Does not modify Linear issues or backlog — it reads issue data only
Does not replace your product manager — it provides data-driven grooming signals for human decision-making
Does not work with Jira, Asana, or other project tools — Linear GraphQL API only in v1.0
Does not predict future backlog growth — it scores current backlog health from existing issues
Does not guarantee backlog improvement — it identifies grooming opportunities for human follow-up
Does not handle real-time monitoring — it runs weekly aggregate analysis, not per-issue
The Complete Customer Success Bundle
7+ files — workflow JSON (main + scheduler), system prompts, and complete documentation.
Tested. Measured. Documented.
Every metric is ITP-measured. The Linear Backlog Grooming Intelligence blueprint scores five backlog health dimensions, using Sonnet 4.6 for both analysis and formatting at a weekly aggregate cost.
Linear Backlog Grooming Intelligence v1.0.0
────────────────────────────────────────
Nodes     24 + 3 (scheduler)
Agents    4 (Fetcher → Assembler → Analyst → Formatter)
Model     Sonnet 4.6 (SINGLE-MODEL)
Tool A    Linear (httpHeaderAuth)
Tool B    Slack (httpHeaderAuth Bearer)
Tool C    Notion (httpHeaderAuth Bearer)
Trigger   Schedule (Monday 8:00 UTC) + Webhook
Pattern   AGGREGATE — weekly backlog grooming
BQS       12/12 PASS
ITP       8/8 variations · 14/14 milestones
What You'll Need
Platform
n8n 2.7.5+
Est. Monthly API Cost
~$0.03-$0.10 per weekly run ($0.13-$0.43/month)
Credentials Required
- Anthropic API
- Linear API (httpHeaderAuth)
- Slack (Bot Token, httpHeaderAuth Bearer)
- Notion (httpHeaderAuth Bearer)
Services
- Linear account (API key, any plan)
- Slack workspace (Bot Token with chat:write scope)
- Notion workspace (integration token)
- Anthropic API key
Setup Track
Quick Start
~15 min
All credentials live, n8n running
Full Setup
1–2 hrs
Needs API config + tables
From Scratch
2–4 hrs
No n8n, no credentials
Linear Backlog Grooming Intelligence v1.0.0
$199
one-time purchase
What you get:
- ✓ Production-ready 24+3 node n8n workflow — import and deploy
- ✓ Weekly schedule: fires every Monday at 8:00 UTC (customizable)
- ✓ Five backlog health dimensions: staleness distribution, orphan density, duplicate clusters, blocked chain depth, estimate coverage gap
- ✓ BHS (Backlog Health Score) 1-10 with HEALTHY / NEEDS_GROOMING / CRITICAL classification
- ✓ Top 5 grooming priorities with specific issue identifiers
- ✓ Duplicate detection via Jaccard title similarity (configurable threshold)
- ✓ Notion grooming brief with dimension breakdowns and priority actions
- ✓ Slack Block Kit digest with BHS score and highlights
- ✓ Split-workflow pattern: scheduler + main pipeline (both included)
- ✓ SINGLE-MODEL: Sonnet 4.6 for analysis and formatting — no Opus needed
- ✓ AGGREGATE pattern: one Analyst call per weekly run, not per issue
- ✓ ITP 8/8 variations, 14/14 milestones measured
- ✓ All sales final after download
Frequently Asked Questions
What are the five backlog health dimensions?
Staleness Distribution (issues not updated beyond threshold), Orphan Density (issues with no project, cycle, or labels), Duplicate Clusters (groups of similar-title issues), Blocked Chain Depth (longest blocking dependency chain), and Estimate Coverage Gap (fraction of issues missing point estimates).
What is the Backlog Health Score (BHS)?
The BHS is the average of all 5 dimension scores (each scored 1-10). HEALTHY (8-10) means the backlog is well-groomed. NEEDS_GROOMING (5-7) means some areas need attention. CRITICAL (1-4) means significant grooming debt.
How does this differ from Linear Sprint Risk Analyzer?
LSRA (#48) assesses sprint-level risk for active cycles: velocity deviation, scope creep, blocked chains, concentration risk. LBGI analyzes overall backlog grooming hygiene: staleness, orphans, duplicates, estimates. Different lens: LSRA looks at current sprint execution; LBGI looks at long-term backlog health.
How does this differ from Feature Request Extractor?
FRE (#20) creates Linear issues from Slack feature requests. LBGI reads existing Linear issues and scores their grooming health. Different direction: FRE writes to Linear; LBGI reads from Linear.
Can I customize the staleness threshold?
Yes. Set STALE_THRESHOLD_DAYS in the scheduler Payload Prep node. Default is 30 days. Reduce for fast-moving teams, increase for longer-term backlogs.
How does duplicate detection work?
The Assembler computes Jaccard similarity between issue titles. Issues with similarity above DUPLICATE_SIMILARITY_THRESHOLD (default 0.7) are grouped into clusters. The Analyst then recommends which duplicates to merge or close.
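The Jaccard step can be sketched over token sets of titles. The 0.7 default mirrors `DUPLICATE_SIMILARITY_THRESHOLD`; the greedy grouping strategy below is an assumption about the Assembler's clustering, kept simple for illustration.

```javascript
// Jaccard similarity over whitespace tokens of two titles:
// |intersection| / |union| of the token sets.
function jaccard(a, b) {
  const ta = new Set(a.toLowerCase().split(/\s+/));
  const tb = new Set(b.toLowerCase().split(/\s+/));
  const inter = [...ta].filter(t => tb.has(t)).length;
  const union = new Set([...ta, ...tb]).size;
  return union === 0 ? 0 : inter / union;
}

// Greedy clustering sketch (an assumed strategy, not the blueprint's
// exact algorithm): a title joins the first cluster whose seed title
// is similar enough, otherwise starts its own.
function duplicateClusters(titles, threshold = 0.7) {
  const clusters = [];
  for (const title of titles) {
    const home = clusters.find(c => jaccard(c[0], title) >= threshold);
    if (home) home.push(title); else clusters.push([title]);
  }
  return clusters.filter(c => c.length > 1); // keep only true duplicate groups
}
```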
Does it use web scraping?
No. All data comes from the Linear GraphQL API. No web_search or external scraping. Fully deterministic and fast.
Why only Sonnet instead of Opus?
The Fetcher retrieves issue data via Linear GraphQL and the Assembler pre-computes all backlog health metrics. The Analyst receives pre-computed numbers and applies a scoring rubric — classification-tier reasoning that Sonnet 4.6 handles accurately. No deep causal analysis required.
Is there a refund policy?
All sales are final after download. Review the Blueprint Dependency Matrix and prerequisites before purchase. Questions? Contact support@forgeworkflows.com before buying. Full terms at forgeworkflows.com/legal.
Related Blueprints
Linear Sprint Risk Analyzer
AI-powered weekly sprint risk analysis that scores velocity deviation, blocked chains, scope creep, and concentration risk from your Linear data — delivered as a Notion brief and Slack digest every Monday.
Feature Request Extractor
Every feature request in Slack becomes a structured Linear issue. Automatically.
Slack Standup Summarizer
AI-powered daily standup summarizer that extracts progress, commitments, blockers, and dependencies from your Slack standup channel — delivered as a structured Notion page every morning.