We Built Meeting-to-Project Automation (Here's What Broke)
I wanted to eliminate the 30 minutes I spent after every client meeting turning scattered notes into structured project plans. The solution seemed obvious: connect Claude's Model Context Protocol (MCP) directly to Notion, feed it raw meeting transcripts, and watch structured tasks appear automatically.
What actually happened taught me more about API integrations than any documentation ever could.
The Original Vision
We set out to build a workflow that would take messy meeting notes—the kind with half-sentences, unclear action items, and vague deadlines—and transform them into proper Notion project pages. Complete with task databases, assigned owners, and realistic timelines.
The technical stack looked straightforward: Claude's MCP for the AI reasoning, Notion's API for database creation, and custom prompts to handle the transformation logic. According to McKinsey's 2024 State of AI report, 72% of organizations now use AI in at least one business function, up from roughly 50% in prior years. Meeting automation felt like the obvious next step.
The workflow would work like this: paste meeting notes into Claude, trigger the MCP connection to Notion, and receive a formatted project page with tasks, deadlines, and ownership assignments. No manual formatting, no copy-paste between tools, no administrative overhead.
What Actually Happened
The first integration attempt created 47 duplicate tasks in our test Notion workspace.
Claude's MCP was making multiple API calls to Notion for each task creation—one for the task itself, another for the properties, and a third for the relations. Each call succeeded individually, but the timing created race conditions that duplicated entries. We spent two days debugging before realizing the issue wasn't our prompt engineering—it was how we structured the API sequence.
The second major issue emerged during our first real meeting test. Claude correctly identified action items and owners, but it assigned every deadline to "next Friday." The reasoning model had no concept of actual calendar dates. When we fed it notes from a Monday meeting, "next Friday" meant different things depending on when the workflow ran.
Then came the Notion database schema problems. Our initial setup used a single "Tasks" database with properties for project association. But Claude's MCP couldn't create new database views on the fly—it could only populate existing structures. When a meeting covered multiple projects, tasks ended up scattered across unrelated database entries.
I made a similar mistake myself during our first Stripe product creation. The API call included a recurring parameter set to null, on the assumption that a null value was equivalent to omitting the field. It wasn't. Stripe created two prices: a correct one-time payment at $297, and a spurious $297 monthly subscription. We caught it before any customer was charged monthly for a one-time product, but the fix required manually archiving the bad price in the Stripe Dashboard. Now our factory pipeline never includes the recurring field at all: not null, not false, just absent.
The Technical Solution That Worked
We rebuilt the integration around batch operations instead of individual API calls. Instead of creating tasks one by one, Claude now generates a complete JSON payload for the entire project structure, then makes a single bulk creation request to Notion.
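In practice, the bulk request can be a single Notion page creation whose children array carries every task block at once. A sketch of the payload builder, assuming a database whose title property is named "Name" (the function name and property names reflect our schema, not Notion requirements):

```python
def build_project_payload(database_id: str, title: str,
                          tasks: list[str]) -> dict:
    """Assemble one Notion page-creation payload carrying the project
    title and every task as a child to-do block, replacing N racing
    API calls with a single request."""
    return {
        "parent": {"database_id": database_id},
        "properties": {
            # Assumes the database's title property is named "Name".
            "Name": {"title": [{"text": {"content": title}}]},
        },
        "children": [
            {
                "object": "block",
                "type": "to_do",
                "to_do": {
                    "rich_text": [{"text": {"content": task}}],
                    "checked": False,
                },
            }
            for task in tasks
        ],
    }
```

Because every task travels inside one request, there is no window for the race condition that produced the 47 duplicates.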
Here's the working MCP configuration:
```json
{
  "mcpServers": {
    "notion": {
      "command": "npx",
      "args": ["@modelcontextprotocol/server-notion"],
      "env": {
        "NOTION_API_KEY": "your_integration_token"
      }
    }
  }
}
```
The prompt engineering required three specific constraints: explicit date calculations ("if today is Monday, next Friday is [specific date]"), project scope boundaries ("create separate database entries for each distinct project mentioned"), and task granularity rules ("break down any action item longer than one sentence into subtasks").
We also discovered that Claude's reasoning model performs better with meeting notes that include explicit context. Instead of just pasting raw transcripts, we now prepend a brief summary: meeting date, attendees, and primary objective. This gives the model enough context to make intelligent decisions about task priority and ownership.
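That preamble is trivial to automate. A sketch, assuming our own header convention (nothing about this format is required by the model):

```python
def add_meeting_context(transcript: str, meeting_date: str,
                        attendees: list[str], objective: str) -> str:
    """Prepend an explicit context header so the model never has to
    infer the date, participants, or goal from messy raw notes."""
    header = (
        f"Meeting date: {meeting_date}\n"
        f"Attendees: {', '.join(attendees)}\n"
        f"Primary objective: {objective}\n"
        "---\n"
    )
    return header + transcript
```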
What We Learned About AI Workflow Integration
**API timing matters more than prompt perfection.** We spent weeks optimizing prompts when the real issue was race conditions between API calls. Batch operations solved problems that no amount of prompt engineering could fix.
**Date handling requires explicit context.** AI models excel at pattern recognition but struggle with temporal reasoning. Any workflow involving deadlines needs explicit date anchoring—not relative references like "next week" or "soon."
**Database schema design constrains AI output.** Claude can populate existing Notion structures brilliantly, but it can't redesign your information architecture on the fly. The database schema becomes the workflow's limiting factor.
The most surprising discovery: meeting quality improved when participants knew the notes would become structured project plans automatically. People started speaking more precisely about deadlines and ownership because they knew their words would translate directly into task assignments.
The Current State
Our working system now processes meeting notes in under 60 seconds and creates properly structured Notion project pages with 95% accuracy. The remaining 5% requires manual cleanup—usually around task priority or deadline adjustments.
We've tested it across 23 different meeting types, from client kickoffs to internal sprint planning. The workflow handles most standard project management scenarios, but it struggles with highly technical discussions or meetings that jump between multiple unrelated topics.
The time savings are real. What used to take 30-45 minutes of manual formatting now happens automatically. But the bigger benefit is consistency—every project starts with the same structured foundation, regardless of who ran the meeting or how detailed their notes were.
Integration Gotchas
**Notion API rate limits hit faster than expected.** During bulk project creation, we exceeded the 3 requests per second limit and had to implement exponential backoff. Plan for rate limiting from day one.
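The backoff we settled on is the standard pattern: retry on a rate-limit response, doubling the delay each attempt with a little jitter. A generic sketch, where the RateLimited exception stands in for however your HTTP client surfaces a 429:

```python
import random
import time

class RateLimited(Exception):
    """Raised when the API answers 429 Too Many Requests."""

def with_backoff(call, max_retries: int = 5, base_delay: float = 0.5):
    """Run `call`, retrying on rate-limit errors with exponential
    backoff. Delays grow 0.5s, 1s, 2s, ... plus jitter so parallel
    workers don't retry in lockstep."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimited:
            if attempt == max_retries - 1:
                raise  # out of retries; let the caller handle it
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```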
**Claude's MCP doesn't handle Notion relation properties intuitively.** Linking tasks to projects requires explicit database IDs, not just project names. We had to build a lookup table that maps project names to Notion database entries.
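The lookup table itself is simple; the important part is failing loudly when a name has no known ID, instead of silently dropping the relation. A sketch, where the "Project" property name and the IDs are placeholders for our schema:

```python
# Placeholder lookup, refreshed in our workflow by querying Notion.
PROJECT_IDS = {
    "Website Redesign": "a1b2c3d4-0000-0000-0000-000000000000",
    "Q3 Onboarding": "e5f6a7b8-0000-0000-0000-000000000000",
}

def task_relation_property(project_name: str,
                           lookup: dict[str, str]) -> dict:
    """Build the relation property linking a task to its project page,
    raising rather than silently dropping an unknown project name."""
    try:
        page_id = lookup[project_name]
    except KeyError:
        raise ValueError(f"No Notion entry for project: {project_name!r}")
    return {"Project": {"relation": [{"id": page_id}]}}
```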
**Meeting transcript quality varies dramatically by source.** Zoom's auto-transcription works well for this workflow, but Google Meet's transcripts often miss technical terms and proper nouns that become critical for task assignment.
The workflow also revealed gaps in our existing project management process. When AI started creating consistent task structures, it became obvious which meetings lacked clear action items or ownership assignments. The automation forced us to run better meetings.
What We'd Do Differently
Start with database schema design, not prompt engineering. We wasted weeks optimizing Claude's output when the real constraint was our Notion database structure. Design your information architecture first, then build prompts that populate it effectively.
Build explicit date validation into the workflow. Instead of letting Claude interpret relative dates, we'd create a date normalization step that converts all temporal references to specific calendar dates before task creation.
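Resolving relative weekday references against the meeting date is a few lines of standard-library code. A sketch of what that normalization step might look like:

```python
from datetime import date, timedelta

WEEKDAYS = {"monday": 0, "tuesday": 1, "wednesday": 2, "thursday": 3,
            "friday": 4, "saturday": 5, "sunday": 6}

def resolve_next_weekday(anchor: date, weekday_name: str) -> date:
    """Resolve a reference like 'next Friday' against the meeting date,
    so the same notes produce the same deadline no matter when the
    workflow runs."""
    target = WEEKDAYS[weekday_name.lower()]
    days_ahead = (target - anchor.weekday() + 7) % 7
    if days_ahead == 0:
        # "Next Friday" spoken on a Friday means a week out, not today.
        days_ahead = 7
    return anchor + timedelta(days=days_ahead)
```

Anchoring to the meeting date rather than the run date is what eliminates the "next Friday means different things" bug from our first real test.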
Test with deliberately bad meeting notes from the beginning. Our early tests used clean, well-structured notes that didn't reflect real meeting chaos. Testing with actual messy transcripts would have revealed edge cases much earlier in development.