Building a Real Estate AI Chatbot: What We Learned
We set out to solve a problem that keeps real estate agents awake at night: leads going cold while they sleep. The goal was simple—build an AI chatbot that could capture property inquiries, qualify prospects, and schedule viewings without human intervention. What we discovered changed how we think about automated lead generation entirely.
The trigger was watching a client lose a qualified buyer because they responded to a Saturday evening inquiry on Monday morning. By then, the prospect had already scheduled three viewings with competitors. According to HubSpot's Sales Trends Report, lead response rates drop roughly 80% after five days, but we learned the real damage happens much faster in real estate.
The Architecture We Built
Our first attempt used a monolithic approach—one massive n8n workflow handling everything from initial contact to viewing confirmation. The workflow looked impressive in the editor: 47 nodes connected in a sprawling web of conditional logic.
It broke within the first week.
The problem wasn't technical complexity. The issue was data handoffs. When a prospect asked about school districts, the property recommendation engine needed to wait for the location parser, which needed to wait for the intent classifier. Everything sat idle while one component processed.
We had made this mistake before. Our first Autonomous SDR used a flat three-agent architecture in which research, scoring, and writing all reported to a single orchestrator. It worked on 5 leads. At 50, the scorer sat idle waiting on research that had nothing to do with scoring. Splitting the pipeline into discrete agents with handoff contracts between them cut end-to-end processing time and made each agent independently testable.
We rebuilt using what ForgeWorkflows calls modular swarm architecture:
- Intake Agent: Captured initial inquiries via Telegram webhook, extracted contact details and property preferences
- Qualification Agent: Scored leads based on budget, timeline, and location preferences stored in Supabase
- Recommendation Agent: Matched qualified prospects to available properties using vector similarity search
- Scheduling Agent: Coordinated viewing appointments through calendar integration
Each agent operated independently with explicit data contracts. When the intake agent finished processing, it triggered the qualification agent with a structured payload. No waiting, no bottlenecks.
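A minimal sketch of what such a handoff contract can look like in Python. The field names and the `IntakePayload` type are our own assumptions for illustration, not the production schema:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class IntakePayload:
    """Contract the Intake Agent emits and the Qualification Agent consumes."""
    prospect_id: str
    budget_max: int           # whole currency units
    timeline_months: int      # desired move-in horizon
    preferred_areas: list     # location preferences as plain strings

def intake_finished(raw: dict) -> IntakePayload:
    # Validate and coerce at the boundary so downstream
    # agents never have to guess at the payload shape.
    return IntakePayload(
        prospect_id=raw["prospect_id"],
        budget_max=int(raw["budget_max"]),
        timeline_months=int(raw["timeline_months"]),
        preferred_areas=list(raw.get("preferred_areas", [])),
    )

payload = intake_finished({
    "prospect_id": "p-101",
    "budget_max": "450000",        # arrives as a string from the webhook
    "timeline_months": 4,
    "preferred_areas": ["Northside"],
})
print(asdict(payload))
```

Because the contract is explicit, each agent can be tested against fabricated payloads without running the agents upstream of it.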
What Broke First
The qualification scoring logic failed spectacularly. We built a point-based system: budget match (25 points), location preference (20 points), timeline urgency (15 points). Seemed logical.
Real prospects don't fit point systems. A buyer with a flexible budget but strict school district requirements scored lower than someone with exact budget match but no timeline. The high-scoring leads converted poorly while manually-flagged "low-quality" prospects bought properties within weeks.
We switched to boolean qualification gates instead of numeric scoring. Budget within range? Yes/no. Timeline under six months? Yes/no. Location match? Yes/no. Three yes answers triggered immediate agent notification. Two yes answers went to a nurture sequence. One or zero got a polite decline.
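The gate logic reduces to a few lines. A sketch in Python, with invented field names standing in for whatever the qualification agent actually stores:

```python
def route_lead(lead: dict) -> str:
    """Route a lead by counting boolean qualification gates, not points."""
    gates = [
        lead["budget"] <= lead["max_listing_price"],  # budget within range?
        lead["timeline_months"] <= 6,                 # timeline under six months?
        lead["location_match"],                       # preferred area available?
    ]
    passed = sum(gates)                               # bools sum as 0/1
    if passed == 3:
        return "notify_agent"       # immediate human follow-up
    if passed == 2:
        return "nurture_sequence"   # automated drip until a gate flips
    return "polite_decline"

print(route_lead({"budget": 400_000, "max_listing_price": 450_000,
                  "timeline_months": 3, "location_match": True}))
```

The appeal over a weighted score is that every routing decision is explainable: an agent can see exactly which gate a lead failed.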
Conversion rates improved immediately because agents focused on genuinely qualified prospects rather than chasing algorithmic scores that meant nothing.
The Supabase Integration Challenge
Storing conversation history in Supabase seemed straightforward until we hit the relationship mapping problem. Each prospect could inquire about multiple properties. Each property could interest multiple prospects. Each conversation thread needed to maintain context across multiple sessions.
Our initial schema used a flat conversations table with JSON fields for property preferences. Querying became a nightmare. Finding all prospects interested in 3-bedroom homes required parsing JSON across thousands of records.
The solution was proper normalization:
- `prospects` table: contact information and qualification status
- `properties` table: listing details with searchable attributes
- `interests` table: many-to-many relationship between prospects and properties
- `conversations` table: message history linked to prospect ID
This structure enabled fast queries and made conversation context retrieval instant. When a prospect returned after two weeks, the chatbot immediately knew their previous property interests and conversation history.
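The payoff shows up in query code. A toy in-memory sketch of the lookup the old JSON schema made painful, using lists of dicts to stand in for the normalized tables (table and column names are assumptions; in production this is a single SQL join in Supabase/Postgres):

```python
# Normalized rows standing in for the properties and interests tables.
properties = [
    {"id": "h1", "bedrooms": 3},
    {"id": "h2", "bedrooms": 4},
    {"id": "h3", "bedrooms": 3},
]
interests = [  # many-to-many join table: prospect <-> property
    {"prospect_id": "p1", "property_id": "h1"},
    {"prospect_id": "p2", "property_id": "h2"},
    {"prospect_id": "p3", "property_id": "h3"},
]

def prospects_interested_in(bedrooms: int) -> set:
    """One join over indexed columns instead of parsing JSON blobs per row."""
    matching = {p["id"] for p in properties if p["bedrooms"] == bedrooms}
    return {i["prospect_id"] for i in interests if i["property_id"] in matching}

print(sorted(prospects_interested_in(3)))
```

With the flat JSON schema, the same question meant deserializing a preferences blob on every conversation record; here it is two set lookups over columns the database can index.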
Telegram vs. Traditional Chat Widgets
We chose Telegram over website chat widgets for a counterintuitive reason: persistence. Website visitors close browser tabs. Telegram conversations stay in the prospect's message list indefinitely.
This created an unexpected behavior pattern. Prospects would start a conversation, get distracted, then return hours or days later to continue exactly where they left off. The chatbot maintained full context, making the delayed conversation feel natural.
Traditional chat widgets lose context when the session ends. Prospects who return start over, creating friction and abandonment. Telegram's persistent conversation model eliminated this problem entirely.
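Persistence falls out of keying all conversation state on Telegram's stable chat ID rather than on a browser session. A sketch, with an in-memory `defaultdict` standing in for the Supabase `conversations` table (the handler name and store are invented for illustration):

```python
from collections import defaultdict

# chat_id -> prior messages; a Supabase table in production.
history = defaultdict(list)

def handle_update(chat_id: int, text: str) -> list:
    """Append the incoming message and return the full context for the model."""
    history[chat_id].append(text)
    return list(history[chat_id])

handle_update(42, "Any 3-bed homes near Lincoln Elementary?")
# Days later the same chat_id resumes, and the full context comes back with it.
context = handle_update(42, "Still available?")
print(context)
```

A session-scoped chat widget would have discarded the first message when the tab closed; the chat-ID key makes the two-week gap invisible to the bot.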
The tradeoff was initial setup complexity. Getting prospects to start a Telegram conversation required more explanation than clicking a website chat button. But once engaged, Telegram users stayed engaged longer and converted at higher rates.
Unexpected Automation Insights
The biggest surprise wasn't technical—it was behavioral. Prospects treated the AI chatbot differently than human agents in ways that improved qualification accuracy.
People disclosed budget constraints more honestly to the bot. They admitted timeline flexibility they wouldn't reveal to human agents. They asked basic questions without embarrassment. This raw honesty made automated qualification more accurate than traditional phone screening.
The chatbot also eliminated the pressure tactics that often backfire in real estate. No pushy follow-ups, no artificial urgency. Prospects could explore options at their own pace while the system captured genuine interest signals.
However, the automation had clear limits. Complex negotiations, emotional concerns about neighborhoods, and family decision dynamics still required human intervention. The chatbot excelled at information gathering and initial matching but couldn't replace relationship building.
For teams handling outbound prospecting alongside inbound chatbot leads, we built the Outbound Prospecting Agent to maintain consistent follow-up across both channels. The setup guide shows how to coordinate automated outreach with chatbot conversations to avoid duplicate contact.
What We'd Do Differently
Start with boolean qualification gates, not scoring algorithms. Point-based systems feel scientific but don't reflect how real estate decisions actually work. Simple yes/no criteria identify qualified prospects more reliably.
Design for conversation persistence from day one. Choose platforms and architectures that maintain context across sessions. Prospects rarely complete qualification in a single conversation, and starting over kills conversion.
Build explicit handoff contracts between automation components. Implicit data passing creates bottlenecks that aren't obvious until you scale. Every agent should know exactly what data format it receives and what format it outputs.