
Why we built an intervention shield into our AI host assistant

Full automation is a trap. Here's how our "co-pilot, not auto-pilot" philosophy keeps humans in the loop when it actually matters.

7 min read · Published April 22, 2026 · Category: Product


The first version of our AI host assistant was fully autonomous. You connected Smoobu, enabled the AI, and it handled every guest message from that point forward. Clean. Simple. Easy to explain.

It took us about three weeks of real-world testing to realize that was a mistake.

Here's what happened. A guest arrived at an apartment and found the door lock wasn't working. They messaged the host at 11 PM. The AI, trained on the apartment guide, replied confidently: "The door unlocks with code 4827 — please try again holding the handle down for 3 seconds." Perfectly logical. Perfectly unhelpful — because the battery in the smart lock had died an hour earlier and no code was going to work.

The host got the first inkling something was wrong only when the guest's third message came through. By then the guest was standing in the rain, furious. The review was 2 stars. The host tried to call us. We called the developer who'd shipped the AI.

And we took a very hard look at the architecture.

The illusion of automation

Here's the thing about fully automated guest messaging: it works perfectly 95% of the time, and it fails catastrophically 5% of the time. That failure rate is fine if you're a customer-support SaaS and a bad reply generates a ticket. It's not fine if you're a short-term rental host, because the feedback loop is a public one-star review that costs you €3,000 in future bookings.

The industry response has been to double down on AI quality — better models, more training data, tighter prompts. That helps, but it doesn't solve the root problem. The root problem is that the AI doesn't know what it doesn't know.

A guest says "the place is too hot." The AI confidently explains how to use the thermostat. It doesn't know the thermostat was broken last Tuesday and a repair technician was supposed to come but didn't. The AI can't know that. Only the host can.

Full automation assumes the AI has all the information. It never does.

The co-pilot frame

We rebuilt the system around a different premise: the AI is a co-pilot, not an auto-pilot. Its job is to handle the obvious 80% of guest messages — the ones where the apartment guide has the answer, the situation is calm, and the guest just needs a quick reply. Anything outside that 80% gets handed to the host.

Three mechanisms make this work.

1. The intervention shield

The moment a host replies to a guest — from the web app, from the Smoobu app, from any channel — we flag that conversation as "host intervention active" and the AI retreats for 60 minutes. During that hour, no matter how many guest messages arrive, the AI does not reply. The host is in the conversation now; two voices would just confuse the guest.

After 60 minutes without further host activity, the AI tentatively returns. It processes the latest guest message, decides whether to reply or escalate, and proceeds as normal. But it always knows that the host recently touched this thread, and it's cautious about what it says for the rest of the stay.
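At its core, the shield is just a timestamp check per conversation. Here's a minimal sketch; the constant and field names (`PAUSE_WINDOW`, `last_host_reply_at`) are ours for illustration, not the real schema:

```python
from datetime import datetime, timedelta
from typing import Optional

# Illustrative pause window; the post describes 60 minutes.
PAUSE_WINDOW = timedelta(minutes=60)

def ai_may_reply(last_host_reply_at: Optional[datetime], now: datetime) -> bool:
    """The AI stays silent while the most recent host reply is under an hour old."""
    if last_host_reply_at is None:
        return True  # no host intervention on this thread yet
    return now - last_host_reply_at >= PAUSE_WINDOW
```

Every guest message runs through this gate before the AI is even allowed to draft a reply, which is why no flood of messages can pull the AI back in during the host's hour.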

2. Sentiment-driven escalation

The AI reads every incoming guest message and scores it for emotional signal. Anger, urgency, distress, confusion — each gets a weight. Above a threshold, the AI stops and escalates immediately. No reply. The host gets a Telegram card within seconds.

Thresholds are tuned conservatively: we prefer false-positive escalations (the host gets a ping for something the AI could have handled) to false negatives (the AI handles something that needed a human). A missed escalation can cost you a 5-star review. A false-positive escalation costs the host 10 seconds to dismiss.
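The scoring logic is conceptually simple. Here's a hedged sketch; the signal names match the ones above, but the weights and the threshold are made-up illustrative values, not our production tuning:

```python
# Assumed weights per emotional signal (0..1 strengths come from a classifier).
SIGNAL_WEIGHTS = {
    "anger": 0.9,
    "urgency": 0.6,
    "distress": 0.8,
    "confusion": 0.3,
}

# Illustrative threshold; tuned conservatively so false positives win.
ESCALATION_THRESHOLD = 0.7

def escalation_score(signals: dict) -> float:
    """Take the strongest weighted signal as the conversation's score."""
    return max(
        (SIGNAL_WEIGHTS.get(name, 0.0) * strength
         for name, strength in signals.items()),
        default=0.0,
    )

def should_escalate(signals: dict) -> bool:
    return escalation_score(signals) >= ESCALATION_THRESHOLD
```

With conservative weights like these, a strongly angry message always escalates, while mild confusion stays with the AI.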

3. Unknown-unknown guard

When a guest asks a question that isn't covered by the apartment guide, the AI doesn't improvise. It escalates. "Is there a swimming pool?" — if the guide doesn't mention a pool, the AI won't say "no, there isn't" — it'll escalate, because maybe the guide is just incomplete.

This is the rule that prevents the broken-lock disaster. The AI can only say what the guide knows. If the guide doesn't know, the host decides.
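A minimal sketch of the guard, with the retrieval step stubbed as a naive keyword check (the real system would do proper retrieval over the apartment guide; the function and placeholder reply are ours):

```python
def answer_or_escalate(question: str, guide: str):
    """Escalate unless the guide plausibly covers the question."""
    # Naive stand-in for retrieval: does any substantial word from the
    # question appear in the guide text?
    covered = any(
        word in guide.lower()
        for word in question.lower().split()
        if len(word) > 3
    )
    if not covered:
        return ("escalate", None)  # the guide doesn't know; the host decides
    return ("reply", "answer grounded in the guide")
```

The asymmetry is deliberate: a wrong "no, there isn't a pool" is worse than one extra ping to the host.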

An AI co-host that knows when to step back.

Reply once yourself and the AI pauses for the window you set. No duplicate messages, no bulldozing.

Try it free

How escalations actually reach you

The host-facing surface for all this is Telegram. We chose it over email, SMS, and push notifications because it combines three properties: it reaches you instantly wherever you are, it shows rich cards with actionable buttons, and the "seen" state is honest (unlike email, which you can technically have open without actually reading).

An escalation card looks like this:

🚨 Escalation — Sunset Studio
Guest: Sarah M. (arriving tomorrow)
Message: "The door code you sent doesn't work. I'm outside. Nothing is opening."

Suggested reply: (AI's best guess, for reference)

[ 🔕 Pause 1h ] [ ✅ Solve ] [ 💬 Reply ]

Three buttons. "Pause 1h" silences the AI on this thread for an hour so you can handle it without the AI jumping back in. "Solve" marks it resolved and advances the automation watermark so the thread won't re-escalate. "Reply" opens a swipe-reply text box right in Telegram — your text goes straight to the guest via Smoobu, and the AI pauses for an hour automatically.
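Behind the card, each button maps to a small state change on the conversation thread. This is a simplified sketch, not our Telegram bot code; the action strings and thread fields are illustrative:

```python
def handle_button(action: str, thread: dict) -> dict:
    """Apply one of the three card actions to a conversation thread."""
    if action == "pause_1h":
        thread["ai_paused_minutes"] = 60       # silence the AI on this thread
    elif action == "solve":
        thread["resolved"] = True
        # Advance the automation watermark so the thread won't re-escalate.
        thread["watermark"] = thread.get("last_message_id", 0)
    elif action == "reply":
        thread["awaiting_host_reply"] = True
        thread["ai_paused_minutes"] = 60       # a host reply also pauses the AI
    return thread
```

Note that "Reply" implies "Pause": the intervention shield from earlier kicks in automatically the moment the host's text goes out.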

Every action you take syncs back to the web app instantly. You can manage the most intense situations from a train, in the garden, or in bed at 2 AM — without ever opening a laptop.

The escalation heartbeat

One more piece. If an escalation sits unresolved for 15 minutes, we re-ping. Another 15, we ping again. Up to three times, then it goes quiet — we trust that you either can't or won't respond, and we don't want to become spam.

This exists because hosts are human. You miss notifications. You silence Telegram during dinner. An important escalation should not die because the first ping arrived at exactly the wrong moment.
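The heartbeat schedule above (initial card, then a re-ping every 15 minutes, capped at three) can be sketched as a pure function of how long the escalation has sat unresolved; the names are ours:

```python
REPING_INTERVAL_MIN = 15  # minutes between reminder pings
MAX_REPINGS = 3           # then we go quiet rather than become spam

def pings_due(minutes_unresolved: int) -> int:
    """Initial ping plus up to three re-pings at 15-minute intervals."""
    repings = min(minutes_unresolved // REPING_INTERVAL_MIN, MAX_REPINGS)
    return 1 + repings
```

So an escalation ignored for an hour produces exactly four notifications, never more.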

What this philosophy costs us

Being a co-pilot is strictly more expensive than being an auto-pilot. Every escalation path costs developer attention (the escalation logic needs continual tuning). Every host interaction adds infrastructure load (a round-trip to Telegram and back). Every false-positive escalation costs the host a few seconds of annoyance.

But the math works out. A host who trusts the AI because it knows its limits enables Autopilot on all five apartments. A host who gets burned by an overconfident AI once disables it forever. The value of "I can actually leave this on" dwarfs the cost of occasional extra pings.

Building trust with hosts

We believe the next generation of AI products will be judged not by how much they do autonomously, but by how well they know what they don't know. The intervention shield is our first attempt at that. It's not perfect. But it's the reason our users sleep at night with the AI turned on — and that's the bar we measure ourselves against.

See the co-pilot philosophy in action.

Enable Autopilot when you're ready. Disable it in one click. Watch how the escalations arrive.

Try Virtual Host AI