Sponsored by Hiya — Global Leader in Voice Security and Caller Trust

Designing human-in-the-loop AI handoffs with verifiable context, enabling fast, safe takeovers in clinics

Overview

Patients couldn’t reach clinics, then had to repeat themselves when someone finally picked up. I led cross-functional alignment and prototyped a two-sided system (AI for intake, staff for decisions) with verifiable handoffs, real-time emotion signals, and override controls. It preserved continuity while scaling access.

Role

Product Designer

Team

2 Designers, 1 Researcher

Platform

Healthcare

Timeline

7 months

Context

Voice AI Leader Explores Healthcare Communication

As a leader in voice AI and caller trust, Hiya explored healthcare to extend its capabilities into new high-stakes domains and to understand how AI-human collaboration should work in trust-critical patient communication.

200M+

Active Users

150B+

Calls Analyzed Annually

#1

in Caller Trust

Challenge

AI Efficiency Must Not Cost Patient-Clinic Trust

Hiya explored healthcare as a trust-critical beachhead. If AI felt robotic to patients or overreached for staff, trust would break fast: patients would leave, clinics would reject AI, and Hiya’s path into other high-stakes domains would narrow.

If AI sounds robotic

Patients disengage

If AI oversteps

Staff disable support

If handoffs drop context

Patient trust breaks

User Needs

Three Breakpoints in Care Calls: Access, Context, Response

From interviews with patients and clinic staff, I found three recurring breakpoints: patients wait to reach the clinic, lose context and repeat their story, or feel dismissed by rushed responses. Each one turns care into a transaction rather than a relationship.

Opportunity

AI Handles Intake, Humans Own Decisions

Staff can’t cover 24/7 demand. AI absorbs routine intake, but only within clear boundaries: I defined what AI can handle, what requires human judgment, and how context carries through the handoff so staff can respond without losing continuity.

Framing the problem

How might we design AI-human handoffs that preserve context and trust?

Concept Testing

Trust Requires Explainable Handoffs and Real-Time Signals

Testing with patients and clinic staff, I defined two trust-critical tradeoffs to prevent mis-triage and de-escalate distress: verifiability over speed at handoff, and actionable emotion signals over static scores during the call.

Strategy

Two-Sided Care, Connected by Verifiable Handoffs

I designed a two-sided system: an AI receptionist for 24/7 intake and a staff coordination tool for complex cases. A verifiable handoff packet transfers context, reasoning, and actions, so staff can take over without losing continuity.
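To make the packet concrete, here is a minimal TypeScript sketch of what it could carry. Every field name is an illustrative assumption for this case study, not Hiya’s actual schema.

```typescript
// Hypothetical shape of a verifiable handoff packet (illustrative only).
interface HandoffPacket {
  callId: string;
  patient: { name: string; callbackNumber: string };
  // Full transcript so staff can verify what was actually said.
  transcript: { speaker: "ai" | "patient"; text: string; at: string }[];
  actionsTaken: string[];   // e.g. "verified identity", "scheduled callback"
  aiReasoning: string[];    // why the AI escalated, step by step
  emotionTrend: { at: string; valence: number }[]; // -1 (distressed) to 1 (calm)
  escalationReason: string; // the single trigger staff verify first
}
```

The design intent is that staff can scan `escalationReason` and `aiReasoning` in seconds, then spot-check the transcript only when something looks off, trading a little speed at handoff for verifiability.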

Solution

AI Intake, Staff-Owned Care Decisions

I made the takeover verifiable and actionable with structured intake, handoff proof, live emotion cues, and staff override, so care stays continuous without slowing staff down.

AI Receptionist: Structures concerns upfront, cutting repeats and after-hours backlog

Verifiable Handoff: Transcript, actions, and reasoning in one packet for quick verification

Emotion Signals: Live trends and a near-term forecast that suggest the next step, not a score (see the sketch after this list)

Staff Control: Confirm, edit, override, keeping decisions accountable and safe
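As a minimal sketch of the emotion-signal idea: assume emotion is sampled as a valence score per utterance, and a simple linear trend is projected a short way ahead. The thresholds, time window, and suggested phrasings below are assumptions, not the tested design.

```typescript
// Illustrative only: thresholds and suggestions are assumptions.
type EmotionSample = { at: number; valence: number }; // at: seconds into call; valence: -1 (distressed) … 1 (calm)
type StaffAction = "confirm" | "edit" | "override";   // staff always keep the final call

function suggestNextStep(samples: EmotionSample[]): string {
  if (samples.length < 2) return "Keep listening";
  const first = samples[0];
  const last = samples[samples.length - 1];
  // Linear trend over the call so far, projected 30 seconds ahead (naive forecast).
  const slope = (last.valence - first.valence) / Math.max(last.at - first.at, 1);
  const forecast = last.valence + slope * 30;
  if (forecast < -0.5) return "Acknowledge distress before the next triage question";
  if (slope < 0) return "Slow down and reflect back what the patient said";
  return "Proceed with routine intake";
}
```

The point of the sketch is the output type: a suggested action staff can confirm, edit, or override, rather than a raw score they have to interpret mid-call.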

Outcome

Healthcare Validated as a Trust-Critical Use Case

9

Staff sessions

“If AI already captured the patient’s emotional state, I can step in faster and help them feel safe.”
– Staff 1

10

Patient sessions

“I didn’t have to repeat myself. What I needed was already communicated, and I knew what to do next.”
– Patient 2

1

Use Case Validated

Healthcare emerged as a strong first use case, with clear next steps for refinement.

Reflections

Designing Trust for High-Stakes Handoffs

Design leadership in ambiguity

I set direction before the problem was clear. I ran future envisioning and tension mapping across patient and staff journeys, then turned those insights into decision frameworks that made AI–human trade-offs explicit and aligned stakeholders.

Using AI as a prototyping engine

I used Vapi and ChatGPT to prototype and test assumptions early. AI accelerated scripting and iteration, letting me explore more directions while keeping the system anchored to the places where human judgment must stay in control.
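For illustration, a throwaway intake-assistant prototype config might look roughly like this. The field names approximate Vapi’s assistant schema from memory and should be treated as assumptions; check the current docs before relying on any of them.

```typescript
// Sketch of an intake-assistant prototype config.
// Field names are assumptions, not Vapi's confirmed API.
const intakeAssistant = {
  name: "clinic-intake-prototype",
  firstMessage: "Hi, you've reached the clinic. How can I help today?",
  model: {
    provider: "openai",
    model: "gpt-4o",
    messages: [{
      role: "system",
      content:
        "Collect the caller's concern, urgency, and callback details. " +
        "Never give medical advice. Escalate any symptom to a human.",
    }],
  },
  // Boundary: anything beyond routine intake hands off to staff.
};
```

Prototyping at this level made it cheap to rewrite the system prompt between test sessions instead of re-scripting a full call flow.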

Making constraints work for trust

A “HIPAA-lite” exploration exposed a failure mode: flexibility without accountability breaks trust. I reframed compliance as a design principle, using baseline safeguards to create credibility for safe AI–human handoffs.

What’s Next

Three Experiments to Move from Concept to Pilot

Real-time support that changes staff behavior

Test if intent cues + emotion trends improve empathy more than raw scores, without slowing staff.

Minimum EHR context for handoff

Find the smallest auto-pulled EHR packet that’s enough to act on before staff return to records (a candidate starting set is sketched after these experiments).

Wait-time support without staff load

Prototype AI wait chats that reduce anxiety and repeats, with clear escalation and no added work.
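As a hypothetical starting point for the minimum-EHR-context experiment, the smallest packet might look like the sketch below. The fields are assumptions to test against, not a validated set.

```typescript
// Hypothetical minimum EHR context for the handoff experiment (assumption).
interface MinimalEhrContext {
  patientId: string;
  allergies: string[];
  activeMedications: string[];
  lastVisit: { date: string; reason: string };
  openOrders: string[]; // pending labs or referrals staff may need to act on
}
```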

Conclusion

A Tangible Operating Model for Trust-Critical Care

Across multiple readouts and an alignment workshop with Hiya teams and leadership, I translated domain exploration into a shared operating model, clear AI vs. human boundaries, and trust guardrails for patient–AI and staff–AI takeovers.

🍵 🐦‍⬛

Let’s talk design or anything that sparks curiosity.

© Rebecca Hsiho, 2026
