Sponsored by Hiya — Global Leader in Voice Security and Caller Trust

Exploring AI trust in 2030 outpatient clinics

Defining guardrails that anchor confidence in care

Overview

In 2030, AI will be embedded in how patients reach care, yet the phone will remain the clinic’s front door. The question is no longer what AI can do, but how it should behave to be trusted in sensitive contexts. Partnering with Hiya, I explored outpatient clinics as a domain for applying its strengths in voice and identity.

Scope

This project had two phases. During the capstone, our team explored how AI could rebuild patient–clinic trust through research and iterative concepts. Afterward, I synthesized the learnings into a set of trust guardrails and a Framework of Trust, creating a transferable model for AI communication beyond healthcare.

Outcome

Designed an AI receptionist and a staff assistant that triage requests, reassure patients, and pass context to maintain clarity, continuity, and trust in patient–staff communication.

My Role

Defined the design vision and alignment strategy to establish how trust should shape human–AI communication, and developed a Framework of Trust to guide AI design across domains.

Team

2 Designers, 1 Researcher

Timeline

Jan – Aug 2025

Platform

Healthcare Communication (Voice AI)

The Solution at a Glance

Where it Began

Exploring Healthcare as Hiya’s 2030 Entry Point

Reframing where trust matters most

Hiya’s reputation in caller trust and voice security protects millions each year. Building on that base, we reframed the question from “what to build next” to “where trust matters most for voice AI by 2030.”

200M+

Active Users

150B+

Calls Analyzed Annually

#1

in Caller Trust

Clinic calls as the first test of care

Outpatient clinics remain phone-first, with 30% of calls tied to scheduling and no-shows. As care’s first and most emotional touchpoint, these calls reveal where empathy and accountability slip. With Hiya, we explored how voice AI could rebuild confidence in these moments.

My Role

Framing How AI Builds Trust in 2030 Healthcare

During project: Driving vision and alignment

Led the design vision and alignment for Hiya’s voice-AI exploration in 2030 healthcare, framing future scenarios and prototyping patient–AI–staff handoffs to define how trust should shape human–AI interaction.

After project: Systematizing trust principles

Extended the project beyond healthcare to transform fragmented insights into a cohesive Framework of Trust that defined principles for clear and confident human–AI communication.

Research

Fragmented Communication, Frustrated Care

In 2025, patients waited on hold; by 2030, AI handles the routine while providers bring reassurance

My Impact

Led tension synthesis to reveal how trust broke across patients and staff.

Framed and ran a workshop that turned tensions into shared focus.

Approach

Uncovering Where Trust Breaks and Why It Matters

We noticed trust often broke not at errors but at moments of silence between patients and staff. Through interviews and workshops, we learned confidence grows when clarity and reassurance move together.

Current Breakdown

2025: Broken Calls, Frayed Connection

We realized breakdowns came not from mistakes but from moments when patients and staff lost each other’s attention. Missed calls and repeated questions showed how small gaps quietly wear down care.

Reach Clinic: Patients on hold, staff overloaded

Patients wait on hold while staff juggle multiple ringing lines and front-desk tasks. Both sides grow anxious as patients feel ignored and staff feel stretched thin.

Get in touch: Patients repeat, staff lack context

Patients repeat the same details across calls while staff dig through disconnected records. Context is lost, conversations stall, and neither side feels in sync.

Unanswered concerns: Patients dismissed, staff rushed

When messages go unanswered, patients lose clarity and confidence while staff under pressure give rushed or vague replies. Speed replaces care and trust slips away.

Key Tensions

Mapping the Boundaries of Human–AI Collaboration

In a workshop with Hiya’s teams, we reframed patient–staff friction into shared human–AI tensions. Everyday misalignments began to trace the deeper boundaries shaping emotion, understanding, and control.

Clarity vs Ambiguity: Confusion breaks credibility

Everyone needed to know who they were talking to and what came next. Patients wanted clear handoffs, staff needed visibility into AI’s role, and Hiya sought clarity without overexplaining.

Reliability vs Fragmentation: Broken systems break confidence

Everyone needed communication to stay consistent and accurate. Patients wanted info to carry over, staff relied on context, and Hiya worked to route the right details to the right person.

Efficiency vs Empathy: Speed challenges sincerity

Everyone wanted faster, more caring communication. Patients expected quick reassurance, staff needed efficient tools that still felt human, and Hiya aimed to prove automation could feel warm.

Oversight vs Autonomy: Control adds cognitive load

Everyone wanted AI that helped without overstepping. Patients needed to feel humans stayed in charge, staff sought oversight without extra work, and Hiya built accountability that stayed light.

Market Scan

Everyone Solved for Care, Just Not Together

By 2025, we saw each product fix one part of communication (clarity, empathy, context, or oversight) but rarely all at once. Care advanced in fragments, never as one connected system.

Future Roles

2030: AI Handles Routine, Humans Carry Empathy

Building on 2025 tools that automated fragments of care, we explored how responsibility might rebalance by 2030. Routine shifted to AI, while reassurance and judgment remained human.

Opportunity

Designing Seamless Human–AI Handoffs

As routine calls move to AI, staff face fewer but weightier moments of judgment. We saw the next opportunity in designing handoffs where confidence flows seamlessly across patients, AI, and staff.

Frame the Problem

How might we design patient–AI–staff handoffs that sustain trust and empathy across care moments?

Design Probes

Probing Communication Across Boundaries

To move from foresight to evidence, we turned each workshop tension into a design probe. Each tested a key handoff to see how confidence could travel through changing roles and context.

Evolving the Design

From Reliable Flow to Emotional Balance

Examining how AI and staff share calls while keeping care steady and reassuring

My Impact

Shaped patient–AI dialogues that turned empathy into clearer first contact.

Defined staff handoff and escalation to balance empathy with oversight.

Phase 1

Exploring Clarity and Reliability in Handoffs

Breakdowns often surfaced at the handoff, when patients moved between AI and staff. We examined how clearer context and steadier routines might keep communication consistent across shifting roles.

Testing how clarity and reliability hold across roles

We translated workshop tensions into two probes: Contextual Handoff for clarity, passing emotional context between callers and staff, and Routine Automation for reliability, maintaining flow through follow-ups.

Designing clear and seamless handoffs

We designed an AI receptionist to manage intake, safety, and scheduling while keeping conversations consistent. It revealed how reliability under pressure depends on empathy balanced with clear transitions.
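
To make that scope concrete, below is a minimal sketch of how such a receptionist could route each turn: intake and scheduling stay automated, while safety flags and ambiguity always hand off to staff. The types, phrases, and threshold are illustrative assumptions, not Hiya’s implementation.

```typescript
// Illustrative routing for the Phase 1 receptionist: automate routine
// intents, escalate anything safety-related or unclear. All names here
// are assumptions for the sketch.

type Intent = "schedule" | "refill" | "symptoms" | "unknown";

interface CallContext {
  intent: Intent;
  transcript: string[];   // running dialogue, oldest turn first
  urgencyFlags: string[]; // phrases matched by the safety screen
}

// Safety first: these phrases always route to a human, never automation.
const RED_FLAGS = ["chest pain", "can't breathe", "allergic reaction"];

function screenForSafety(utterance: string, ctx: CallContext): void {
  for (const flag of RED_FLAGS) {
    if (utterance.toLowerCase().includes(flag)) ctx.urgencyFlags.push(flag);
  }
}

// Decide each turn: keep the AI on routine work, hand off everything else.
function nextStep(ctx: CallContext): "automate" | "handoff" {
  if (ctx.urgencyFlags.length > 0) return "handoff"; // safety overrides all
  if (ctx.intent === "schedule" || ctx.intent === "refill") return "automate";
  return "handoff"; // clinical or ambiguous questions go to staff
}
```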

Phase 2

Refining Communication for Emotional Balance

Functional reliability held, but reassurance still cracked. We explored how empathy and oversight could sustain confidence under pressure, and how AI might help them coexist without overstepping in care.

Testing how empathy and oversight coexist

We translated these tensions into two probes: Sentiment Cue for empathy, highlighting tone shifts to guide awareness, and Response Support for oversight, surfacing key details to steady staff judgment.

AI receptionist for patients: Practicing safe empathy under oversight

The AI receptionist showed care within limits, clarifying next steps while keeping context visible. It revealed how emotional reassurance depends on clear scope and transparent boundaries.

AI assistant for staff: Balancing efficiency and empathy under pressure

The AI assistant combined Sentiment Cue and Response Support to keep empathy steady amid urgency. By grounding emotion in context, it showed how balanced judgment sustains confidence in complex calls.
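
A rough sketch of how the two probes could sit together in the assistant’s panel: the Sentiment Cue reads a trend across patient turns rather than a single raw score, and Response Support pins the key details beside it. The scale, thresholds, and field names are assumptions for illustration.

```typescript
// Sentiment Cue + Response Support, sketched as one panel update per turn.
// Sentiment runs from -1 (distressed) to +1 (calm); any classifier works.

interface Turn {
  speaker: "patient" | "ai" | "staff";
  text: string;
  sentiment: number;
}

interface AssistPanel {
  cue: "steady" | "tensing" | "escalating"; // what staff actually see
  keyDetails: string[];                     // Response Support: facts to confirm
}

const avg = (xs: number[]) =>
  xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0;

function buildPanel(turns: Turn[], keyDetails: string[]): AssistPanel {
  // Compare recent patient turns to earlier ones: a falling trend is a cue
  // even when no single score looks alarming on its own.
  const scores = turns.filter(t => t.speaker === "patient").map(t => t.sentiment);
  const drift = avg(scores.slice(-3)) - avg(scores.slice(0, -3));
  const cue = drift < -0.4 ? "escalating" : drift < -0.15 ? "tensing" : "steady";
  return { cue, keyDetails };
}
```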

“Routine handoffs kept calls efficient and consistent. But emotion-heavy moments still slipped through, revealing where trust cracked.”

Integration

From Breakdowns to Guardrails

Testing where confidence held or cracked, shaping balanced collaboration

My Impact

Built a Vapi prototype to test real dialogues and expose trust cracks.

Translated findings into guardrails for trustworthy human–AI collaboration.

Patient Testing

When Empathy Comforts but Timing Cracks

Simulating patient calls under uncertainty

We recreated urgent calls where patients were anxious and couldn’t recall medication. AI clarified context and relayed details to staff, showing how steady communication can calm uncertainty.
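
For reference, a trimmed sketch of how a receptionist like this might be stood up through Vapi’s assistant-creation API; treat the payload shape as approximate rather than canonical, and the prompt as a condensed stand-in for the real one.

```typescript
// Creating a prototype receptionist via Vapi's REST API. The system prompt
// encodes the behaviors under test: self-identify as AI, gather context,
// and summarize before any handoff to staff.

const SYSTEM_PROMPT = `You are a clinic receptionist AI. Identify yourself as
an AI immediately. Collect the caller's name, reason for calling, and current
medications. If the caller reports urgent symptoms, say you are connecting a
staff member and summarize what you have heard so far.`;

async function createReceptionist(apiKey: string) {
  const res = await fetch("https://api.vapi.ai/assistant", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "clinic-receptionist-probe",
      firstMessage: "Hi, you've reached the clinic. I'm an AI assistant. How can I help?",
      model: {
        provider: "openai",
        model: "gpt-4o",
        messages: [{ role: "system", content: SYSTEM_PROMPT }],
      },
    }),
  });
  return res.json();
}
```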

When empathy overreaches and loses credibility

Patients stayed calm when AI remained transparent and carried context forward. When tone felt scripted or safety checks lagged, reassurance turned hollow and care lost credibility.

Staff Testing

When Automation Supports but Judgment Fades

Testing staff walkthrough of urgent call

We ran the same urgent-call scenario with staff. Using AI-shared context and cues, they confirmed details and guided patients toward safe next steps, revealing how oversight can steady routine work.

When automation outpaces judgment

Staff stayed confident when AI handled routine tasks and passed clear context. But when emotion scores lacked clarity or edits locked out discretion, confidence thinned and judgment lost ground.

Synthesis

From Scattered Tensions to Shared Guardrails

Testing revealed consistent patterns of balance. Patients valued warmth that stayed grounded, while staff relied on clarity and control. Together these insights formed four guardrails for confident collaboration.

Transparency

Patients shouldn’t wonder who they’re speaking to; staff need to see how AI decides.

Continuity

Patients shouldn’t feel dropped; staff need smooth carry-over across tasks.

Resonance

Patients expect warmth that feels caring; staff need cues that keep empathy real.

Accountability

Patients deserve accountable care; staff need control to review and override.

Trust Framework

From Guardrails to a Living System

Weaving four principles into one model aligning patients, staff, and AI

My Impact

Synthesized guardrails and tensions into a framework of trust.

Trust Framework

A System for Sustaining Confident Collaboration

Assurance in care isn’t built by single rules, but by coordination. This framework weaves together three mechanisms that keep clarity, empathy, and authority aligned across people and AI: Flow, Layer, and Balance.

Mechanisms

Scaling Assurance Across People and AI

Flow: Connecting clarity across handoffs

Trustworthiness held when information and emotion moved as one. Flow captured that precision in handoff, helping patients feel remembered and staff step in smoothly when context changed.

Layer: Attuning support to shifting needs

Assurance deepened when assistance flexed with need. Under stress, warmth surfaced; when focus returned, guidance eased. Layer let care breathe: deep when needed, light when steady.

Balance: Holding empathy and authority in view

Care felt steady only when empathy and authority stayed visible together. When tone grew too personal, trust thinned; when human control faded, judgment slipped. Balance kept both in sight.

Final Concept

Where Principles Meet Practice

Translating Flow, Layer, and Balance into care that keeps patients and staff reassured

My Impact

Refined patient–AI–staff handoff flows based on evaluation insights to strengthen reassurance.

Goal

Two Roles, One Continuous Flow

Patients repeated details while staff re-verified without context. The AI receptionist and assistant share one flow, keeping clarity and empathy connected end to end.

AI Receptionist

Starting Flow with Clarity and Care

Patients felt lost repeating symptoms to different people. The AI receptionist recalls context and explains next steps, helping first contact feel clear and reassuring.

Call Assistant

Flow, Layer, and Balance in Action

Before the call: Starting with context through Flow

Staff once started calls blind. AI now shares patient history and response support upfront, cutting repetition and setting a smoother, more confident start.
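
A sketch of what that pre-call packet might contain, with Response Support folded in; the field names below are assumptions for illustration, not a production schema.

```typescript
// The handoff packet staff see before picking up: carried context (Flow)
// plus Response Support. Nothing here asks the patient to repeat themselves.

interface HandoffPacket {
  patientId: string;
  reasonForCall: string;                              // in the patient's own words
  priorContacts: { when: string; summary: string }[]; // what was already said
  emotionalState: "calm" | "anxious" | "frustrated";  // from the AI's call
  suggestedOpeners: string[];                         // how to ease in
  unresolvedQuestions: string[];                      // what the AI could not answer
}

const example: HandoffPacket = {
  patientId: "p-0192",
  reasonForCall: "Refill ran out and the new pharmacy has no record of it",
  priorContacts: [{ when: "2030-03-02", summary: "Asked about refill timing" }],
  emotionalState: "anxious",
  suggestedOpeners: ["Acknowledge the earlier call before asking anything new"],
  unresolvedQuestions: ["Which pharmacy should receive the prescription?"],
};
```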

During the call: Maintaining rhythm through Layer

Under stress, staff missed empathetic tone. AI senses shifts and suggests pacing, keeping empathy measured and care steady.

Sentiment cue

AI flags subtle emotion changes and phrasing gaps, guiding staff to sound genuine without losing clarity.

Task automation

AI completes routines such as scheduling mid-call, keeping patients informed and conversations unbroken.

After the call: Closing with confidence through Balance

Follow-ups often slipped between systems. AI drafts notes for review and override, keeping oversight intact and care accountable.
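
A sketch of how that review step could be enforced in code: the AI only ever drafts, and every note that leaves the clinic records a named human decision. The workflow states are illustrative assumptions.

```typescript
// Balance after the call: AI drafts, staff approve or override, and the
// record always names the reviewer.

type NoteStatus = "draft" | "approved" | "overridden";

interface FollowUpNote {
  callId: string;
  aiDraft: string;     // never sent without review
  finalText?: string;  // what staff actually signed off on
  status: NoteStatus;
  reviewedBy?: string; // accountability: a named human, always
}

function review(note: FollowUpNote, staffId: string, rewrite?: string): FollowUpNote {
  if (rewrite === undefined) {
    // Approve the AI draft as-is; the reviewer is still on record.
    return { ...note, status: "approved", finalText: note.aiDraft, reviewedBy: staffId };
  }
  // Any staff rewrite supersedes the draft entirely.
  return { ...note, status: "overridden", finalText: rewrite, reviewedBy: staffId };
}
```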

Outcome

Scaling Trust from Practice to Principles

From Probes to Adoption Guardrails

Translating care insights into a trust framework for future human–AI collaboration

For users

Clarity for Patients, Preparedness for Staff

Patients: One story, clear next step

No more repeating themselves. The AI carried context across calls so each started with clarity and ended with calm about what comes next.

“I didn’t have to repeat myself… what I asked for has already been communicated. And I’m good to go.”
– Patient 2

Staff: Prepared calls, human voice

Staff began calls informed and focused. With light AI guidance, they balanced empathy and control, responding with calm and confidence.

“If an AI has already gone through the patient’s emotional state, … I can ease in and make them feel safe.”
– Staff 1

For Hiya

Adoption Guardrails Beyond Healthcare

Probes showed that quick fixes cracked trust instead of building it. I reframed the learnings as adoption guardrails: baseline conditions for trust. Similar gaps appeared in finance, utilities, and government, showing how these conditions scale beyond clinics.

Reflections

Four Ways Trust Took Shape

From framing clarity to grounding reality, each insight defines what to test next

Learnings

Designing for Trust in Complex Systems

Design leadership in ambiguity

I guided the team through 2030 envisioning and tension mapping across patient and staff journeys. It taught me to lead with frameworks so the team could see direction even when the path was still undefined.

Mapping the forces of trust

Mapping tensions across patients, staff, Hiya, and experts showed how different needs pull trust apart. Patients sought reassurance, staff needed coherence, and experts split on empathy. Seeing these pulls revealed where to design and balance.

AI tools as exploration partners

Using Vapi and ChatGPT, I turned tensions into quick probes. AI boosted speed and perspective, helping me explore wider and see where human judgment still matters most.

Grounding vision in reality

Exploring a HIPAA-lite path showed that flexibility without responsibility is fragile. Even basic identifiers required compliance, reframing regulation as credibility: the base where system trust begins.

Afterthoughts

Next Experiments to Deepen the Guardrails

If the project continued, we’d prototype key scenarios to test how each guardrail holds in real interactions.

Sentiment support in real time

Test whether intent cues and emotion trends lead to more empathetic responses than raw scores; a study sketch follows this list.

Contextual handoff from EHR data

See how much auto-generated context is enough before staff return to records.

Patient support during wait time

Explore which AI-led wait chats calm patients without extra work for staff.
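
As noted above, the first experiment reduces to a two-condition comparison. Here is a sketch of how it might be instrumented, with the measure and names as assumptions:

```typescript
// Condition A: staff see a raw emotion score. Condition B: staff see an
// intent cue plus trend. Blinded reviewers then rate response empathy.

type Condition = "rawScore" | "trendCue";

interface TrialResult {
  condition: Condition;
  empathyRating: number; // 1..7, rated by reviewers blind to condition
}

function meanEmpathy(results: TrialResult[], condition: Condition): number {
  const xs = results.filter(r => r.condition === condition).map(r => r.empathyRating);
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Hypothesis: meanEmpathy(results, "trendCue") > meanEmpathy(results, "rawScore")
```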

Conclusion

A Small Win in Hiya’s 2030 Domain Exploration

Through a series of research presentations and one alignment workshop with Hiya’s cross-functional teams and leadership, we defined how trust should evolve across patient–AI and staff–AI flows, giving Hiya’s 2030 vision a tangible frame for credible, human-centered care.