2026 (Ongoing)

Designing for calibrated trust and verification in agentic AI meeting tools

Project Type

Corporate sponsorship

Duration

January 2026 – Present

Tools

Figma, Elicit, Microsoft Copilot

Deliverables

Verifiable AI solution prototype

Team

Fourward Team: 4 UX Researchers & Designers
Microsoft XSD Team: Principal & UX Researchers

Role

Product Designer & UX Researcher

/ context

WIP. Annotated sketch for feature ideation.

Sponsored by Microsoft XSD, I'm leading research and design to understand how workers decide when to trust and when to verify AI-generated meeting outputs.

I'm designing for calibrated trust: verification that matches actual stakes, not blind reliance or constant skepticism.

12

expert interviews conducted

7

user interviews completed

20+

co-design concepts generated

2

prototype focus areas defined

/ the problem

The verification gap: AI generates → users skim → users share

AI meeting tools like Microsoft Teams Copilot can summarize conversations, capture action items, and draft follow-ups. But when outputs go unverified, the consequences compound quietly: summaries become official records even when inaccurate, tasks get missed that were never agreed upon, and leaders make strategic calls on incomplete intel.

Microsoft Teams Copilot generates AI summaries after users' meetings. These artifacts often go unverified, or are only lightly skimmed.

Users exhibit blind trust in Microsoft-provided tools.

"No worries! It's CoPilot. It's probably gotten the summary right."

We found that workers experience both overtrust and undertrust of AI tools, particularly in video-conferencing workflows. Undertrust leads to under-utilization, which hurts Microsoft's tool-adoption metrics; overtrust leads to misaligned organizational memory, which is an organizational risk.

This leads to our project goals:

👥 for users

Make verification feel natural and low-effort, matched to actual meeting stakes

🎨 for product

Surface uncertainty signals so users know when to look closely, not just how

🪟 for Microsoft

Address the trust calibration gap that limits enterprise agentic AI adoption

This is an ongoing project! I would love to provide more details in a private conversation—please feel free to reach out to hankhg12@uw.edu ☺️

/ thank you for stopping by!

Be in touch! I promise I won't bite!

© 2026 by yours truly
