2026 (Ongoing)

Designing for calibrated trust and verification in agentic AI meeting tools

Project Type

Corporate sponsorship

Duration

January 2026 – Present

Tools

Figma, Elicit, Microsoft Copilot

Deliverables

Verifiable AI solution prototype

Team

Fourward Team: 4 UX Researchers & Designers
Microsoft XSD Team: Principal & UX Researchers

Role

UX Designer & Researcher

/ context

Agentic AI is entering the workplace — but trust hasn't caught up.

Objective

How might we design AI tools and experiences that help humans "trust but verify" by making uncertainty, risk, and evidence transparent?

Scope

Human oversight and verification in Agentic AI

This is an ongoing project! I would love to talk more about it in a private conversation with you 🫶🏻

/ thank you for stopping by!

Be in touch! I promise I won't bite!

© 2026 by yours truly
