2026 (Ongoing)

Designing for calibrated trust and verifiable Agentic AI for speculative use cases

Project Type

Corporate sponsorship

Duration

Ongoing

Tools

Figma, Elicit, Microsoft Copilot

Deliverables

Verifiable AI solution prototype

Team

Fourward Team: 4 UX Researchers & Designers
Microsoft XSD Team: Principal & UX Researchers

Role

Lead UX Designer
UX Researcher

/ Project Overview

Early studies show that humans struggle to verify the outputs of AI systems.

Objective

How might we design AI tools and experiences that help humans “trust but verify” by making uncertainty, risk, and evidence transparent?

Scope

Human oversight and verification in Agentic AI

This is an ongoing project! I would love to talk more about it in a private conversation with you 🫶🏻

/ thank you for stopping by!

Get in touch! I promise I won't bite!

© 2025 by yours truly
