
Workshops
Strategy & Biz Day
Description
Your AI feature works as intended. It also just recommended something harmful to a vulnerable user, automated a decision nobody agreed to, or reinforced a bias nobody checked for.
These are not edge cases.
They are predictable consequences that teams consistently fail to anticipate because they lack a structured way to think about them. Ethics checklists do not survive contact with roadmaps.
This workshop replaces them with something that does. Participants bring a real AI flow from their product and work through a practical process to:
- map where harm can emerge across users, agents, and systems
- identify failure points before launch
- encode guardrails and escalation paths directly into the workflow
- build a monitoring approach for after deployment
You leave with a working canvas you can bring into your next cross-functional review on Monday morning.
Who is it for?
Product designers and researchers working on AI features who want a practical method for anticipating harm, not just discussing it. Especially relevant for teams shipping AI that touches sensitive contexts, automated decisions, or vulnerable user groups.
Top 3 Learnings
Assessment
Facilitated by
