===== Of Guilds and Guardrails: Scaling Responsible AI Through Communities + Architecture =====

  * **Speakers**: Ana Gosseen, Kerri‑Leigh Grady, Natasha Brown‑Butler
  * **Room**: HC 203
  * **Time**: Sun 11:20 am – 11:50 am
  * **Format**: Lecture (30 min + Q&A)
  * **Difficulty**: Introductory / Some experience required
  * **Track**: AI / ML
  * **Presenter Location**: In-person
  * **Experience**: Umpteenth time speaking

==== Description ====

Responsible AI isn’t something you bolt on at the end — it’s a craft. This talk reframes responsible AI as a discipline shaped by **communities, shared standards, and engineering guardrails**, rather than top‑down policy mandates.

Drawing inspiration from the Linux Foundation’s Responsible AI Pathways, the speakers explore how the concept of **guilds** — communities built around mastery, accountability, and shared practice — maps naturally onto responsible AI work. They show how open‑systems principles such as transparency, reproducibility, least privilege, observability, explicit ownership, and durable documentation create AI systems that are trustworthy because they are *well made*.

Through practical examples, the session demonstrates how these guardrails help teams navigate challenges such as opacity, unclear accountability, and rapidly evolving capabilities. The result is not slower innovation, but AI systems that remain understandable, auditable, and aligned with organizational values.

**Target Audience:**

  * Anyone
  * Those who love the Renaissance Faire