AI Safety Evaluations for Human Flourishing
Designing Real-World Governance
The Imperial Hotel, Janpath Ln, Janpath, Connaught Place, New Delhi.
Overview
India’s convening marks the fourth iteration of the global AI summits. Critical questions persist: Can voluntary and soft governance mechanisms ensure that rapidly advancing AI systems serve human well-being equitably across regions? How do we translate global frameworks into locally resonant yet scalable practice? And who defines the standards of safety, trust, and flourishing that frontier AI should uphold?
AI Safety Evaluations for Human Flourishing, a private breakfast dialogue co-hosted by AI Safety Connect and Humane Intelligence PBC, with support from the Bill and Melinda Gates Foundation, brings together researchers, policymakers, and builders on the margins of the India AI Impact Summit to explore practical, equitable methodologies for evaluating AI systems.
The conversation builds on two complementary initiatives emerging from the Summit’s Safe & Trusted AI Working Group:
The Expert Engagement Group (EEG) on Frontier AI Model Usage and Voluntary Commitments, which analyzes where current approaches succeed and fall short in domains like multilingual evaluation, children’s safety, and incident monitoring; and
The newly launched Global South AI Safety Research Network, led by Digital Futures Lab and the Centre for Responsible AI - IIT Madras, which aims to embed frontier AI evaluation and accountability in diverse local contexts.
Together, these initiatives explore how voluntary commitments and research networks can reinforce one another, linking the ethical intent of frontier AI governance with the empirical rigor of human impact measurement. Participants will surface implementation challenges at both technical and institutional levels and co-develop practical pathways for ensuring that AI safety evaluation frameworks advance not just system reliability, but human flourishing.
Time: 8:00am to 10:00am
Location: The Imperial Hotel, Janpath Ln, Janpath, Connaught Place, New Delhi, Delhi 110001
Agenda
7:30 - 8:00 am
Registration (30 mins)

8:00 - 8:10 am
WELCOME & OPENING REMARKS
Mr. Nicolas Miailhe
Dr. Rumman Chowdhury
AI Safety Connect and invited opening speakers will open the event, framing the need to discuss safe and trusted AI through risk, security, and technical contexts.

8:10 - 8:20 am
SPECIAL ADDRESS (10 mins)
H.E. Mr. Philip Thigo
Mr. Stephen Clare
A special address for scene setting ahead of discussions (3 mins each).

8:20 - 8:25 am
Transition Time (5 mins)

8:25 - 8:32 am
FRAMING PROVOCATION #1 (5 mins + 2 mins transition)
Ms. Kalika Bali
Kalika Bali, Chair of the Expert Engagement Group on Frontier AI Model Usage/Voluntary Commitments, will present the EEG's core findings on the case for, and the constraints facing, safety commitments.

8:32 - 8:52 am
Group Discussion (20 mins)

8:52 - 8:53 am
Transition Time (1 min)

8:53 - 9:00 am
FRAMING PROVOCATION #2 (5 mins + 2 mins transition)
Ms. Ashta Kapoor
Ashta Kapoor, Co-Founder of the Aapti Institute, will discuss high-risk use cases where safety commitments could prevent harms, focusing on multilingual and multicultural safety evaluation, children's online safety, and incident response mechanisms.

9:00 - 9:20 am
Group Discussion (20 mins)

9:20 - 9:21 am
Transition Time (1 min)

9:21 - 9:28 am
FRAMING PROVOCATION #3 (5 mins + 2 mins transition)
Dr. Urvashi Aneja
Urvashi Aneja, Founder of the Digital Futures Lab, will reflect on what credible pathways forward could look like for industry, government, and civil society stakeholders through both Indian and global lenses.

9:28 - 9:48 am
Group Discussion (20 mins)

9:48 - 9:50 am
Transition Time (2 mins)

9:50 - 10:00 am
Closing Remarks (10 mins)
The breakfast will conclude with a brief closing reflection.