AI Safety Connect Day | Wednesday 18 February

The Imperial Hotel, New Delhi

Overview

Our flagship event co-hosted with IASEAI brings together approximately 250 policymakers, researchers, and industry leaders for a full day of programming. The event includes panels, lightning talks, and workshops focused on practical coordination mechanisms. Sessions will address multilateral frameworks, technology-enabled governance, and bridging divides between sectors and regions. This is AISC's signature convening during the India Summit.

Time: Main Programme 9:30am - 6:55pm

Dinner & Reception: 6:55pm - 11:30pm

Location: The Imperial Hotel, New Delhi

Capacity: Approximately 250 attendees

Co-hosts: AI Safety Connect and International Association for Safe & Ethical AI (IASEAI)

Agenda

MORNING SESSIONS

  • Welcome & Opening Remarks: Why India, Why Now? - Framing the need to discuss safe and trusted AI through risk, security, and technical contexts

  • Fireside Chat: Frontier AI: Boom, Bust, or Backlash? - Discussing how the global community can responsibly steer AI for societal wellbeing

  • Panel: Setting the Global AI Safety Risk Agenda - Investigating key pressing safety risks and global priority areas for the multistakeholder AI community

  • Panel: The Last Question: Will We Lose Control of Advanced AI? - Exploring avenues to avoid loss of control of advanced AI systems through technical governance

8:30 - 9:30 am

Registration (60 mins)

SESSION ONE

Mr. Nicolas Miailhe

Mr. Cyrus Hodes

Dr. Eileen Donahoe

9:30 - 9:50 am

WELCOME & OPENING REMARKS

AI Safety Connect and invited opening speakers will open the event, framing the need to discuss safe and trusted AI through risk, security, and technical contexts.

SESSION TWO

SPEAKERS

Prof. Yoshua Bengio

Mr. Jaan Tallinn

Dr. Carina Prunkl

9:55 - 10:40 am

FIRESIDE CHAT ON FRONTIER AI: BOOM, BUST, OR BACKLASH?

(30 mins discussion + 10 mins Q&A)

As AI adoption accelerates, economic pundits wonder whether frontier AI will bring forth a new technological boom or bust. What has been more unprecedented is the growing backlash: calls to strip AI away from ‘agentic’ tasks and to replace rapid development with reasoned development.

In conversation with Yoshua Bengio, a forefather of AI safety, this fireside chat will discuss whether and how the global community can responsibly steer AI toward purposeful and meaningful societal wellbeing.

10:40 - 11:00 am

Coffee Break

11:00 - 11:10 am

Film Screening

The AI Doc: Or How I Became an Apocaloptimist

SESSION EE1 (PARALLEL SESSION)

Dr. Chloé Touzet

Mr. Bruno Galizzi

11:05 - 11:55 am

Lightning Talk: “From Quantitative Risk Modeling to Safe-by-Design AI Research: Ensuring AI Technologies are Safe”

SESSION THREE

SPEAKERS

Ms. Stephanie Ifayemi

Dr. Gabriela Ramos

Baroness Joanna Shields

Ms. Karine Perset

Mr. Stephen Clare

MODERATOR

Dr. Renata Dawn

11:10 - 11:55 am

PANEL: SETTING THE GLOBAL AI SAFETY RISK AGENDA

(35 mins discussion + 10 mins Q&A)

‘Safety’ serves as an umbrella term for the myriad concerns AI systems raise, from online wellbeing to quelling disinformation to protecting physical critical infrastructure and supply chains. Although these risks tend to be global in scale, their research, assessment, and mitigation strategies remain national or local at present.

This panel investigates the most pressing safety risks, considers which should be treated as global priorities, and asks how the global, multistakeholder AI community could improve understanding and mitigation of these risks.

AFTERNOON SESSIONS

  • Fireside Chat: Reading Farmer's Almanac for AI - Exploring the interplay between AI ambition and the imperative of safety over the long term

  • Panel: Safety Efforts by Frontier AI Developers - Discussing industry approaches to safety with leading frontier companies

  • Panel: Bending the Bell Curve: How Can Middle Powers Shape Global AI Power? - Policy and governance approaches to build middle power agency and coordination

  • Panel: Global AI Safety and India - Examining AI safety and control in India's context, drawing on experiences with digital public infrastructure

  • Panel: Coordinating AI Safety Across Borders - How international coordination mechanisms can enable safe, trusted, and equitable AI across countries

  • Closing Remarks - Synthesis of key takeaways

12:00 - 12:10 pm

Special address

A special address for scene setting ahead of discussions.

Shri Amitabh Kant

SESSION FOUR

SPEAKERS

Prof. Stuart Russell

Prof. Kee-Eung Kim

MODERATOR

Dr. Adam Gleave

Prof. Pulkit Verma

Prof. Sarah Erfani

12:15 - 1:00 pm

PANEL: THE LAST QUESTION: WILL WE LOSE CONTROL OF ADVANCED AI?

As AI systems approach greater autonomy and capability, potentially crossing into a “superintelligent” classification, emerging AI governance, security, and risk management tools may not be sufficient to ensure we retain control over these systems and maintain their reliability and accountability.

This panel explores, through a technical governance lens, avenues to avoid loss of control of advanced AI systems.

SESSION EE2 (PARALLEL SESSION)

Prof. Virginia Dignum

Prof. Yannis Ioannidis

Dr. Mohan Kankanhalli

Prof. Jeanna Matthews

12:15 - 1:00 pm

ACM TechBrief Launch: “Buy versus Build an LLM: A Decision Framework for Governments”

1:00 - 2:40 pm

Lunch

SESSION EE3 (WORKSHOP)

Father Paolo Benanti

Ms. Niki Iliadis

Ms. Pauline Charazac

2:25 - 3:40 pm

Workshop: Defining and Governing Unacceptable AI Risks

This workshop builds on the Global Call for AI Red Lines, a statement signed by over 100 prominent leaders, AI experts, and Nobel laureates that calls on governments to reach international agreement in 2026 on clear and verifiable red lines for preventing universally unacceptable risks.

It brings together government representatives, multilateral organizations, and expert institutions to examine where AI red lines are already forming, to identify areas of convergence across jurisdictions, and to explore how these shared constraints can be elevated into coherent global mechanisms for AI governance.

3:40 - 4:00 pm

Coffee Break

SESSION SIX

SPEAKERS

Ms. Natasha Crampton

Ms. Nicole Foster

Mr. Owen Larter

Dr. Chris Meserole

MODERATOR

Mr. Connor Dunlop

2:55 - 3:40 pm

PANEL: SAFETY EFFORTS BY FRONTIER AI DEVELOPERS

Frontier AI developers now sit at the center of the decisions that shape global risk, from model training to deployment choices to evaluation and monitoring practices.

This panel discusses industry approaches to safety with leading frontier companies and social-impact ventures working on AI safety.

4:00 - 4:10 pm

Special address

H.E. Dick Schoof | Prime Minister of the Netherlands

A special address for scene setting ahead of discussions.

SESSION SEVEN

SPEAKERS

Ms. Denise Wong

Prof. Balaraman Ravindran

MODERATOR

Ms. Imane Bello

Mr. Amlan Mohanty

Prof. Robert Trager

Ms. Gaia Marcus

4:10 - 4:55 pm

PANEL: BENDING THE BELL CURVE: HOW CAN MIDDLE POWERS SHAPE GLOBAL AI POWER?

Middle powers have a strong incentive to demand AGI/ASI safety. They face major risks to their national security and economic capability, yet they lack the means to unilaterally influence superpowers to halt their attempts to develop ASI.

This panel explores policy and governance approaches to build middle-power agency and coordination, recalibrating AI power dynamics for sustainable, equitable, and safe AI development and deployment.

SESSION EIGHT

SPEAKERS

Dr. Chinmay Pandya

Mr. Osama Manzar

Dr. Urvashi Aneja

Prof. Hemant Bhargava

MODERATOR

Dr. Mark Nitzberg

5:00 - 5:45 pm

PANEL: GLOBAL AI SAFETY AND INDIA

AI safety debates are often shaped by frontier companies and AI superpowers, while those most exposed to system failures and harms often remain several degrees removed from agenda-setting and governance decisions.

This panel examines how questions of AI safety and control are playing out in India, a country of immense cultural, linguistic, and socio-economic diversity. It will draw on India’s experiences with digital public infrastructure, large-scale multilingual AI deployment, and ongoing efforts to advance digital access, rights, and literacy.

SESSION NINE

SPEAKERS

Ms. Anne Marie Engtoft Meldgaard

Mr. Samir Chhabra

Dr. Mariagrazia Squicciarini

MODERATOR

Dr. Claire Melamed

Mr. Frederic Werner

2:40 - 2:50 pm

Special address

A special address for scene setting ahead of discussions.

Ms. Lucilla Sioli | Director of the European AI Office, European Commission

5:50 - 6:35 pm

PANEL: COORDINATING AI SAFETY ACROSS BORDERS

No single government or company will be able to control the development and uptake of advanced AI.

This panel brings together senior leaders from the global community to examine how international coordination can build the shared language, capacity, and oversight needed to enable safe, trusted, and equitable AI across countries.

6:40 - 6:55 pm

SESSION TEN

MODERATOR

Mr. Cyrus Hodes

SPEAKERS

Mr. Robert Opp

CLOSING REMARKS

The day’s events will conclude with a brief synthesis of key takeaways.

6:55 - 11:30 pm

Dinner & Reception

SESSION HR1

1:00 - 2:05 pm

Demonstration Fair: Showcase of AI Safety Tools & Solutions

Mr. Cyrus Hodes

1:00 - 1:07 pm

Dr. Rumman Chowdhury | CEO and Founder, Humane Intelligence

Humane Intelligence delivers an end-to-end evaluation workbench for GenAI, offering modular, human-in-the-loop and automated testing to help organizations move from months-long evaluation cycles to rapid, scalable deployment.

1:10 - 1:17 pm

Dr. Jason Hoelscher-Obermaier | Director of Research, Apart Research

Apart Research will demo a standout project from its recent AI Manipulation Hackathon, which brought together 500+ participants to build tools that measure, detect, and defend against AI manipulation, producing 65+ open-source projects across benchmarks, detection systems, real-world monitoring tools, and novel mitigations.

1:20 - 1:27 pm

Mr. Chris Pease | President and CEO, Foundation for Agentic Networks, AgentiCorp

Physical toys have safety standards; AI toys need them, too. FAN presents a demo of Project NANDA, an agentic platform built to simulate interactions between children and LLM-based toys. Our early results reveal urgent vulnerabilities in how these "smart toys" handle sensitive emotional inputs. Join us to see how we are using simulated agents to proactively protect the psychological health of children in the age of AI.

1:30 - 1:37 pm

Mr. Abhishek Venkateswaran | National Project Officer, UNESCO India

UNESCO, in partnership with LG AI Research, is developing a global MOOC on the Ethics of AI, to be delivered on Coursera. The course aims to help technology professionals treat AI ethics as a practical design choice rather than solely a compliance issue, with capacity-building workshops to follow the launch in Seoul.

1:40 - 1:47 pm

Ms. Rajni Singh | Big Data & AI Senior Manager, Accenture

This demo will walk through how to identify and mitigate GenAI-specific threat surfaces, including prompt injection risks, policy bypass attempts, and unsafe retrieval patterns. It examines how structured knowledge integration through graph-backed retrieval systems improves safety, traceability, and decision reliability in real-world use cases.

1:50 - 1:57 pm

Mr. Connor Dunlop | Director of Strategy, Lucid Computing

Lucid Computing demonstrates how pre-built auditors running inside the Trusted Execution Environment of a GPU/CPU can enable secure, verifiably compliant deployment of AI agents in minutes — showing a practical path from safety policy to provable enforcement at the hardware level.

 

Event Location

The Imperial Hotel, New Delhi.