AI Safety Connect at the Paris AI Action Summit 2025

February 9, 2025 | 8:00AM to 9:00PM

Salons de l'Hôtel des Arts et Métiers, Paris

PAST EVENT

Hosted at the Salons de l'Hôtel des Arts et Métiers in Paris during the 2025 Paris AI Action Summit, the inaugural AI Safety Connect event examined technical AI safety with frontier labs and the governance of AI safety with AI Safety Institutes (AISIs) in the context of international coordination. It also laid out pathways toward a global mapping of AI safety technologies addressing a wide array of risks.

AI Safety Connect at the Paris AI Action Summit 2025 was organized with the support of the Mohammed Bin Rashid School of Government (MBRSG) and the Future of Life Institute (FLI).

Opening Remarks From Experts in AI Set the Stage

M.C. Mr. Cyrus Hodes

Founder, AI Safety Connect

Prof. Fadi Salem,

Director of Policy Research, Mohammed Bin Rashid School of Government

Prof. Max Tegmark,

Co-Founder and President, Future of Life Institute

Prof. Qian Xiao,

Vice Dean, Institute for AI International Governance, Tsinghua University

Global Risk and AI Safety Preparedness Mapping Presentation

Cyrus Hodes (AI Safety Connect), Jonathan Claybrough (Center for AI Security), and Charbel-Raphaël Segerie (Center for AI Security) announced the launch of the Global Risk and AI Safety Preparedness (GRASP) project. GRASP is a global mapping of general-purpose AI safety tools and solutions, conducted in partnership with Project SAFE of the Global Partnership on AI (GPAI) and its Tokyo Centre of Expertise.

Roundtable: AI Safety Institutes

Moderated by Yuko Harayama (GPAI), this roundtable convened Abhishek Singh (Indian Ministry of Electronics and IT), Yi Zeng (Beijing Institute of AI Safety and Governance), Wan Sie Lee (Infocomm Media Development Authority of Singapore), Juha Heikkilä (European Commission), and Agnes Delaborde (French National Laboratory of Metrology and Testing) to provide an overview of the distinct characteristics of their national AI safety institutes and highlight opportunities for cross-border coordination and cooperation.

Roundtable: Frontier Labs, AGI and the State of Safety Science

Moderated by Nicholas Dirks (New York Academy of Sciences), this roundtable brought together Chris Meserole (Frontier Model Forum), Michael Sellitto (Anthropic), Katarina Slama (ex-OpenAI), Miles Brundage (ex-OpenAI), and Roman Yampolskiy (University of Louisville) to consider expected near-term advanced AI capabilities, their risk scenarios, and the potential for their safe development and deployment.

Roundtable: Investing in AI Safety

Moderated by Seth Dobrin (1infinity Ventures), this roundtable featured Jaan Tallinn (Skype, Metaplanet), Brandon Goldman (Lionheart Ventures), Ben Cistecky (Temasek), and Nick Fitz (Juniper Ventures) to highlight investment and funding gaps and opportunities in AI safety.

Roundtable: International Cooperation in AI Safety

Moderated by Karine Perset (OECD.AI), this roundtable featured Yoshua Bengio (Mila), Stuart Russell (Center for Human-Compatible AI), Irakli Beridze (UNICRI), Xue Lan (Tsinghua University), and Dawn Song (UC Berkeley) to discuss the role of international institutions and academic dialogues in facilitating international AI safety cooperation.

Presentations of AI Safety Ventures

Representatives of various AI safety ventures, including Nicolas Miailhe (PRISM Eval), Shameek Kundu (AI Verify Foundation), Kristian Rönn (Lucid Computing), April Chin and Oliver Salzmann (Resaro.ai), Gabriel Alfour (Conjecture), and Matija Franklin (Infinitio AI), provided overviews of their products designed to advance technical AI safety solutions.

Media Corner: The Trajectory

Daniel Faggella (Emerj Artificial Intelligence Research) hosted interviews for The Trajectory, his podcast bringing together leading thinkers in policy, academia, and business to discuss the realpolitik of artificial general intelligence and the posthuman transition.

Reception & AI Demonstration Salon

An evening reception and exposition offered attendees a relaxed setting in which to engage with live demonstrations of AI safety interventions and continue the day's discussions.

Demonstrations included: 

  • Live cybersecurity demonstrations by CivAI

  • Live AI jailbreaking by PRISM Eval

  • Safer Agentic AI Guidelines by Nell Watson

  • “AI: Unexplainable, Unpredictable, Uncontrollable” by Roman Yampolskiy (University of Louisville)

  • “Life 3.0” by Max Tegmark (Future of Life Institute)

  • “The Darwinian Trap” by Kristian Rönn (Lucid Computing)

  • “Computing Power and the Governance of AI” by Haydn Belfield (Centre for the Study of Existential Risk, University of Cambridge)

  • “Reliability, Resilience and Human Factors Engineering for Trustworthy AI Systems” by Saurabh Mishra (Taiyō.AI)

  • “AI Safety Atlas” by Charbel-Raphaël Segerie (Center for AI Security)

  • “Global Consultations for France’s 2025 AI Action Summit” by Caroline Jeanmaire (The Future Society)

  • “A Narrow Path” by Adam Shimi (Control AI)

  • “The Compendium” by Eva Behrens (Conjecture)

Photo Gallery