
AI Safety Mexico

An initiative to promote AI alignment and governance in Mexico.

Main topics

Program description

We'll start by exploring what AI systems look like today (foundation models) and what they might look like in the future. We will then investigate fundamental problems in alignment, such as objective misspecification and poor goal generalization, with examples of how these can lead to unintended or even catastrophic results.

The second half of the course covers four techniques that attempt to prevent misalignment, along with their limitations, followed by approaches that aim to understand machine learning systems at a deeper level, including interpretability and foundation agents.

Finally, we will cover two topics at a high level: AI governance and careers in alignment.

Inspired by ARENA

Join practical sessions based on the ARENA program

AI Safety in México: a pilot survey in Yucatán

A pilot survey conducted in Yucatán, Mexico, aimed at capturing local concerns about AI safety.

Authors: Janeth Valdivia¹,³ | Valeria Ramírez¹ | Silvia Fernández²,³ | Ángel Tenorio³ | Alejandro Molina² | Oscar Sánchez¹

1. Universidad Politécnica de Yucatán | 2. Centro de Investigación en Ciencias de Información Geoespacial | 3. AI Safety México Project