AI Safety Training

A database of training programs, courses, conferences, and other events for AI existential safety. Book a free call with AI Safety Quest if you want to get into AI safety!

Open Applications


Programs Timeline

Exact dates may be inaccurate if a program was added before its dates were announced; refer to the program websites for reliable information.

Upcoming Table

Self-study

Facilitated courses are usually heavily oversubscribed. However, the materials are openly available and many other people want to learn, so you can form your own study group! Pick your preferred course, then introduce yourself in #study-buddies on the AI Alignment Slack to make a group, or go to AI Safety Quest and form a Quest Party.

AI Safety Fundamentals

8-week courses by BlueDot Impact covering the foundations of the field and ongoing research directions.

Alignment Forum Curated Sequences

Sequences of blog posts by researchers on the Alignment Forum covering diverse topics.

Arkose's Resources List

Curated and tested list of resources that Arkose sends to AI researchers, excellent for getting a grounding in the problem.

Reading What We Can

Collection of books and articles for a 20-day reading challenge.

CHAI Bibliography

Extensive annotated reading recommendations from the Center for Human-Compatible AI.

Key Phenomena in AI Risk

8-week reading curriculum from PIBBSS.ai that 'provides an extended introduction to some key ideas in AI risk, in particular risks from misdirected optimization or "consequentialist cognition"'.

Machine Learning-focused

Machine-learning-focused courses for people who want to work on alignment are also available, though take care not to drift into a purely capabilities-enhancing role on this track!

Intro to ML Safety

40 hours of recorded lectures, written assignments, coding assignments, and readings by the Center for AI Safety, used in the ML Safety Scholars program.

Alignment Research Engineer Accelerator

An advanced course for skilling up in ML engineering to work in technical AI alignment roles.

Deep Learning Curriculum by Jacob Hilton

An advanced curriculum for getting up to speed with some of the latest developments in deep learning, as of July 2022. It is targeted at people with a strong quantitative background who are familiar with the basics of deep learning, but may otherwise be new to the field.

The Interpretability Toolkit

A collection of tools from Alignment Jam for getting started and skilling up in interpretability. The toolkit includes Neel Nanda's Quickstart to Mechanistic Interpretability.
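
To give a flavour of the kind of first exercise these interpretability resources point towards, here is a minimal sketch (not part of the toolkit itself) using the open-source TransformerLens library; the model name and prompt are arbitrary illustrative choices.

```python
# Minimal mechanistic-interpretability starter sketch (illustrative only).
# Assumes: pip install transformer_lens torch
from transformer_lens import HookedTransformer

# Load a small pretrained model wrapped with hooks on its internal activations.
model = HookedTransformer.from_pretrained("gpt2")

prompt = "The Eiffel Tower is located in the city of"
tokens = model.to_tokens(prompt)

# Run the model and cache every intermediate activation.
logits, cache = model.run_with_cache(tokens)

# Most likely next token, according to the model.
next_token = logits[0, -1].argmax()
print("Predicted next token:", model.to_string(next_token))

# Attention pattern of layer 0: shape [batch, head, query_pos, key_pos].
attn = cache["pattern", 0]
print("Layer-0 attention pattern shape:", tuple(attn.shape))
```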

Levelling Up in AI Safety Research Engineering

A level-based guide for independently upskilling in AI safety research engineering. Aims to give concrete objectives, goals, and resources to help anyone go from zero to hero.

Resources

AI Safety Communities
Living document of online and offline communities.

AI Safety Info
Interactive crowdsourced FAQ on AI Safety.

Alignment Ecosystem Development
Volunteering opportunities for devs and organizers.


© AI Safety Support, released under CC-BY.