AI Accidents and AI Alignment



Full course description

Self-Paced · 10 Hours · Digital Badge · $250


Course Overview

The problem of getting AI models to reliably generate the kind of output we want isn’t restricted to language models; it’s a universal feature of all AI systems. And as those systems become larger and more capable, the problem is likely to get worse, not better. Ultimately, many frontier AI researchers believe that AI systems with advanced capabilities but poorly controlled behavior may pose an unprecedented risk to global safety. In this module, we’ll dive into what happens when AI goes wrong, and what the long-term outlook for AI accidents might be.


What you will learn

  • Identify the causes of AI accidents.
  • Recognize the factors that increase AI accident risk.
  • List how organizations can reduce AI accident risk.
  • Describe what the future of AI accidents might look like.
  • Explain why an increasing number of AI experts are converging on the view that AI accidents may represent a source of global catastrophic risk.



Instructors

  • Mark Beall
  • Edouard Harris
  • Jeremie Harris


Questions & Other Information 

Have questions about this program? Let us know and we'll get back to you!