Welcome to our generation's Manhattan Project.

AI systems don’t think like humans. And there’s no guarantee that, as their intelligence exceeds our own and approaches superintelligence, we’ll be able to control or align their motivations. But we must try to build safe superintelligence anyway.

Is this the next space race? The next industrial revolution? Humanity’s last invention? Maybe. Here’s what we know:

  • An OpenAI model and Google DeepMind's Gemini Deep Think achieved gold-medal scores at the 2025 International Mathematical Olympiad [1]. Previous gold medalists include the "Mozart of Mathematics," Terence Tao; the Romanian president, Nicușor Dan; and Grigori Perelman, who solved the Poincaré Conjecture and was awarded the Millennium Prize [2].

  • In July 2022, DeepMind's AlphaFold released predicted structures for nearly all catalogued proteins known to science (about 200 million in total), effectively mapping the entire “protein universe.” For over 50 years, scientists had struggled with the challenge of predicting protein structures, but AlphaFold solved the problem just four years after its invention [3].

  • The Future of Life Institute's AI Safety Index (Summer 2025) evaluated seven frontier AI companies and found that none scored higher than a C+ overall on safety measures. One reviewer stated, "I have very low confidence that dangerous capabilities are being detected in time to prevent significant harm," citing minimal investment in third-party evaluations and external audits [4].

Here's what we don't know:

  • How AI learns. Models aren’t designed; they’re grown. Many frontier models have over a trillion parameters [5]. Marginal gains in capability vastly outpace marginal gains in interpretability, and our progress in interpretability is laughably trivial compared to what’s at stake. If humanity fails here, we will be able to explain and control only a small fraction (if any) of malicious behavior in superintelligent models.

So what’s the point? We are ceding more and more influence to AI systems that we don’t understand. There’s also no guarantee that superintelligent systems, once developed, will be aligned with human values. These risks only compound when we consider military uses of AI and geopolitical tensions around the world.

Our mission is to get the brightest minds at Boston University working on mitigating catastrophic or existential risks from advanced AI [6]. We are researchers. We are policymakers. We are debaters. We are future thinkers.

Sincerely,

AI Safety & Alignment (AISA) at Boston University

August 29, 2025

Your actions matter more than you think.