We are Boston University students dedicated to researching the risks of advanced AI.

AI Safety & Alignment (AISA) at Boston University studies the full breadth of AI safety, from governance [1] and international policy to mechanistic interpretability [2].

Introductory Fellowships
Join a select cohort for Socratic-style reading group discussions exploring the core questions shaping the future of advanced AI.
Technical AI Safety Track: Learn about the challenges of understanding, evaluating, and aligning advanced AI systems, from RLHF and reward misspecification to mechanistic interpretability and scalable oversight.
AI Policy & Governance Track: Examine the governance, regulation, and political and economic forces shaping AI development.
Verras Program
Join a select cohort of researchers working on technical AI safety and alignment.

Verras fellows will have the opportunity to pursue either an independent research track or a mentorship track, participating in weekly technical discussions, collaborative sessions, and networking opportunities with the broader AI safety community.


Applications for Spring 2026 are open and reviewed on a rolling basis.
Deadline: January 30, 2026, end of day.

AI Safety & Policy Lab
Join a team of interdisciplinary students to collaborate with state legislators across the nation and translate AI safety research into real-world policy insights.
Over the course of a semester, AISAP teams work directly with policymakers to analyze emerging AI risks, explain technical concepts, and develop evidence-based recommendations.

Applications for Spring 2026 are now closed.

An Under-Discussed Problem

  • Artificial intelligence (AI) has made extraordinary progress in the last decade [3] and is unlikely to slow down [4].

  • New models are rapidly approaching, and in some fields surpassing, human-level performance [5].

  • A central challenge is ensuring alignment [6]: making sure that the motivations of increasingly intelligent and agentic models remain consistent with human intentions and values.

  • Unfortunately, work on improving AI capabilities often overshadows safety efforts: safety research receives only a small fraction of overall resources, leaving the potential risks of advanced AI inadequately addressed.

What We Do
  • We believe AI safety must be addressed through both public policy and technical alignment.

  • We foster a multidisciplinary community for researchers and policy-makers working to mitigate the risks posed by advanced AI.

  • We host introductory technical and policy fellowships to cultivate interest and draw highly motivated and talented people toward AI safety.

  • We host the Verras Program to invite talented, highly agentic students to rapidly upskill, solve open problems, and kickstart their careers in technical alignment.

  • We host the AISAP Lab to invite students from interdisciplinary backgrounds to work directly with state policymakers and translate AI safety research into evidence-based recommendations.

Our members have participated in research programs with

The group of people who are aware of AI risks is still small.

You are now one of them.

So your actions matter more than you think.

Credit: PauseAI

Get Involved