Applications for Fall 2025 Fellowships are now closed.
We are Boston University students dedicated to researching the risks of advanced AI.
AI Safety & Alignment (AISA) at Boston University studies topics across AI safety, from governance [1] and international policy to mechanistic interpretability [2].
Featured Events
Towards Superintelligence, Alignment, and Control (and pizza)
Date: Friday, Sept. 12th, 5:30 PM to 8:30 PM
Location: Register to view address!
Description: Open to all Boston University students, faculty, and staff! Learn about neural scaling laws, why we may build machines smarter than humans within our lifetimes, and why that could be a problem. No background is required. Please invite your friends and join us for dinner!


Alignment Fellowship
Join a select cohort for a Socratic-style reading group discussing AI alignment's most seminal papers. Over nine weeks, fellows will build a foundational understanding of AI alignment, its purpose, and promising approaches (such as scalable oversight, RLHF, and mechanistic interpretability). This program is intended for students with technical backgrounds in computer science, deep learning, and/or mathematics (linear algebra, multivariable calculus, probability).
Policy Fellowship
Join a small team working to understand and shape the policy landscape around advanced AI systems. Over nine weeks, participants will examine AI governance structures, discuss policy proposals, and explore topics such as global coordination, regulatory design, and the interface between technical constraints and policy. This program is intended for students who wish to understand or contribute to emerging AI policy efforts.
Verras Program
Designed for the next generation of alignment researchers, this is AISA's flagship research program and most competitive opportunity. Up to 10 scholars will be invited to rapidly up-skill and pursue AI safety careers, with access to research funding, lectures with leading alignment researchers, fortnightly dinners and exclusive socials, and inter-university mixers. The program prepares highly talented and highly agentic students for original safety research and connects them with relevant opportunities and stakeholders. Kickstart your career in AI safety and much more.
An Under-Discussed Problem
Artificial intelligence (AI) has made extraordinary progress in the last decade [3] and is unlikely to slow down [4].
New models are rapidly approaching, and in some fields surpassing, human-level performance [5].
This raises the problem of alignment [6]: ensuring that the motivations of increasingly intelligent and agentic models remain consistent with human intentions and values.
Unfortunately, work on advancing AI capabilities often overshadows safety efforts; safety research receives only a small fraction of overall resources, leaving the risks of advanced AI inadequately addressed.
What We Do
We think that AI safety needs to be addressed through public policy and technical alignment.
We foster a multidisciplinary community, open to everyone at BU, for researching risks from advanced AI.
We host introductory alignment and policy fellowships to cultivate interest and draw highly motivated and talented people toward AI Safety.
We host the Verras Program to invite highly talented and highly agentic students to rapidly up-skill, solve open problems, and kickstart their careers in technical alignment.
Get Involved
The group of people who are aware of AI risks is still small.
You are now one of them.
So your actions matter more than you think.

