We are Boston University students dedicated to understanding the risks of advanced AI
The Boston University AI Safety Association (AISA) studies the full breadth of AI safety, from governance [1] and international policy to mechanistic interpretability [2].
Learn about Membership in AISA >>
An Under-Discussed Problem
Artificial intelligence (AI) has made extraordinary progress in the last decade [3] and is unlikely to slow down [4].
New models are quickly approaching and surpassing human-level performance in a number of fields [5].
Capability alone, however, is not the core problem. The real challenge is alignment [6]: ensuring that the goals of increasingly intelligent and agentic models remain consistent with human intentions and values.
Unfortunately, work on advancing AI capabilities often overshadows safety efforts: safety research receives only a small fraction of overall resources, leaving the potential risks of advanced AI inadequately addressed.
What We Do
We believe AI safety must be addressed through both public policy and technical alignment.
We provide a space for all members of the BU community to discuss and research risks from advanced AI.
We host weekly member meetings to discuss current events and the most effective strategies for advancing AI safety through public policy and technical alignment.
We host a spring-semester fellowship to cultivate interest and draw highly motivated and talented people toward AI safety.
Get Involved


"The group of people who are aware of AI risks is still small.
You are now one of them.
So your actions matter more than you think."