We are Boston University students dedicated to understanding the risks of advanced AI

Boston University AI Safety Association (AISA) aims to study every facet of AI safety, from governance [1] and international policy to mechanistic interpretability [2].

An Under-Discussed Problem

  • Artificial intelligence (AI) has made extraordinary progress in the last decade [3] and is unlikely to slow down [4].

  • New models are rapidly approaching, and in some domains surpassing, human-level performance [5].

  • Ensuring alignment [6] remains an open problem: making sure that the goals of increasingly intelligent and agentic models stay consistent with human intentions and values.

  • Unfortunately, work on AI capabilities overshadows safety efforts: safety research receives only a small fraction of overall resources, leaving the potential risks of advanced AI inadequately addressed.

What We Do

  • We think that AI safety needs to be addressed through public policy and technical alignment.

  • We foster a multidisciplinary community, open to everyone at BU, for discussing and researching risks from advanced AI.

  • We host weekly member meetings to discuss current events and the most effective strategies for advancing AI safety through public policy and technical alignment.

  • We host introductory alignment and policy fellowships to cultivate interest and draw highly motivated, talented people toward AI safety.

Alignment Fellowship

Join a select cohort for a Socratic-style reading group discussing AI alignment's most seminal papers. Over nine weeks, fellows will build a foundational understanding of AI alignment, its purpose, and promising approaches such as scalable oversight, RLHF, and mechanistic interpretability. This program is intended for students with technical backgrounds in computer science, deep learning, and/or mathematics (linear algebra, multivariable calculus, probability).

Policy Fellowship

Join a small team working to understand and shape the policy landscape around advanced AI systems. Over nine weeks, participants will examine AI governance structures, discuss policy proposals, and explore topics such as global coordination, regulatory design, and the interface between technical constraints and policy. This program is intended for students who wish to understand or contribute to emerging AI policy efforts.

Membership

Join a tight-knit community of students learning about AI safety through biweekly dinners, lightning talks, and informal discussions. Connect with peers, mentors, and researchers across universities through inter-university mixers, Slack, and guest lectures. Develop applied skills at workshops, find funding for your research, and sprint at our AI alignment hackathon. Find guidance to kickstart your career in AI safety, and so much more.

Get Involved

The group of people who are aware of AI risks is still small.

You are now one of them.

So your actions matter more than you think.

Credit: PauseAI