AI systems continue to become more capable, more integrated into critical infrastructure, and more influential in shaping how we interact with the world. We believe that within approximately ten years, the trajectory of intelligence and humanity will be profoundly shaped by advances in AI.
While these advances hold immense promise for accelerating scientific discovery, improving global coordination, and solving complex problems, they also carry the risk of catastrophic consequences if AI systems are not aligned with human values and robustly designed for safety.
If AI systems surpass human capabilities across many, if not all, domains, what responsibilities do we have as researchers today to safeguard human agency in the future? If AI is poised to transform civilization, what must developers, policymakers, and researchers understand before making irreversible decisions? How can we foster a culture—across research, policy, and industry—that prioritizes safety and accountability?
AISA works to answer these questions and to prepare those entering the field to ask even better ones, ensuring that the next generation of researchers, policymakers, and engineers is equipped to shape a safe and responsible future for advanced AI.