Superalignment Fast Grants
We’re launching $10M in grants to support technical research towards the alignment and safety of superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, and more.
Excited to have made this happen, and very grateful to Eric Schmidt for the support.
Read more and apply by February 18th!
RLHF works great for today's models. But aligning future superhuman models will present fundamentally new challenges.

We need new approaches + scientific understanding.

New researchers can make enormous contributions—and we want to fund you!

Apply by Feb 18! https://t.co/4viPfzANOA

— Leopold Aschenbrenner (@leopoldasch) December 14, 2023