SITUATIONAL AWARENESS: The Decade Ahead
Virtually nobody is pricing in what's coming in AI. I wrote an essay series on the AGI strategic picture: from the trendlines in deep learning and counting the OOMs, to the international situation and The Project.
Dwarkesh podcast on SITUATIONAL AWARENESS
My 4.5-hour conversation with Dwarkesh. I had a blast!
Weak-to-strong generalization
A new research direction for superalignment: can we leverage the generalization properties of deep learning to control strong models with weak supervisors?
Superalignment Fast Grants
We’re launching $10M in grants to support technical research towards the alignment and safety of superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, and more.
Response to Tyler Cowen on AI risk
AGI will effectively be the most powerful weapon man has ever created. Neither “lockdown forever” nor “let ‘er rip” is a productive response; we can chart a smarter path.
Want to win the AGI race? Solve alignment.
Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$, and key to beating China.
Nobody’s on the ball on AGI alignment
Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it’s a solvable problem, if we get our act together.)
Burkean Longtermism
People will not look forward to posterity, who never look backward to their ancestors.
My Favorite Chad Jones Papers
Some of the very best, and most beautiful, economic theory on long-run growth.
What I've Been Reading (June 2021)
Religion, faith and the future, level vs. growth effects, the Cuban Missile Crisis, science fiction, and more.