- Appendices to [AI Tools for Existential Security](/research/ai-tools-for-existential-security). Rapid AI progress is the greatest driver of existential risk in the world today. But — if handled correctly — it could also empower humanity to face these challenges. | Forethought
- The AI regulator’s toolbox: A list of concrete AI governance practices | adamjones.me
- We’re experimenting with publishing more of our internal thoughts publicly, so this piece may be less polished than our normal blog articles. In running AI Safety Fundamentals’ AI alignment and AI governance courses, we often have difficulty finding resources that hit our learning objectives well. Where we can find resources, they’re often not focused on what we want, or are hard for […] | BlueDot Impact
- AI could bring significant rewards to its creators. However, the average person seems to have wildly inaccurate intuitions about the scale of these rewards. By exploring some conservative estimates of the potential rewards AI companies could expect to see from the automation of human labour, this article tries to convey a grounded sense of ‘woah, this could […] | BlueDot Impact
- How might AI-enabled oligarchies arise? | adamjones.me
- This article explains key concepts that come up in the context of AI alignment. These terms are only attempts at gesturing at the underlying ideas, and the ideas are what is important. There is no strict consensus on which name should correspond to which idea, and different people use the terms differently.[1] This article explains […] | BlueDot Impact