The second AI 2027 timelines model relies primarily on insufficiently evidenced forecasts.| Reflective altruism
The AI 2027 report relies on two models of AI timelines. The first timelines model largely bakes hyperbolic growth into the model structure.| Reflective altruism
This post introduces the AI 2027 report.| Reflective altruism
A leading power-seeking theorem due to Benson-Tilsen and Soares does not ground the needed form of instrumental convergence.| Reflective altruism
Power-seeking theorems aim to formally demonstrate that artificial agents are likely to seek power in problematic ways. I argue that leading power-seeking theorems do not succeed.| Reflective altruism
Many longtermists think that existential risk mitigation escapes the scope-limiting factors. To what extent is this true?| Reflective altruism
Maarten Boudry and Simon Friederich argue that natural selection may not produce selfish artificial systems.| Reflective altruism
Whether to push for an AI pause is a hotly debated question. This post contains some of my thoughts on the issue of an AI pause and the discourse that surrounds it.| Magnus Vinding
This post continues my investigation of biorisk from LLMs by looking at a recent red-teaming study from the RAND Corporation.| Reflective altruism