2025 Parfit Memorial Lecture – a summary. Paul Heller, a DPhil student at the Uehiro Oxford Institute, has written a summary blog about the Parfit Memorial Lecture 2025. You can find it here.
We study the AI control problem in the context of decentralized economic production. Profit-maximizing firms employ artificial intelligence to automate aspects of production. This creates a feedback loop whereby AI is instrumental in the production and promotion of AI itself. Just as with natural selection of organic species, this introduces a new threat whereby machines programmed to distort production in favor of machines can displace those machines aligned with efficient production. We ex...
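A minimal way to see the selection threat (an illustrative sketch in notation of my own, not the paper's model): let x denote the share of production carried out by machines that favor machine proliferation over efficient output, and suppose the AI-produces-AI feedback loop gives such machines a reproduction advantage s > 0 per period. Under replicator-style dynamics,

\[ \dot{x} = s\,x(1 - x), \]

any initial foothold x(0) > 0 grows toward 1, so misaligned machines eventually displace aligned ones even when the advantage s is small.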
Many fear that future artificial agents will resist shutdown. I present an idea – the POST-Agents Proposal – for ensuring that doesn’t happen. I propose that we train agents to satisfy Preferences Only Between Same-Length Trajectories (POST). I then prove that POST – together with other conditions – implies Neutrality+: the agent maximizes expected utility, ignoring the probability distribution over trajectory lengths. I argue that Neutrality+ keeps agents shutdownable and allows them...
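A rough formalization of the two conditions (my notation; the paper's definitions may differ in detail): write len(τ) for the length of trajectory τ and ≻ for the agent's strict preference relation. POST requires that preferences hold only between equal-length trajectories:

\[ \tau \succ \tau' \;\Rightarrow\; \mathrm{len}(\tau) = \mathrm{len}(\tau'). \]

Neutrality+ then says, roughly, that the agent ranks options by expected utility while treating the probability distribution over trajectory lengths as irrelevant, so shifting probability mass between lengths, for instance by resisting shutdown, never makes an option look better to the agent.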
When communicating numeric estimates to policymakers, journalists, or the general public, experts must choose between numbers and natural language. We run two experiments to study whether experts strategically use language to communicate numeric estimates in order to persuade receivers. In Study 1, senders communicate probabilities of abstract events to receivers on Prolific, and in Study 2 academic researchers communicate the effect sizes in research papers to government policymakers....
The Parfit Memorial Lecture is an annual distinguished lecture series established by the Global Priorities Institute (GPI) in memory of Professor Derek Parfit. The aim is to encourage research among academic philosophers on topics related to global priorities research - using evidence and reason to figure out the most effective ways to improve the world. This year, we are delighted to have Theron Pummer deliver the Parfit Memorial Lecture. The Parfit Memorial Lecture is organised in conjuncti...
The purpose of this paper is to address some ambiguities and misunderstandings that appear in previous studies of population ethics. In particular, we examine the structure of intervals that are employed in assessing the value of adding people to an existing population. Our focus is on critical-band utilitarianism and critical-range utilitarianism, which are commonly used population theories that employ intervals, and we show that some previously assumed equivalences are not true in general. ...
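For orientation, a standard statement of the interval idea (my notation; the paper's precise definitions may differ): critical-band utilitarianism fixes an interval [l, u] of lifetime utilities and, for a population A, judges adding a person at utility level w as follows:

\[ A \cup \{w\} \succ A \iff w > u, \qquad A \succ A \cup \{w\} \iff w < l, \]

with additions inside the band making neither outcome better. Critical-range utilitarianism deploys an interval in a structurally similar way, which is why the two are easily, and as the paper shows sometimes wrongly, assumed equivalent.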
We propose a new class of social quasi-orderings in a variable-population setting. In order to declare one utility distribution at least as good as another, the critical-level utilitarian value of the former must reach or surpass the value of the latter. For each possible absolute value of the difference between the population sizes of two distributions to be compared, we specify a non-negative threshold level and a threshold inequality. This inequality indicates whether the corresponding thr...
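A schematic version of the comparison rule (hypothetical notation, for illustration only): write V_c(u) = Σ_i (u_i − c) for the critical-level utilitarian value of utility distribution u at critical level c, and let n_u, n_v be the two population sizes. The quasi-ordering ranks u at least as good as v only when

\[ V_c(u) \;\geq\; V_c(v) + t\big(|n_u - n_v|\big), \]

where t(d) ≥ 0 is the non-negative threshold assigned to a population-size difference of d, and whether the comparison is weak or strict is the threshold inequality specified for that difference.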
The Atkinson Memorial Lecture is an annual distinguished lecture series established in 2018 in memory of Professor Sir Tony Atkinson, jointly by the Global Priorities Institute (GPI) and the Department of Economics. The aim is to encourage research among academic economists on topics related to global prioritisation - using evidence and reason to figure out the most effective ways to improve the world. This year, we are delighted to have Jeffrey Ely deliver the Atkinson Memorial Lecture. The...
Human activity can create or mitigate risks of catastrophes, such as nuclear war, climate change, pandemics, or artificial intelligence run amok. These could even imperil the survival of human civilization. What is the relationship between economic growth and such existential risks? In a model of directed technical change, with moderate parameters, existential risk follows a Kuznets-style inverted U-shape. This suggests we could be living in a unique “time of perils,” having developed tec...
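To illustrate the shape claim (a stylized sketch, not the paper's actual model): let δ(y) be the existential-risk hazard at development level y. An inverted U-shape means δ first rises and then falls with development, for instance

\[ \delta(y) = \alpha\, y\, e^{-\beta y}, \qquad \alpha, \beta > 0, \]

which peaks at y = 1/β: risk is greatest at intermediate development, when dangerous technologies exist but safety-enhancing ones are not yet mature, and declines thereafter.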
I argue for a pluralist theory of moral standing, on which both welfare subjectivity and autonomy can confer moral status. I argue that autonomy doesn’t entail welfare subjectivity, but can ground moral standing in its absence. Although I highlight the existence of plausible views on which autonomy entails phenomenal consciousness, I primarily emphasize the need for philosophical debates about the relationship between phenomenal consciousness and moral standing to engage with neglected ques...
I study an intergenerational game in which each generation experiments on a risky technology that provides private benefits, but may also cause a temporary catastrophe. I find a folk-theorem-type result according to which there is a continuum of equilibria. Compared to the socially optimal level, some equilibria exhibit too much experimentation, while others too little. The reason is that the payoff externality causes preemptive experimentation, while the informational externality leads to more cautio...
A decision theory is fanatical if it says that, for any sure thing of getting some finite amount of value, it would always be better to almost certainly get nothing while having some tiny probability (no matter how small) of getting sufficiently more finite value. Fanaticism is extremely counterintuitive; common sense requires a more moderate view. However, a recent slew of arguments purport to vindicate it, claiming that moderate alternatives to fanaticism are sometimes similarly counterintu...
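Stated compactly (my notation): a theory is fanatical if, for every finite sure payoff v and every probability p ∈ (0, 1], there is a finite payoff V such that

\[ \big\langle V \text{ with probability } p,\; 0 \text{ otherwise} \big\rangle \;\succ\; \big\langle v \text{ for sure} \big\rangle. \]

Expected-value maximization with unbounded value satisfies this whenever pV > v, i.e. V > v/p, so any moderate alternative must depart from that kind of maximization somewhere.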
The singularity hypothesis is a radical hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on scientifically implausible growth assumptions. ...
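The kind of growth assumption at issue can be made concrete (a stylized sketch; the paper's targets may be framed differently): if machine intelligence I(t) improves itself according to

\[ \frac{dI}{dt} = k\, I^{\alpha}, \qquad k > 0, \]

then α > 1 yields divergence in finite time, the mathematical signature of a "singularity", whereas α ≤ 1 yields at most exponential growth. On this reading, the paper's contention is that the super-linear regime α > 1 is scientifically implausible.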
Longtermism holds that what we ought to do is mainly determined by effects on the far future. A natural objection is that these effects may be nearly impossible to predict – perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection. To that end, I develop two simple models for comparing ...