Lately I’ve been thinking about AI ethics and the norms we should want the field to adopt. It’s fairly common for AI ethicists to focus on harmful consequences of AI systems. While this is useful, we shouldn’t conflate arguments that AI systems have harmful consequences with arguments about what we should do. Arguments about what we should do have to consider far more factors than arguments focused solely on harmful consequences.
People price gouge when they buy goods during an emergency in order to re-sell them for a higher price. Why does price gouging feel wrong to us? In this post I consider a couple of possible reasons and argue that price gouging feels wrong because when people price gouge others, only two kinds of people can buy a scarce good: the rich and the desperate. So it makes prior inequalities between people more salient. I call this “shooting the messenger of inequality” and argue that doing this is...
Sometimes information that makes a prediction more accurate can make that prediction feel less fair. In this post, I explore some possible causal principles that could underlie this kind of intuition, but argue that these principles are inconsistent with our intuitions in other cases. I then argue that our intuitions may reflect a desire to move towards more “predictive equality” in order to mitigate some of the negative social effects that come from making predictions based on properties...
In this post I argue that attempts to reduce bias in AI decision-making face the problem of practical locality—we are limited in what we can do because the actions available to us depend on the society we find ourselves in—and the problem of epistemic locality—we are limited in what we can do because ethical views evolve over time and vary across regions. Both problems have consequences for work on AI bias, and the epistemic locality problem highlights the important links between AI bias...
Something is “robustly tolerable” if it performs adequately under a wide range of circumstances, including unexpectedly bad circumstances. In this post, I argue that when the costs of failure are high, it’s better for something to be robustly tolerable even if this means taking a hit on performance or agility.
Embracing the kind of aggressive curiosity we see in sharks seems to be a good way of getting better at arguing, but it can have a chilling effect on discourse and friendships. In this post, I explain what I mean by shark curiosity and how we can strike the right balance between nurturing and testing new ideas.
We sometimes assume that seeing someone fail implies that they are doing something wrong, but I argue that the ideal rate at which our plans should fail is often quite high. I note that this has consequences in politics and ethics that are often underappreciated.
There is a longstanding debate about whether deliberation prevents us from making any predictions about actions. In this post I will argue for a weaker thesis, namely that deliberation limits our ability to predict actions.
It’s possible to agree with the content of a piece of writing but to think that the conclusions many readers might draw from it are wrong. I think it’s useful to distinguish between these two things before criticizing the writing of others.