A common anti-AI argument is that AI is inherently dangerous because the ability to accelerate AI research using AI products makes possible a scenario in which “recursive self-improvement” leads to a near-infinite amount of improvement within weeks, if not faster. Some versions of the argument say that even if there is only a 0.01% chance of this happening, the consequences would be so disastrous that it’s worth worrying about despite the low probability. I will address that version of the argument.