I describe a threat model by which AI R&D capabilities could cause harm, a specific capability threshold at which this risk becomes unacceptable, early warning signs that the threshold is approaching, and the protective measures needed to continue development safely beyond it. I recommend that labs begin measuring for these warning signs today. If the signs appear, labs should pause AI development until the protective measures are in place.