Would the development of autonomous, powerful artificial superintelligence (ASI) automatically lead to disaster for humans, to unprecedented wisdom, or to something else? In an increasingly unpredictable world, what might help make life-serving outcomes from ASI more likely?