Would the development of autonomous, powerful super-intelligent AIs (ASI) automatically lead to disaster for humans – or to unprecedented wisdom – or to something else? In an increasingly unpredictable world, what might help make life-serving outcomes from ASI more likely? | Random Communications from an Evolutionary Edge
Tom Atlee
Dear friends,