Is AI that's both superintelligent and aligned even possible? Does increased intelligence necessarily entail decreased controllability? What's the difference between "safe" and "under control"? There seems to be a fundamental tension between autonomy and control, so is it conceivable that we could create superintelligent AIs that are both autonomous enough to do things that matter and also controllable enough for us to manage them? Is general intelligence needed...