A case for why persuasive AI might pose risks somewhat distinct from the usual power-seeking alignment-failure scenarios. (www.lesswrong.com)