So I chanced upon this post on LessWrong titled “Reward is Not Enough”. It is a rebuttal to the arguments presented in the paper “Reward is Enough” by Silver et al., which hypothesises that intelligence and its associated attributes arise from reward maximisation by an agent acting in its environment. Given the popularity of the LessWrong forum and the post's pseudo-persuasive appeal, bolstered by technical jargon, it merits a deeper examination, for the sake of rationality if nothing else.