I’ve joked in the past that what really makes LLMs work is our tendency to see faces on toast, but there’s a more serious point there: much of our perception that models “understand”, “reason”, “follow instructions”, and so on is in reality projection. We’ve evolved to read intention into the behaviour … Continue reading "Clever Hans Couldn’t Really Do Arithmetic, and LLMs Don’t Really Understand."