We argue that neither of these ways of thinking is accurate, insofar as both lying and hallucinating require some concern for the truth of one's statements, whereas LLMs are simply not designed to accurately represent the way the world is, but rather to give the impression that this is what they're doing.

ChatGPT is bullshit

In my last post I looked at how LLMs lack a world model, or a set of rules that enables them to do "intelligent" things properly, such as play chess or respond coherently.