LLM failures to reason, as documented in Apple’s Illusion of Thinking paper, are really only part of a much deeper problem | garymarcus.substack.com
I’m impressed by large language models. So why can’t they get the basics of poker right? | www.natesilver.net