It's interesting to see scientifically minded people categorically reject the notion that LLMs "think". They write these models off as "fancy autocomplete" that merely regurgitates source material, and conclude that they do something fundamentally different from what humans do. I think that's wrong.