The fact that it takes skill to find good educational examples for fact-checking should tell you something
I don't know why this didn't occur to me until now.
There are plenty of ways to prompt LLMs badly, but they do solve one problem quite well
A little etymology, and a reminder that once you have the source, it might be time to jump out of AI
Sora 2 shows that "looking for clues" is useless information literacy. Can we now focus on durable understandings?
Paid LLMs have gotten quite good at computation, but AI Mode isn't designed to do that
Using an evidence-focused follow-up for definitional concerns
I don't think there's a huge audience for this yet, but there will be one soon.
An example of how an evidence-focused follow-up can make LLMs much smarter
How pushing the claim or artifact to the LLM for analysis can bring insights you weren't expecting
Three essential moves for using AI in verification and contextualization, with a bit of LLM-specific guidance
Sometimes the best thing an LLM can do for you is misunderstand you
"Why do you say that?" can be just as instructive with an LLM as it is in real conversation. Also: I make you learn Stalnaker.
An explanation of why AI can be helpful with sourcing and contextualization and why it sometimes doesn't look that way in retrospect
I've been holding back on sharing this until I could present this more fully, but let's YOLO it.
Cutting through Russian disinformation with thirteen extra words
A short demonstration
A little venting about what educational explanations are for...
A very minimal test with some fairly clear results. (The answer is yes, it's bad.)
Answer: No.
We're this far into reasoners and neither hypesters nor skeptics really understand their significance. Also: Read Toulmin.
Many "errors" in search-assisted LLMs are not errors at all, but the result of an investigation aborted too soon. Here's how to up your LLM-based verification game by going to round two.
People use misinformation to maintain beliefs more often than to change them