"Why do you say that?" can be just as instructive with an LLM as it is in real conversation. Also: I make you learn Stalnaker.| mikecaulfield.substack.com
An explanation of why AI can be helpful with sourcing and contextualization and why it sometimes doesn't look that way in retrospect
I've been holding back on sharing this until I could present it more fully, but let's YOLO it.| The End(s) of Argument
Cutting through Russian disinformation with thirteen extra words
A short demonstration
A little venting about what educational explanations are for...
A very minimal test with some fairly clear results. (The answer is yes, it's bad.)
Answer: No.
A simplification about AI with real-time search integration that will help you get more out of it
Just as there is "search" and there is a "search process," there is a process around AI-assisted investigation too, and we should teach it
A couple notes on an interesting problem and some possible educational approaches to it
Search might be a better approach for learning about the world, but the right comparisons matter
A couple tricks
Making up pretend code to solve the real problem of having to pull data a bit at a time
If you're a good teacher you'll soon be a good prompter too
I finally do a decent video, and maybe also get closer to the core value
From LLMs to the Gulf of Tonkin, distinctions matter. Also: a note about "radar clutter".
AI searches citing Grok hallucinations is not the future we want
The world's best AI fact-checking tool is now available to all as a completely free GPT
We're this far into reasoners and neither hypesters nor skeptics really understand their significance. Also: Read Toulmin.
Looking for the structure underneath the noise. The End(s) of Argument, by Mike Caulfield.
Many "errors" in search-assisted LLMs are not errors at all, but the result of an investigation aborted too soon. Here's how to up your LLM-based verification game by going to round two.
People use misinformation to maintain beliefs more often than change them