Of all the AI buzzwords out there, “model” would seem free of hyperbole compared to “superintelligent” or “the singularity.” Yet this innocuous-seeming word can mean two contradictory things, and AI companies are deliberately muddling the line between them. A recent Harvard/MIT study of AI models trained to simulate planetary orbits illustrates the contrast between what scientists consider a “world model” and what AI enthusiasts think transformers are generating.
When he died last Wednesday, artist Mel Bochner left behind a body of work that has gained in relevance over a half century defined by information, and even more so in the age of AI.
Diversity without cacophony: Group discussions on controversial subjects can open students to more viewpoints, but they can also result in the usual suspects (sometimes the most thoughtful students, but often just the loudmouths) dominating the conversation. So I was intrigued when Greg Nelson and Rotem Landesman, my collaborators on a course that examines in depth the impact of … (from “Pooling ideas for an AI ethics policy”)
When a tech reporter for the NY Times outsourced her decisions to ChatGPT for a week, she complained that “AI made me basic.” But it turns out the math behind generative AI can lead to results that are blandly average or wildly inaccurate.