What are they used for? Where can you find them? And what kind of information do they actually store?| Haystack
Researchers are teaching giant language models how to "see" to help them understand the world.| MIT Technology Review
OpenAI has extended GPT-3 with two new models that combine NLP with image recognition to give its AI a better understanding of everyday concepts.| MIT Technology Review
Tools like GPT-3 are stunningly good, but they feed on the cesspits of the internet. How can we make them safe for the public to actually use?| MIT Technology Review
No one knew how popular OpenAI’s DALL-E would be in 2022, and no one knows where its rise will leave us.| MIT Technology Review
Hundreds of scientists around the world are working together to understand one of the most powerful emerging technologies before it’s too late.| MIT Technology Review
Exclusive conversations that take us behind the scenes of a cultural phenomenon.| MIT Technology Review
Thirty years ago, Hinton’s belief in neural networks was contrarian. Now it’s hard to find anyone who disagrees, he says.| MIT Technology Review
An exclusive conversation with Ilya Sutskever on his fears for the future of AI and why they’ve made him change the focus of his life’s work.| MIT Technology Review
Get a head start with our four big bets for 2023.| MIT Technology Review
Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On, guest edited by Annette Zimmermann. Introduction Annette Zimmermann, guest editor GPT-3, a powerful, 175-billion-parameter language model developed recently by OpenAI, has been galvanizing public debate and controversy. As the MIT Technology Review puts…| Daily Nous - news for & about the philosophy profession
We got a first look at the much-anticipated big new language model from OpenAI. But this time, how it works is kept even more tightly under wraps.| MIT Technology Review