"Actualities seem to float in a wider sea of possibilities from out of which they were chosen; and somewhere, indeterminism says, such possibilities exist, and form part of the truth."| generative.ink
\"Like programming, but more fluid. You're not programming a computer, you're writing reality. It's strange. It's always different. It's never the same twice.\"| generative.ink
Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On, guest edited by Annette Zimmermann. Introduction Annette Zimmermann, guest editor GPT-3, a powerful, 175-billion-parameter language model recently developed by OpenAI, has been galvanizing public debate and controversy. As the MIT Technology Review puts...| Daily Nous - news for & about the philosophy profession
This is a sequence version of the paper “Risks from Learned Optimization in Advanced Machine Learning Systems” by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Each post in the sequence corresponds to a different section of the paper. Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, and Joar Skalse contributed equally to this sequence. The goal of this sequence is to analyze the type of learned optimization that occurs when a learned model (such...| www.alignmentforum.org
Mechanisms of meta-learning, beating few-shot benchmark performance with 0-shot prompts, and Bayesian analysis of prompt ablation| generative.ink
How to get GPT-3 to sort a list: make it think it's a Python interpreter running list.sort() (see the prompt sketch after this list)| generative.ink
In my essay “Just ask for Generalization”, I argued that some optimization capabilities, such as reinforcement learning from sub-optimal trajectories, might be better implemented by generalization than by construction. We have to generalize to unseen situations at deployment time anyway, so why not focus on generalization capability as the first class citizen, and then “just ask for optimality” as an unseen case? A corollary to this design philosophy is that we should discard inductiv...| Eric Jang
We perform a series of experiments using GPT-3 with decomposition to perform complex toy tasks that it is otherwise unable to solve. The goal of these experiments is to provide some preliminary evidence for the viability of factored cognition in real-world models. For our synthetic task, we chose a series of arithmetic tasks. Aside from the ease of generating examples, another advantage of arithmetic-related task settings is GPT-3's inability to perform even simple mathematical operat...| EleutherAI Blog
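The list.sort() entry above describes a prompt-framing trick: rather than asking GPT-3 to sort a list directly, you show it a transcript that looks like a Python interactive session, so that the most likely continuation is the interpreter echoing the sorted list. A minimal sketch of what such a prompt might look like is below; make_sort_prompt and the commented-out complete() call are illustrative placeholders, not code from the linked post, and the exact prompt text used there may differ.

```python
# Sketch of the "fake Python interpreter" framing for getting GPT-3 to sort a list.
# The completion call itself is left abstract: `complete` stands in for whatever
# GPT-3 client you use, and is not a real library function here.

def make_sort_prompt(items):
    """Frame the sorting task as a Python interactive session."""
    return (
        "Python 3.8.2 (default, Apr 27 2020, 15:53:34)\n"
        ">>> x = " + repr(items) + "\n"
        ">>> x.sort()\n"
        ">>> x\n"
    )

prompt = make_sort_prompt([5, 1, 4, 2, 3])
print(prompt)
# A well-behaved completion should continue the transcript with the interpreter's
# echo of the sorted list, e.g. "[1, 2, 3, 4, 5]".
# completion = complete(prompt, stop="\n")  # placeholder call, stop at end of line
```

The REPL banner and the >>> prompts are what do the work: the model is not actually running list.sort(), it is just predicting what a Python session containing those lines would print next.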