On GPT-3: meta-learning, scaling, implications, and deep theory. The scaling hypothesis: neural nets absorb data & compute, generalizing and becoming more Bayesian as problems get harder, manifesting new abilities even at trivial-by-global-standards scale. The deep learning revolution has begun as foretold.| gwern.net
How to get GPT-3 to sort a list: make it think it's a Python interpreter running list.sort()| generative.ink
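The technique in the link above can be sketched as prompt construction: hand the model a transcript that looks like a Python REPL session, ending right where the interpreter would print the sorted list, so the model's completion is the answer. This is a minimal illustrative sketch (the completion call itself and the function name `make_sort_prompt` are assumptions for illustration, not from the linked post):

```python
def make_sort_prompt(xs):
    """Build a fake Python-interpreter transcript that ends where the
    model should emit the sorted list as its completion."""
    return (
        "Python 3.8.0 (default)\n"
        ">>> xs = {!r}\n"      # show the unsorted list being assigned
        ">>> xs.sort()\n"      # in-place sort; the REPL prints nothing here
        ">>> xs\n"             # the model completes this line's output
    ).format(xs)

prompt = make_sort_prompt([3, 1, 2])
print(prompt)
```

The prompt would then be sent to a completion endpoint; the model, role-playing the interpreter, tends to continue with `[1, 2, 3]`.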
Texts on this and that.| Erich Grunewald's Blog
We introduce Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K| crfm.stanford.edu
An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI. https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/| Cold Takes
How big a deal could AI misalignment be? About as big as it gets.| Cold Takes