With new neural network architectures popping up every now and then, it’s hard to keep track of them all. Knowing all the abbreviations being thrown around (DCIGN, BiLSTM, DCGAN, anyone?) can be a bit overwhelming at first. So I decided to compose a cheat sheet containing many of those architectures. Most of these are neural networks, some are completely […]| The Asimov Institute
Watch: MIT’s Deep Learning State of the Art lecture referencing this post. Featured in courses at Stanford, Harvard, MIT, Princeton, CMU and others. Update: This post has now become a book! Check out LLM-book.com which contains (C...| jalammar.github.io
The Open Philanthropy Project recommended a grant of $30 million ($10 million per year for 3 years) in general support to OpenAI. This grant initiates a partnership between the Open Philanthropy Project and OpenAI, in which Holden Karnofsky (Open Philanthropy’s Executive Director, "Holden" throughout this page) will join OpenAI's Board of Directors and, jointly with…| Open Philanthropy
PASTA: Process for Automating Scientific and Technological Advancement.| Cold Takes
Why would we program AI that wants to harm us? Because we might not know how to do otherwise.| Cold Takes
We, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions.| Cold Takes
What the best available forecasting methods say - and why there's no "expert field" for this topic.| Cold Takes
Hypothetical stories where the world tries, but fails, to avert a global disaster.| Cold Takes
An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI.| Cold Takes
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course?| Cold Takes
Organisations with influence, financial power, and advanced technology are targeted by actors seeking to steal or abuse these assets. A career in information security is a promising avenue to support high-impact organisations by protecting against these attacks, which have the potential to disrupt an organisation's mission or even increase existential risk.| 80,000 Hours
A few ways we might get very powerful AI systems to be safe.| Cold Takes
Four analogies for why "We don't see any misbehavior by this AI" isn't enough.| Cold Takes
Today's AI development methods risk training AIs to be deceptive, manipulative and ambitious. This might not be easy to fix as it comes up.| Cold Takes
The "most important century" series of blog posts argues that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar| Cold Takes
How big a deal could AI misalignment be? About as big as it gets.| Cold Takes
An outline of how I form detailed opinions on topics: by exploring hypotheses and writing about them, not by undirected reading.| Cold Takes