The following is a fictional dialogue building off of AI Alignment: Why It’s Hard, and Where to Start. (AMBER, a philanthropist interested in a more reliable Internet, and CORAL, a computer security professional, are at a conference hotel together discussing what Coral insists is a difficult and important issue: the difficulty of building “secure” […] | Machine Intelligence Research Institute
Back in May, we announced that Eliezer Yudkowsky and Nate Soares’s new book If Anyone Builds It, Everyone Dies was coming out in September. At long last, the book is here! [Cover images: US and UK editions, respectively.] IfAnyoneBuildsIt.com. Read on for info about reading groups, ways to help, and updates on coverage the book has received […] The post “If Anyone Builds It, Everyone Dies” release day! appeared first on Machine Intelligence Research Institute.
Suppose that frontier AI development is centralized to a single project under tight international controls, with all other development banned internationally. By far the likeliest outcome of this is that we all die. A centralized group of international researchers — a “CERN for AI” — can’t align superintelligence any more than decentralized organizations can. Centralization […] | Machine Intelligence Research Institute
The Machine Intelligence Research Institute’s 2015 winter fundraising drive begins today, December 1! [Fundraiser progress bar and “Donate Now” button.] The drive will run for the month of December, and will help support MIRI’s research efforts aimed at ensuring that smarter-than-human AI systems have a positive impact. MIRI’s Research Focus: The field of AI […] | intelligence.org
(Epistemic status: attempting to clear up a misunderstanding about points I have attempted to make in the past. This post is not intended as an argument for those points.) I have long said that the lion’s share of the AI alignment problem seems to me to be about pointing powerful cognition at anything at all, rather […] | Machine Intelligence Research Institute
If Anyone Builds It, Everyone Dies: As we announced last month, Eliezer and Nate have a book coming out this September: If Anyone Builds It, Everyone Dies. This is MIRI’s major attempt to warn the policy world and the general public about AI. Preorders are live now, and are exceptionally helpful. Preorder Bonus: We’re hosting […] The post MIRI Newsletter #123 appeared first on Machine Intelligence Research Institute.
We’re currently in the process of locking in advertisements for the September launch of If Anyone Builds It, Everyone Dies, and we’re interested in your ideas! If you have graphic design chops, and would like to try your hand at creating promotional material for If Anyone Builds It, Everyone Dies, we’ll be accepting submissions in […] The post IABIED: Advertisement design competition appeared first on Machine Intelligence Research Institute.
I think more people should say what they actually believe about AI dangers, loudly and often. Even (and perhaps especially) if you work in AI policy. I’ve been beating this drum for a few years now. I have a whole spiel about how your conversation-partner will react very differently if you share your concerns while […] The post A case for courage, when speaking of AI danger appeared first on Machine Intelligence Research Institute.
Nate and Eliezer’s forthcoming book has been getting a remarkably strong reception. I was under the impression that there are many people who find the extinction threat from AI credible, but that far fewer of them would be willing to say so publicly, especially by endorsing a book with an unapologetically blunt title like If […] | Machine Intelligence Research Institute
Risks from Learned Optimization in Advanced ML Systems, by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. This paper is available on arXiv, the AI Alignment Forum, and LessWrong. Abstract: We analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer—a... | Machine Intelligence Research Institute
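As a loose illustration of the abstract’s central notion (a learned model that is itself an optimizer), here is a minimal Python sketch in which the model’s forward pass runs an inner gradient-descent loop over an internal objective. The function names and the quadratic inner objective are invented for illustration and are not taken from the paper.

import numpy as np

# Toy sketch only (invented for illustration; not code from the paper).
# The learned model's forward pass is itself an optimization loop over an
# internal objective, which is what the paper calls learned optimization.

def inner_objective(x, w):
    # Internal objective the model optimizes at inference time;
    # w stands in for parameters produced by the outer (base) optimizer.
    return float(np.sum(w * (x - 1.0) ** 2))

def mesa_forward(w, steps=50, lr=0.1):
    # The model's "forward pass": gradient descent on inner_objective.
    x = np.zeros_like(w)
    for _ in range(steps):
        grad = 2.0 * w * (x - 1.0)  # d(inner_objective)/dx
        x = x - lr * grad
    return x

if __name__ == "__main__":
    learned_w = np.array([1.0, 0.5, 2.0])  # stand-in for trained parameters
    x_star = mesa_forward(learned_w)
    print(x_star, inner_objective(x_star, learned_w))  # x near [1, 1, 1], loss near 0

In a full setup, a base optimizer (e.g., SGD on a training loss) would be what selects w; here w is fixed to keep the sketch self-contained.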
What is the function of a fire alarm? One might think that the function of a fire alarm is to provide you with important evidence about a fire existing, allowing you to change your policy accordingly and exit the building. In the classic experiment by Latane and Darley in 1968, eight groups of […] | Machine Intelligence Research Institute