Updated: April 2016, May 2016, and July 2016. To inform the Open Philanthropy Project’s investigation of potential risks from advanced artificial intelligence, I (Luke Muehlhauser) conducted a short study of what we know so far about likely timelines for the development of advanced artificial intelligence (AI) capabilities. What are we trying to forecast? From […]
We have updated our thinking on this subject since this page was published. For our most current content on this topic, see this blog post. This is a writeup of a shallow investigation, a brief look at an area that we use to decide how to prioritize further research. In a nutshell: What is the problem? It […]
It seems to me that AI and machine learning research is currently on a very short list of the most dynamic, unpredictable, and potentially world-changing areas of science.1 In particular, I believe that this research may lead eventually to the development of transformative AI, which we have roughly and conceptually defined ...
One of our core values is our tolerance for philanthropic “risk.” Our overarching goal is to do as much good as we can, and as part of that, we’re open to supporting work that has a high risk of failing to accomplish its goals. We’re even open to supporting work that is more than 90% […]
Updated: September 2016. To inform the Open Philanthropy Project’s investigation of potential risks from advanced artificial intelligence, and in particular to improve our thinking about AI timelines, I (Luke Muehlhauser) conducted a short study of what we should learn from past AI forecasts and seasons of optimism and pessimism in the field. Key impressions: In addition […]