Updated: April 2016, May 2016, and July 2016. To inform the Open Philanthropy Project’s investigation of potential risks from advanced artificial intelligence, I (Luke Muehlhauser) conducted a short study of what we know so far about likely timelines for the development of advanced artificial intelligence (AI) capabilities. What are we trying to forecast? From […]
Open Philanthropy's mission is to give as effectively as we can and share our findings so that anyone can build on our work.
Open Philanthropy recommended a grant of $9,709,000 to Helen Keller International to support vitamin A supplementation work, due to its status as a GiveWell top charity. We followed the recommendation of GiveWell staff regarding how to allocate grantmaking between GiveWell top charities. Read GiveWell’s review of Helen Keller International's vitamin A supplementation program to learn…
We are excited to announce the launch of our new Abundance and Growth Fund, which will spend at least $120 million over the next three years to accelerate economic growth and boost scientific and technological progress while lowering the cost of living. We believe that scientific and technological progress have been the central drivers of […]
We have updated our thinking on this subject since this page was published. For our most current content on this topic, see this blog post. This is a writeup of a shallow investigation, a brief look at an area that we use to decide how to prioritize further research. In a nutshell: What is the problem? It […]
It seems to me that AI and machine learning research is currently on a very short list of the most dynamic, unpredictable, and potentially world-changing areas of science. In particular, I believe that this research may lead eventually to the development of transformative AI, which we have roughly and conceptually defined ...
One of our core values is our tolerance for philanthropic “risk.” Our overarching goal is to do as much good as we can, and as part of that, we’re open to supporting work that has a high risk of failing to accomplish its goals. We’re even open to supporting work that is more than 90% […]
We are interested in research and strategic work that could reduce risks and improve preparedness.
Updated: September 2016. To inform the Open Philanthropy Project’s investigation of potential risks from advanced artificial intelligence, and in particular to improve our thinking about AI timelines, I (Luke Muehlhauser) conducted a short study of what we should learn from past AI forecasts and seasons of optimism and pessimism in the field. Key impressions: In addition […]
This is Part 0 of a four-part report; see links to Part 1, Part 2, Part 3, and a folder with more materials. Abstract: In the next few decades we may develop AI that can automate ~all cognitive tasks and dramatically transform the world. By contrast, today the capabilities and impact of AI are much […]
This report evaluates the likelihood of ‘explosive growth’, meaning > 30% annual growth of gross world product (GWP), occurring by 2100. Although frontier GDP/capita growth has been constant for 150 years, over the last 10,000 years GWP growth has accelerated significantly. Endogenous growth theory, together with the empirical fact of the demographic transition, can explain […]
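To make the report's quantitative threshold concrete, here is a minimal arithmetic sketch (my own illustration, not part of the report): it computes the doubling time implied by the >30% annual GWP growth rate that the excerpt uses to define ‘explosive growth’.

import math

growth_rate = 0.30  # the ">30% annual growth of gross world product" threshold quoted above
doubling_time_years = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time at 30% annual growth: {doubling_time_years:.1f} years")  # ~2.6 years
# For comparison, recent ~2-3% frontier growth implies a doubling time of roughly 23-35 years.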
We believe it’s important for philanthropists to make deliberate, long-term commitments to causes.
Note: As of March 2025, Innovation Policy has been merged into our Abundance & Growth focus area.
The Open Philanthropy Project recommended a grant of $30 million ($10 million per year for 3 years) in general support to OpenAI. This grant initiates a partnership between the Open Philanthropy Project and OpenAI, in which Holden Karnofsky (Open Philanthropy’s Executive Director, “Holden” throughout this page) will join OpenAI’s Board of Directors and, jointly with…
About two years ago, I wrote that “it’s difficult to know which ‘intermediate goals’ [e.g. policy goals] we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI.” Much has changed since then, and in this post I give an update on 12 ideas for US policy goals.[1] […]
In principle, we try to find the best giving opportunities by comparing many possibilities. However, many of the comparisons we’d like to make hinge on very debatable, uncertain questions. For example: Some people think that animals such as chickens have essentially no moral significance compared to that of humans; others think that they should be […]