An illustration of a sequence of events where rogue replicating agents emerge and cause harm.| metr.org
Reliably controlling AI systems much smarter than we are is an unsolved technical problem. And while it is a solvable problem, things could very easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic. The old sorcerer has finally gone away! Now the spirits he controls shall…| SITUATIONAL AWARENESS
By guest contributor Akash Wasil, an AI policy researcher and incoming Security Studies Program (SSP) student. U.S. policymakers are sprinting to bolster the federal governmen…| Georgetown Security Studies Review
About a year ago, a few months after I publicly took a stand with many other peers to warn the public of the dangers related…| Yoshua Bengio
A web app to help you write an email to a politician. Convince them to Pause AI!| PauseAI
Educational resources (videos, articles, books) about AI risks and AI alignment| PauseAI
Why AI is a risk for the future of our existence, and why we need to pause development.| PauseAI
The history of technology suggests that the greatest risks come not from the tech, but from the people who control it| www.aisnakeoil.com
I have been hearing many arguments from different people regarding catastrophic AI risks. I wanted to clarify these arguments, first for myself, because I would really like to be convinced that we need not worry. Reflecting on these arguments, some of the main points in favor of taking this risk seriously can be summarized as follows: (1) many experts agree that superhuman capabilities could arise in just a few years (but it could also be decades) (2) digital technologies have advantages over...| Yoshua Bengio
In the last couple of months, we have seen a lot of people and companies sharing and open-sourcing various kinds of LLMs and datasets, which is awesome.| magazine.sebastianraschka.com