We’re introducing Llama 2, the next generation of our open source large language model.| ai.meta.com
Deploying a dangerous model poses misalignment and misuse risks. Even before dangerous models exist, deploying models on dangerous development paths can accelerate and diffuse progress toward such models.| ailabwatch.org
AI researchers already use a range of evaluation benchmarks to identify unwanted behaviours in AI systems, such as making misleading statements, biased decisions, or repeating...| Google DeepMind
In this post, we argue that AI labs should ensure that powerful AIs are controlled. That is, labs should make sure that the safety measures they appl…| www.alignmentforum.org
An update prepared for the UK AI Safety Summit. Microsoft welcomes the opportunity to share information about how we are advancing responsible artificial intelligence (AI), including by implementing voluntary commitments that we and others made at the White House convening in July.[1] Visibility into our policies and how we put them into practice helps...| Microsoft On the Issues