Automation bias is a critical issue for artificial intelligence deployment. It can cause otherwise knowledgeable users to make serious and even obvious errors. Organizational, technical, and educational leaders can mitigate these biases through training, design, and process changes. This paper explores automation bias and ways to mitigate it through three case studies: Tesla’s Autopilot incidents, aviation incidents at Boeing and Airbus, and Army and Navy air defense incidents.| Center for Security and Emerging Technology
Michael C. Horowitz, Jun 27, 2025 — Placing AI in a nuclear framework inflates expectations and distracts from practical, sector-specific governance.| AI Frontiers
In an op-ed published in Lawfare, CSET’s Lauren Kahn discusses the growing integration of artificial intelligence (AI) into military operations worldwide and the need for effective governance to avoid potential mishaps and escalation.| Center for Security and Emerging Technology
A core question in policy debates around artificial intelligence is whether federal agencies can use their existing authorities to govern AI or whether the government needs new legal powers to manage the technology. The authors argue that relying on existing authorities is the most effective approach to promoting the safe development and deployment of AI systems, at least in the near term. This report outlines a process for identifying existing legal authorities that could apply to AI and highlights...| Center for Security and Emerging Technology