In this article, we explore the growing trend of human-AI relationships, what AI actually is, the risks of anthropomorphizing it, and how we can navigate this treacherous road humanity is speeding down… safely. Six weeks after starting a relationship with a chatbot named Eliza, a Belgian man committed suicide […]
As the global race for AI dominance heats up, what are the potential risks when superpowers like the U.S. prioritize winning over AI transparency and safety? And while ongoing advances in artificial intelligence do benefit society, will unregulated innovation serve the common good or hasten humanity’s downfall? In this article, we examine the potential impact […]
Understanding the autonomy level of a system is crucial for making informed decisions about its integration, design, and operation. It allows you to determine how much human intervention is required, the necessary level of oversight, and the decision-making authority the system should have. […]
The EU AI Act makes transparency central to compliance, requiring companies to disclose key AI system information. Beyond regulation, transparency builds trust, drives innovation, and creates a competitive advantage. This article highlights why it matters, what to disclose, and how to start adopting transparency […]
The self-driving car’s ability to accurately decide which lives to save and which to sacrifice very much depends on its ability to detect each and every one of those lives to begin with. Is there a risk of self-driving cars identifying some pedestrians but not others? The short answer is yes, there is. While […]
Deploying AI in production brings challenges and trade-offs between safety and efficiency. Humans set up, tune, test, and use AI systems, providing feedback and guidance to the machines. How can we ensure that human input is reliable, ethical, and consistent? How can we balance human effort and machine autonomy? […]
Neutrality can be seen as a desirable goal for AI ethics, as it can promote fairness, justice, and impartiality. However, neutrality can also be seen as an impossible or undesirable goal for AI ethics, as it can ignore the complexity, diversity, and contextuality of human values and situations.
But what can go wrong? We’ll take a look at how people relate to AVs, assess the risks of self-driving cars through the looking-glass of ethics, and immerse ourselves in an ethical design approach as a potential solution to everything we’re about to cover. This second part of the article will also address some […]
What decision does a self-driving car take in the case of an imminent, fatal accident? What happens in the case of data poisoning, if someone hacks the system and hijacks your car? And do you trust the technology enough to buy a self-driving car in the first place? All these questions come up every single time at the […]
When asked about privacy in the digital space, the usual answer seems to be a scoff. Sometimes it is even followed by a taunting sneer with a hint of a “there is nothing we can do about it anyway” look. But why is that? A View on Privacy – The Scoff: Here are the common […]