Papers I learned from (Part 7: Essays on longtermism)| Reflective altruism
Richard Pettigrew has argued that some versions of risk-averse longtermism may recommend hastening human extinction. Nikhil Venkatesh and Kacper Kowalczyk reply to Pettigrew.| Reflective altruism
What effect does risk aversion have on the case for existential risk mitigation? Surprisingly, Richard Pettigrew argues that risk aversion may recommend working to hasten rather than avoid human extinction.| Reflective altruism
Is there anything that can be said in favor of pure temporal discounting? A recent paper by Harry Lloyd shows one way that the case might be made.| Reflective altruism
The conventional discourse on existential risks (x-risks) from AI typically focuses on abrupt, dire events caused by advanced AI systems, particularly those that might achieve or surpass human-level intelligence. These events would have consequences severe enough either to cause human extinction or to cripple human civilization irreversibly, beyond recovery. This discourse, however, often neglects the serious possibility of AI x-risks manifesting incrementally through a series of smaller yet in...| arXiv.org
For billions of years, evolution has been the driving force behind the development of life, including humans. Evolution endowed humans with high intelligence, which allowed us to become one of the most successful species on the planet. Today, humans aim to create artificial intelligence systems that surpass even our own intelligence. As artificial intelligences (AIs) evolve and eventually surpass us in all domains, how might evolution shape our relations with them? By analyzing the environment...| arXiv.org