Published on August 28, 2025 11:26 AM GMT

Elaborating on my comment here in a top-level post.

The alignment problem is usually framed as a problem of aligning moral norms: how can we teach an agent how it ought to act in a given situation so that its actions align with human values? Framed this way, the agent learns actions that produce good outcomes, where "good" is evaluated in some moral sense. In the domain of morality there is the familiar is-ought gap: there is no way to derive how things ought to be purely from facts about how they are.