Modern machine-learning models, such as neural networks, are often referred to as “black boxes” because they are so complex that even the researchers who design them can’t fully understand how they make predictions. To provide some insight, researchers use explanation methods that seek to describe individual model decisions. For example, they may highlight words in […]

The post Unpacking black-box models appeared first on MIT Schwarzman College of Computing.
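One common family of explanation methods highlights the input words that most influenced a prediction. A minimal sketch of that idea is occlusion (leave-one-out) attribution: remove each word in turn and measure how the model's score changes. The toy bag-of-words "model" below is purely illustrative, not any system from the article; all names and weights are assumptions for the sketch.

```python
# Illustrative word weights for a toy sentiment scorer (hypothetical).
WEIGHTS = {"great": 2.0, "good": 1.0, "boring": -1.5, "terrible": -2.0}

def model_score(words):
    """Toy stand-in for a black-box model: score a list of words."""
    return sum(WEIGHTS.get(w, 0.0) for w in words)

def occlusion_importance(words):
    """Importance of each word = score change when that word is removed."""
    base = model_score(words)
    return {
        w: base - model_score(words[:i] + words[i + 1:])
        for i, w in enumerate(words)
    }

sentence = "the movie was great but a bit boring".split()
importances = occlusion_importance(sentence)

# Words with the largest absolute importance are the ones an
# explanation method would highlight for this prediction.
top_word = max(importances, key=lambda w: abs(importances[w]))
```

Here removing "great" drops the score the most, so it would be highlighted as the word driving the (positive) prediction. Real explanation methods apply the same occlude-and-rescore idea, or gradient-based variants of it, to actual neural networks.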