"Explainable AI" (xAi) or "explainability" is when you design and build systems that can explain their decisions. Turns out I do this right now.| Shattered Illusion by Chris Kenst
The Worlds I See is a very good story, and now I have some thoughts to share. | Shattered Illusion by Chris Kenst
This is a blog post about the paper “Restricting the Flow: Information Bottlenecks for Attribution” by Karl Schulz, Leon Sixt, Federico Tombari, and Tim Landgraf, published at ICLR 2020. Introduction: With the current trend of applying neural networks to more and more domains, the question of the explainability of these models is receiving more attention. While more traditional machine learning approaches like decision trees and random forests incorporate some kind of interpretability based on ... | Sven Elflein