Interpretability in machine learning: a dream come true
One of the principal obstacles to applying complex machine learning algorithms in real business cases is their lack of interpretability. Indeed, machine learning algorithms are commonly seen as "black boxes", unable to explain the reasoning behind their decisions.
In this paper, the authors, from the University of Washington, introduce a new framework for interpreting the decision process of machine learning algorithms.
More specifically, the authors show how to train "simple", rule-based machine learning models to mimic a complex algorithm, and then read those rules as an explanation of its decision process.
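To make the idea concrete, here is a minimal sketch of the surrogate-model approach: a complex "black box" model is trained as usual, and then a simple, rule-based model is fit to the black box's *predictions* rather than to the true labels, so its rules approximate the black box's decision process. This is a simplified, global illustration using scikit-learn, not the authors' exact framework; the dataset, model choices, and depth limit are assumptions for the example.

```python
# Sketch: explaining a complex model via a simple, rule-based surrogate.
# Assumes scikit-learn; a simplified illustration, not the paper's framework.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# 1. Train the complex "black box" model on the real labels.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. Train a shallow, interpretable tree to imitate the black box:
#    the surrogate's targets are the black box's predictions, not y.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. Measure fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")

# 4. The surrogate's decision rules serve as a human-readable explanation.
print(export_text(surrogate))
```

The key design point is that the surrogate is judged by its *fidelity* to the black box, not by its accuracy on the original labels: a faithful surrogate yields rules that genuinely reflect how the complex model behaves.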
This framework has recently been adopted by Humanativa, allowing us to offer our clients machine learning algorithms that can explain their decisions.