KDD Papers

Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking

Gabriele Tolomei (Yahoo); Fabrizio Silvestri (Facebook); Andrew Haines (Yahoo Inc); Mounia Lalmas (Yahoo)


Machine-learned models are often described as “black boxes”. In many real-world applications, however, models may have to sacrifice predictive power in favour of human interpretability. When this is the case, feature engineering becomes a crucial albeit expensive task, requiring manual and time-consuming analysis. In addition, whereas some features are inherently static as they represent properties that are fixed (e.g., the age of an individual), others capture characteristics that could be adjusted (e.g., the daily amount of carbohydrates consumed). Nonetheless, once a model is learned from the data, each prediction it makes on new instances is irreversible, thereby treating every instance as a static point located in the chosen feature space. There are many circumstances, instead, where it is important to understand (i) why a model outputs a certain prediction on a given instance, and (ii) which adjustable features of that instance should be modified so that the model would output a different, desired prediction.
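The idea of tweaking only the adjustable features of an instance until an ensemble flips its prediction can be sketched as follows. This is an illustrative toy, not the paper's exact algorithm: the dataset, the feature semantics ("age" as static, "daily carbs" as adjustable), and the grid of candidate values are all hypothetical assumptions.

```python
# Toy sketch of actionable feature tweaking on a tree ensemble.
# Assumptions (not from the paper): synthetic data, a grid search over
# candidate values, and an L1 cost for ranking candidate tweaks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Feature 0: "age" (static, must not be tweaked);
# feature 1: "daily carbs" (adjustable).
# The label depends mainly on the adjustable feature: carbs > 250 -> positive.
X = np.column_stack([rng.uniform(20, 80, 400), rng.uniform(100, 400, 400)])
y = (X[:, 1] > 250).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def tweak(instance, adjustable, candidates, model, target=0):
    """Return a minimally changed copy of `instance` that `model`
    classifies as `target`, searching only over `adjustable` features."""
    best, best_cost = None, np.inf
    for j in adjustable:
        for v in candidates[j]:
            cand = instance.copy()
            cand[j] = v
            if model.predict(cand.reshape(1, -1))[0] == target:
                cost = abs(v - instance[j])  # L1 tweaking cost
                if cost < best_cost:
                    best, best_cost = cand, cost
    return best

x = np.array([45.0, 320.0])  # classified positive (high carbs)
x_tweaked = tweak(x, adjustable=[1],
                  candidates={1: np.linspace(100, 400, 61)}, model=clf)
print(x_tweaked)  # carbs lowered just enough to flip the prediction
```

In this sketch only feature 1 is ever modified, mirroring the static/adjustable distinction in the abstract; the returned tweak is the cheapest candidate (by absolute change) that yields the desired class.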