Explainable Matrix - Visualization for Global and Local Interpretability of Random Forest Classification Ensembles
Mário Popolin Neto, Fernando Paulovich
View presentation: 2020-10-28T14:30:00Z

Fast forward
Direct link to video on YouTube: https://youtu.be/qlthySP_mwA
Keywords
Random forest visualization, logic rules visualization, classification model interpretability, explainable artificial intelligence
Abstract
Over the past decades, classification models have proven to be essential machine learning tools given their potential and applicability in various domains. During this time, the primary goal of most researchers has been to improve quality metrics, notwithstanding how little information about models' decisions such metrics convey. Recently, this paradigm has shifted, and strategies that go beyond tables and numbers to assist in interpreting models' decisions are growing in importance. As part of this trend, visualization techniques have been extensively used to support the interpretability of classification models, with a significant focus on rule-based techniques. Despite the advances, the existing approaches present limitations in terms of visual scalability, and the visualization of large and complex models, such as the ones produced by the Random Forest (RF) technique, remains a challenge. In this paper, we propose Explainable Matrix (ExMatrix), a novel visualization method for RF interpretability that can handle models with massive quantities of rules. It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates, enabling the analysis of entire models and the auditing of classification results. ExMatrix's applicability is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models.
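To make the matrix metaphor concrete, the following is a minimal sketch of how logic rules can be extracted from a trained Random Forest and arranged as a rules-by-features matrix, with each cell holding a rule predicate as a feature interval. It uses Python and scikit-learn as an assumption; the paper does not prescribe an implementation, and helper names such as extract_rules are hypothetical, not the authors' code.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def extract_rules(decision_tree, n_features):
    # One rule per leaf: a per-feature (low, high) interval plus the
    # leaf's majority-class label.
    t = decision_tree.tree_
    rules = []

    def walk(node, intervals):
        if t.children_left[node] == -1:  # reached a leaf
            rules.append((intervals, int(np.argmax(t.value[node]))))
            return
        f, thr = t.feature[node], t.threshold[node]
        left = [iv[:] for iv in intervals]
        right = [iv[:] for iv in intervals]
        left[f][1] = min(left[f][1], thr)    # branch: x[f] <= thr
        right[f][0] = max(right[f][0], thr)  # branch: x[f] >  thr
        walk(t.children_left[node], left)
        walk(t.children_right[node], right)

    walk(0, [[-np.inf, np.inf] for _ in range(n_features)])
    return rules

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=3, max_depth=3, random_state=0).fit(X, y)

# Rows are rules (pooled from every tree), columns are features,
# cells are predicates; empty cells mean the rule leaves that feature free.
matrix = [r for est in rf.estimators_ for r in extract_rules(est, X.shape[1])]
for intervals, label in matrix[:5]:
    cells = ["" if lo == -np.inf and hi == np.inf else f"{lo:.2f} < x <= {hi:.2f}"
             for lo, hi in intervals]
    print(cells, "-> class", label)

Even this small forest yields dozens of rule rows, which hints at why visual scalability is the central challenge the paper addresses.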