AI Total: A Visualization Tool for Making Sense of Security ML Models in an Imperfect World of Production Data

Awalin Sopan, Konstantin Berlin

View presentation: 2021-10-27 15:05 UTC
Exemplar figure: AI Total landing page showing model performance.
Fast forward video on YouTube: https://youtu.be/_piS1Ov3bU4

Abstract

The metrics measured while developing machine learning models are not enough to evaluate a model's performance at the operational level, especially for cyber security ML models that face ever-changing attack vectors. It is also often hard to tell at first whether the fundamental problem lies in the model's performance or in data issues that undermine the evaluation itself. With this in mind, we developed a visualization system that allows users to quickly identify and diagnose issues with the current model deployment, from model performance to data issues that prevent accurate evaluation of the model. Our application gives our security data science team situational awareness of the system and lets them quickly investigate any problems. While designing the system, we considered the common issues we encounter in production. In this paper, we describe the application, its day-to-day usage, and example cases in which it proved valuable for introspecting our models.
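To make the gap between development metrics and operational metrics concrete, the following is a minimal, hypothetical sketch (not part of AI Total; all data, thresholds, and rates are invented for illustration) showing how the same scores can yield very different precision when some production samples simply lack trusted labels, which is one of the data issues the abstract alludes to.

```python
import random

random.seed(0)

def precision_recall(scores, labels, threshold=0.5):
    """Compute precision/recall for binary predictions at a score threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Curated development set: every sample has a trusted label.
dev_scores = [random.random() for _ in range(10_000)]
dev_labels = [1 if s > 0.6 else 0 for s in dev_scores]  # idealized separation

# Simulated production feed: 40% of malicious samples never receive a label
# (e.g., analysts have not triaged them yet), so they default to "benign".
prod_labels = [0 if (y == 1 and random.random() < 0.4) else y for y in dev_labels]

print("dev  precision/recall:", precision_recall(dev_scores, dev_labels))
print("prod precision/recall:", precision_recall(dev_scores, prod_labels))
```

Run as written, the "production" precision drops sharply even though the model's scores are unchanged, illustrating why it can be ambiguous whether poor measured performance reflects a model problem or a data problem.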