Where Can We Help? A Visual Analytics Approach to Diagnosing and Improving Semantic Segmentation of Movable Objects

Wenbin He, Lincan Zou, Shekar Arvind Kumar, Liang Gou, Liu Ren

View presentation: 2021-10-28T15:30:00Z
Exemplar figure, described by caption below
We propose the first visual analytics framework for diagnosing and improving deep semantic segmentation models over critical objects in autonomous driving. Our approach focuses on analyzing models' performance with respect to objects' spatial and contextual information, such as position, size, and interaction with the surrounding context. Moreover, our approach can identify models' potential vulnerabilities regarding objects' spatial information and derive actionable insights to improve models' accuracy and spatial robustness.
Fast forward

Direct link to video on YouTube: https://youtu.be/8vEHnlzLMes


Semantic segmentation is a critical component in autonomous driving and must be thoroughly evaluated due to safety concerns. Deep neural network (DNN) based semantic segmentation models are widely used in autonomous driving, but their black-box-like nature makes them challenging to evaluate, and it is even more difficult to assess model performance for crucial objects, such as lost cargo and pedestrians. In this work, we propose VASS, a Visual Analytics approach to diagnosing and improving the accuracy and robustness of Semantic Segmentation models, especially for critical objects moving through various driving scenes. The key component of our approach is context-aware spatial representation learning, which extracts important spatial information about objects, such as position, size, and aspect ratio, with respect to a given scene context. We first use this spatial representation to create visual summarizations for analyzing models’ performance. We then use it to guide the generation of adversarial examples to evaluate models’ spatial robustness and obtain actionable insights. We demonstrate the effectiveness of VASS via two case studies of lost cargo detection and pedestrian detection in autonomous driving. For both cases, we present quantitative evaluations of the performance improvements achieved with the actionable insights obtained from VASS.
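To make the spatial properties mentioned above concrete, the sketch below computes position, size, and aspect ratio of an object directly from a binary segmentation mask. This is only an illustrative approximation, not the paper's method: VASS learns a context-aware representation of these properties jointly with the scene, whereas here they are derived purely from mask geometry. The function name `spatial_features` and the feature definitions (normalized center, area fraction, bounding-box width/height ratio) are our own assumptions for illustration.

```python
import numpy as np

def spatial_features(mask):
    """Compute simple spatial features of an object from a binary mask.

    Illustrative sketch only; VASS instead learns a context-aware
    spatial representation rather than computing raw geometry.
    """
    ys, xs = np.nonzero(mask)          # pixel coordinates of the object
    if ys.size == 0:
        return None                    # no object present in the mask
    h, w = mask.shape
    y0, y1 = ys.min(), ys.max()        # vertical extent of bounding box
    x0, x1 = xs.min(), xs.max()        # horizontal extent of bounding box
    return {
        # normalized (x, y) center of the bounding box within the image
        "position": ((x0 + x1) / 2 / w, (y0 + y1) / 2 / h),
        # fraction of image area covered by the object's pixels
        "size": ys.size / (h * w),
        # bounding-box width divided by height
        "aspect_ratio": (x1 - x0 + 1) / (y1 - y0 + 1),
    }

# toy example: a 2x4-pixel object inside an 8x8 image
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 2:6] = True
feats = spatial_features(mask)
```

Perturbing these raw quantities (e.g. shifting the object's position or rescaling its bounding box) is the kind of spatial variation the paper's adversarial-example generation explores in a context-aware way.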