Kaspars Sudars, Ivars Namatēvs, and Kaspars Ozols. 2022. Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach. Journal of Imaging, 8(2), Article 30. MDPI. https://doi.org/10.3390/jimaging8020030

Bibtex citation:
@article{11621_2022,
author = {Kaspars Sudars and Ivars Namatēvs and Kaspars Ozols},
title = {Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach},
journal = {Journal of Imaging},
volume = {8},
number = {2},
pages = {30},
publisher = {MDPI},
url = {https://doi.org/10.3390/jimaging8020030},
year = {2022}
}

Abstract: Model understanding is critical in many domains, particularly those involved in high-stakes decisions, e.g., medicine, criminal justice, and autonomous driving. Explainable AI (XAI) methods are essential for working with black-box models such as convolutional neural networks. This paper evaluates the explainability of the Deep Neural Network (DNN) traffic sign classifier from the Programmable Systems for Intelligence in Automobiles (PRYSTINE) project. The resulting explanations were then used to compress the vague kernels of the PRYSTINE CNN classifier, and the classifier's precision was evaluated under different pruning scenarios. The proposed performance methodology was realised by creating original traffic sign and traffic light classification and explanation code. First, the status of the network's kernels was evaluated for explainability: a post-hoc, local, meaningful perturbation-based forward explainable method was integrated into the model to assess the status of each kernel, making it possible to distinguish high- and low-impact kernels in the CNN. Second, the vague kernels of the last layer before the fully connected layer were excluded by withdrawing them from the network. Third, the network's precision was evaluated at different kernel compression levels. It is shown that with this XAI-based approach to kernel compression, pruning 5% of the kernels leads to a 2% loss in traffic sign and traffic light classification precision. The proposed methodology is crucial where execution time and processing capacity constraints prevail.
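The three steps the abstract describes (perturb each kernel to measure its impact, rank kernels, prune the lowest-impact fraction) can be sketched in a few lines. This is an illustrative toy, not the paper's code: `score`, `kernel_outputs`, and `weights` are hypothetical stand-ins for a CNN's last-convolutional-layer feature maps and its classifier head, and the perturbation used here is simply zeroing a kernel's output.

```python
# Illustrative sketch (assumed names, not the PRYSTINE implementation):
# perturbation-based kernel impact scoring followed by pruning of the
# lowest-impact fraction of kernels.

def score(kernel_outputs, weights, mask):
    """Toy classifier score with masked-out (perturbed-to-zero) kernels."""
    return sum(o * w for o, w, m in zip(kernel_outputs, weights, mask) if m)

def kernel_impacts(kernel_outputs, weights):
    """Impact of each kernel = score drop when that kernel alone is zeroed."""
    n = len(kernel_outputs)
    baseline = score(kernel_outputs, weights, [True] * n)
    impacts = []
    for k in range(n):
        mask = [i != k for i in range(n)]          # perturb kernel k only
        impacts.append(baseline - score(kernel_outputs, weights, mask))
    return impacts

def prune_mask(impacts, fraction):
    """Keep-mask that drops the lowest-impact `fraction` of kernels."""
    n = len(impacts)
    n_drop = int(n * fraction)
    drop = set(sorted(range(n), key=lambda k: abs(impacts[k]))[:n_drop])
    return [k not in drop for k in range(n)]

# Toy usage: four kernels, prune the weakest 25% (one kernel).
outs = [0.9, 0.1, 0.5, 0.02]    # hypothetical per-kernel activations
ws = [1.0, 0.2, 0.8, 0.1]       # hypothetical classifier weights
impacts = kernel_impacts(outs, ws)
keep = prune_mask(impacts, 0.25)
```

In the real setting the score drop would be measured on validation data (e.g., classification precision), and pruning 5% of kernels this way is reported to cost about 2% precision.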

URL: https://doi.org/10.3390/jimaging8020030

Full text: jimaging-08-00030-v2
