Maksims Ivanovs; Beate Banga; Valters Abolins; Krisjanis Nesenbergs. Methods for Explaining CNN-Based BCI: A Survey of Recent Application. 2022 IEEE 16th International Scientific Conference on Informatics (Informatics), IEEE, 2022.

BibTeX citation:
@inproceedings{13773_2022,
author = {Maksims Ivanovs and Beate Banga and Valters Abolins and Krisjanis Nesenbergs},
title = {Methods for Explaining CNN-Based BCI: A Survey of Recent Application},
booktitle = {2022 IEEE 16th International Scientific Conference on Informatics (Informatics)},
publisher = {IEEE},
year = {2022}
}

Abstract: Convolutional neural networks (CNN) have achieved state-of-the-art results in many Brain-Computer Interface (BCI) tasks, yet their applications in real-world scenarios and attempts at further optimizing them may be hindered by their non-transparent, black box-like nature. While there has been extensive research at the intersection of explainable artificial intelligence (AI) and computer vision on explaining CNNs for image classification, it is an open question how commonly methods for explaining CNNs are used when CNNs are part of a BCI setup. In the present study, we survey BCI studies from 2020 to 2022 that deploy CNNs to find out how many of them use explainable AI methods to better understand CNNs, and which methods in particular are used. Our findings are that explainable AI methods were used in 13.7 percent of the surveyed publications, and that the majority of studies using such methods employed the t-distributed stochastic neighbour embedding (t-SNE) method.
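As an illustration of the kind of analysis the abstract refers to, t-SNE is typically applied to intermediate CNN feature vectors to visualise class structure in two dimensions. The sketch below is not from the paper; it uses scikit-learn's `TSNE` on synthetic stand-in features, with sizes and class structure chosen purely for demonstration.

```python
# Illustrative sketch (assumed setup, not from the surveyed paper):
# projecting per-trial CNN feature vectors to 2-D with t-SNE.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in for CNN penultimate-layer activations:
# 100 trials x 64 features, two synthetic classes with separated means.
features = np.concatenate([rng.normal(0.0, 1.0, (50, 64)),
                           rng.normal(3.0, 1.0, (50, 64))])

# Embed in 2-D for inspection; perplexity must be smaller than the
# number of samples (30 < 100 here).
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(features)
print(embedding.shape)  # (100, 2)
```

Plotting the two embedding columns coloured by class label (e.g. with matplotlib) then shows whether the network's features separate the classes.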

URL: https://doi.org/10.1109/Informatics57926.2022.10083473
