Ivars Namatēvs, Kaspars Sudars, Artūrs Ņikuļins, Anda Slaidiņa, Laura Neimane, Oskars Radziņš. Towards Explainability of the Latent Space by Disentangled Representation Learning. Information Technology and Management Science, 26(1), 7 pp. RTU Press, 2023.

BibTeX citation:
@article{15685_2023,
  author    = {Ivars Namatēvs and Kaspars Sudars and Artūrs Ņikuļins and Anda Slaidiņa and Laura Neimane and Oskars Radziņš},
  title     = {Towards Explainability of the Latent Space by Disentangled Representation Learning},
  journal   = {Information Technology and Management Science},
  volume    = {26},
  number    = {1},
  pages     = {7},
  publisher = {RTU Press},
  year      = {2023}
}

Abstract: Deep neural networks are widely used in computer vision for image classification, segmentation, and generation. They are also often criticised as “black boxes” because their decision-making process is often not interpretable by humans. However, learning explainable representations that explicitly disentangle the underlying mechanisms structuring observational data remains a challenge. To further explore the latent space and achieve generic processing, we propose a pipeline for discovering explainable directions in the latent space of generative models. Because the latent space contains semantically meaningful directions that can be explained, the proposed pipeline fully resolves the latent-space representation. It consists of a Dirichlet encoder, conditional deterministic diffusion, a group-swap module, and a latent traversal module. We believe this study provides insight into advancing research on explaining the disentanglement of neural networks.
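To make two of the named components concrete, the sketch below (PyTorch) shows what a Dirichlet encoder and a latent traversal step could look like: the encoder maps an input to the concentration parameters of a Dirichlet posterior, and the traversal sweeps a latent code along a candidate direction so the decoded outputs reveal which generative factor that direction controls. All class names, layer sizes, and the traversal rule are illustrative assumptions, not the authors' implementation.

Illustrative sketch (hypothetical):

import torch
import torch.nn as nn
from torch.distributions import Dirichlet

class DirichletEncoder(nn.Module):
    """Maps an input to concentration parameters of a Dirichlet posterior
    (a minimal stand-in; the paper's actual encoder may differ)."""
    def __init__(self, in_dim: int = 784, latent_dim: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
            nn.Softplus(),  # concentration parameters must be positive
        )

    def forward(self, x: torch.Tensor) -> Dirichlet:
        alpha = self.net(x) + 1e-3  # keep concentrations away from zero
        return Dirichlet(alpha)

def traverse(z: torch.Tensor, direction: torch.Tensor,
             steps: int = 5, scale: float = 1.0) -> torch.Tensor:
    """Walks a latent code along one candidate direction; each offset code
    can then be decoded to inspect which factor the direction controls."""
    offsets = torch.linspace(-scale, scale, steps)
    return torch.stack([z + t * direction for t in offsets])

# Usage: encode, sample a latent code, then sweep one direction.
enc = DirichletEncoder()
x = torch.rand(1, 784)           # stand-in for a flattened input image
z = enc(x).rsample()             # reparameterised Dirichlet sample
d = torch.zeros(10); d[0] = 1.0  # axis-aligned candidate direction
codes = traverse(z, d)           # (steps, 1, 10) codes to feed a decoder

Note that adding an arbitrary offset to a Dirichlet sample can move it off the probability simplex, so a faithful implementation would re-project or traverse in a transformed space; this sketch is only meant to illustrate the traversal idea.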

URL: https://itms-journals.rtu.lv/article/view/itms-2023-0006