MDVA-GAN: multi-domain visual attribution generative adversarial networks
Neural Computing and Applications
  • Muhammad Nawaz, COMSATS University Islamabad
  • Feras Al-Obeidat, Zayed University
  • Abdallah Tubaishat, Zayed University
  • Tehseen Zia, COMSATS University Islamabad
  • Fahad Maqbool, University of Sargodha
  • Alvaro Rocha, Universidade de Lisboa
ORCID Identifiers
0000-0001-8176-3373
Document Type
Article
Publication Date
1-1-2022
Abstract

Some pixels of an input image carry rich information and offer insights into a particular category during classification decisions. Visualizing these pixels is a well-studied problem in computer vision, called visual attribution (VA), which helps radiologists recognize abnormalities and identify a particular disease in a medical image. In recent years, several classification-based techniques for domain-specific attribute visualization have been proposed, but they highlight only a small subset of the most discriminative features, so the VA maps they generate are inadequate for visualizing all affected regions of an input image. To address this issue, and building on recent advances in generative models, generative adversarial network (GAN)-based VA techniques have been proposed that leverage domain adaptation to learn an abnormal-to-normal medical image translation and thereby visualize all affected regions. Because these approaches rely on a two-domain translation model, they require training as many models as there are diseases in a medical dataset, which is tedious and compute-intensive. In this work, we introduce a unified multi-domain VA model that can generate VA maps for more than one disease at a time. The proposed unified model takes an image from a particular domain together with its domain label as input, generates a VA map, and visualizes all regions affected by that disease. Experiments on the CheXpert dataset, a publicly available multi-disease chest radiograph dataset, and on the TBX11K dataset show that the single proposed model produces results on par with individually trained two-domain models.
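Although this page does not describe the paper's exact architecture, the core idea in the abstract (one generator conditioned on a domain/disease label that translates an abnormal image toward a normal one, with the VA map taken as the input-output difference, in the spirit of StarGAN-style multi-domain translation) can be sketched as follows. This is a minimal, hypothetical PyTorch illustration: the Generator layers, channel sizes, and all names are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy label-conditioned generator (hypothetical): the one-hot
    domain label is broadcast over the spatial grid and concatenated
    with the image channels, StarGAN-style."""
    def __init__(self, img_channels=1, num_domains=3, feats=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + num_domains, feats, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feats, img_channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x, label):
        # label: (B, num_domains) one-hot -> (B, num_domains, H, W)
        b, _, h, w = x.shape
        lbl = label.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([x, lbl], dim=1))

# Inference step that yields a VA map for one disease domain:
# translate abnormal -> "normal", then take the absolute difference.
G = Generator()
x = torch.randn(1, 1, 256, 256)      # toy abnormal chest X-ray
c = torch.tensor([[1.0, 0.0, 0.0]])  # one-hot disease (domain) label
normal = G(x, c)                     # generated pseudo-healthy image
va_map = (x - normal).abs()          # affected regions stand out
```

In a full GAN setup this generator would be trained adversarially against a discriminator (typically with cycle/identity losses) so that G(x, c) resembles a healthy image while preserving patient-specific anatomy; only the inference step that produces the VA map is shown here.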

Publisher
Springer Science and Business Media LLC
Keywords
  • Abnormal-to-normal translation,
  • Change map,
  • Chest X-ray,
  • Generative adversarial network,
  • Tuberculosis,
  • Visual attribution
Scopus ID
85123921344
Indexed in Scopus
Yes
Open Access
No
DOI
https://doi.org/10.1007/s00521-022-06969-0
Citation Information
Muhammad Nawaz, Feras Al-Obeidat, Abdallah Tubaishat, Tehseen Zia, et al. "MDVA-GAN: multi-domain visual attribution generative adversarial networks" Neural Computing and Applications (2022)
Available at: http://works.bepress.com/feras-al-obeidat/56/