Article
FocusNet++: Attentive aggregated transformations for efficient and accurate medical image segmentation
Proceedings - International Symposium on Biomedical Imaging
  • Chaitanya Kaul, University of Glasgow
  • Nick Pears, University of York
  • Hang Dai, Mohamed Bin Zayed University of Artificial Intelligence
  • Roderick Murray-Smith, University of Glasgow
  • Suresh Manandhar, NAAMII
Document Type
Conference Proceeding
Abstract

We propose a new residual block for convolutional neural networks and demonstrate its state-of-the-art performance in medical image segmentation. We combine attention mechanisms with group convolutions to create our group attention mechanism, which forms the fundamental building block of our network, FocusNet++. We employ a hybrid loss based on balanced cross entropy, the Tversky loss and the adaptive logarithmic loss to enhance performance and accelerate convergence. Our results show that FocusNet++ achieves state-of-the-art results across various benchmark metrics on the ISIC 2018 melanoma segmentation and the cell nuclei segmentation datasets, with fewer parameters and FLOPs.

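The abstract names two ingredients: a residual block combining grouped convolutions with attention, and a hybrid segmentation loss mixing balanced cross entropy with the Tversky loss (and an adaptive logarithmic term). The sketch below is a minimal illustration of those ideas, not the authors' released code: the SE-style channel attention, layer sizes, group count, and loss weights are all assumptions made for the example, and the adaptive logarithmic loss term is omitted.

```python
# Minimal sketch (assumed PyTorch implementation, not the paper's code) of:
#  (1) a residual block with grouped convolutions plus channel attention,
#  (2) a hybrid loss of balanced cross-entropy + Tversky loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupAttentionBlock(nn.Module):
    """Residual block: grouped 3x3 convs gated by channel attention (SE-style, assumed)."""

    def __init__(self, channels: int, groups: int = 8, reduction: int = 4):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, groups=groups, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, groups=groups, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        # Channel attention: global pool -> bottleneck MLP -> sigmoid gate.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out * self.attn(out)   # re-weight channels by learned attention
        return F.relu(out + x)       # residual connection


def hybrid_loss(logits: torch.Tensor, target: torch.Tensor,
                alpha: float = 0.3, beta: float = 0.7,
                weights=(1.0, 1.0)) -> torch.Tensor:
    """Balanced cross-entropy + Tversky loss (adaptive log loss term omitted)."""
    prob = torch.sigmoid(logits)
    # Balanced cross-entropy: weight positives by the inverse foreground fraction.
    pos_frac = target.float().mean().clamp(1e-6, 1 - 1e-6)
    bce = F.binary_cross_entropy_with_logits(
        logits, target.float(), pos_weight=(1 - pos_frac) / pos_frac)
    # Tversky index: penalise false positives and false negatives asymmetrically.
    tp = (prob * target).sum()
    fp = (prob * (1 - target)).sum()
    fn = ((1 - prob) * target).sum()
    tversky = tp / (tp + alpha * fp + beta * fn + 1e-6)
    return weights[0] * bce + weights[1] * (1 - tversky)
```
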
DOI
10.1109/ISBI48211.2021.9433918
Publication Date
4-13-2021
Keywords
  • Group Attention
  • Medical Image Segmentation
  • Residual Learning
Citation Information
C. Kaul, N. Pears, H. Dai, R. Murray-Smith and S. Manandhar, "FocusNet++: Attentive Aggregated Transformations for Efficient and Accurate Medical Image Segmentation," in 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), 2021, pp. 1042-1046, doi: 10.1109/ISBI48211.2021.9433918.