Multimodal Multi-Head Convolutional Attention with Various Kernel Sizes for Medical Image Super-Resolution
  • Mariana-Iuliana Georgescu, University of Bucharest, Romania
  • Radu Tudor Ionescu, University of Bucharest, Romania
  • Andreea-Iuliana Miron, Colţea Hospital Romania, Romania
  • Olivian Savencu, Colţea Hospital Romania, Romania
  • Nicolae Verga, Colţea Hospital Romania, Romania & University Politehnica of Bucharest, Romania & Mohamed bin Zayed University of Artificial Intelligence
  • Nicolae-Cătălin Ristea, University Politehnica of Bucharest, Romania
  • Fahad Shahbaz Khan, Linköping University, Sweden & Mohamed bin Zayed University of Artificial Intelligence
Abstract

Super-resolving medical images can help physicians provide more accurate diagnostics. In many situations, computed tomography (CT) or magnetic resonance imaging (MRI) techniques output several scans (modes) during a single investigation, which can be used jointly (in a multimodal fashion) to further boost the quality of super-resolution results. To this end, we propose a novel multimodal multi-head convolutional attention module to super-resolve CT and MRI scans. Our attention module uses the convolution operation to perform joint spatial-channel attention on multiple concatenated input tensors, where the kernel (receptive field) size controls the reduction rate of the spatial attention and the number of convolutional filters controls the reduction rate of the channel attention. We introduce multiple attention heads, each head having a distinct receptive field size corresponding to a particular reduction rate for the spatial attention. We integrate our multimodal multi-head convolutional attention (MMHCA) into two deep neural architectures for super-resolution and conduct experiments on three data sets. Our empirical results show the superiority of our attention module over state-of-the-art attention mechanisms used in super-resolution. Moreover, we conduct an ablation study to assess the impact of the components involved in our attention module, e.g., the number of inputs or the number of heads. © 2022, CC BY-NC-SA.
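To make the mechanism described above concrete, the following is a minimal NumPy sketch of a multimodal multi-head convolutional attention block: modality tensors are concatenated along the channel axis, each head applies a channel-reducing convolution with its own kernel size followed by a channel-restoring convolution, and the averaged head outputs are passed through a sigmoid to gate the input. The kernel sizes, the reduction factor, the ReLU bottleneck, and all names here are illustrative assumptions for exposition, not the authors' exact design.

```python
import numpy as np


def conv2d(x, w, b):
    """Naive "same"-padded 2D cross-correlation.
    x: (C_in, H, W), w: (C_out, C_in, k, k), b: (C_out,)."""
    c_out, _, k, _ = w.shape
    pad = k // 2
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[o]) + b[o]
    return out


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def mmhca(inputs, kernel_sizes=(3, 5, 7), reduction=2, seed=0):
    """Hypothetical multimodal multi-head convolutional attention sketch.
    inputs: list of per-modality feature tensors, each (C, H, W).
    Each head uses a distinct kernel size; random weights stand in for
    learned parameters, since this is only a structural illustration."""
    rng = np.random.default_rng(seed)
    # Concatenate modalities along the channel axis (multimodal fusion).
    x = np.concatenate(inputs, axis=0)
    c = x.shape[0]
    c_red = max(1, c // reduction)  # channel-attention reduction
    heads = []
    for k in kernel_sizes:
        # Channel-reducing conv (kernel size sets the spatial receptive field).
        w1 = rng.standard_normal((c_red, c, k, k)) * 0.1
        # Channel-restoring conv back to the original channel count.
        w2 = rng.standard_normal((c, c_red, k, k)) * 0.1
        h = conv2d(np.maximum(conv2d(x, w1, np.zeros(c_red)), 0.0),
                   w2, np.zeros(c))
        heads.append(h)
    # Average the heads, squash to (0, 1), and gate the concatenated input.
    attn = sigmoid(np.mean(heads, axis=0))
    return x * attn
```

With two 2-channel 8x8 modality tensors, the block returns a 4-channel 8x8 tensor whose values are the input modulated element-wise by the attention map.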

Keywords
  • Computerized tomography,
  • Diagnosis,
  • Magnetic resonance imaging,
  • Medical imaging,
  • Optical resolving power,
  • Attention mechanisms,
  • Image super resolutions,
  • Kernel size,
  • Multi-head attention,
  • Multi-modal,
  • Neural-networks,
  • Receptive field sizes,
  • Reduction rate,
  • Spatial attention,
  • Superresolution,
  • Convolution,
  • Computer Vision and Pattern Recognition (cs.CV),
  • Image and Video Processing (eess.IV),
  • Machine Learning (cs.LG)

Preprint: arXiv

Archived with thanks to arXiv

Preprint License: CC BY-NC-SA 4.0

Uploaded 19 July 2022

Citation Information
M.-I. Georgescu et al., "Multimodal Multi-Head Convolutional Attention with Various Kernel Sizes for Medical Image Super-Resolution," Apr. 2022, arXiv:2204.04218.