D3Former: Debiased Dual Distilled Transformer for Incremental Learning
arXiv
  • Abdelrahman Mohamed, Mohamed bin Zayed University of Artificial Intelligence
  • Rushali Grandhe, Mohamed bin Zayed University of Artificial Intelligence
  • K.J. Joseph, Indian Institute of Technology, Hyderabad, India
  • Salman Khan, Mohamed bin Zayed University of Artificial Intelligence & Australian National University, Australia
  • Fahad Shahbaz Khan, Mohamed bin Zayed University of Artificial Intelligence & Linköping University, Sweden
Document Type
Article
Abstract

Class incremental learning (CIL) involves learning a classification model where groups of new classes are encountered in every learning phase. The goal is to learn a unified model that performs well on all classes observed so far. Given the recent popularity of Vision Transformers (ViTs) in conventional classification settings, an interesting question is to study their continual learning behaviour. In this work, we develop a Debiased Dual Distilled Transformer for CIL, dubbed D3Former. The proposed model leverages a hybrid nested ViT design to ensure data efficiency and scalability to small as well as large datasets. In contrast to a recent ViT-based CIL approach, D3Former does not dynamically expand its architecture when new tasks are learned and remains suitable for a large number of incremental tasks. The improved CIL behaviour of D3Former owes to two fundamental changes to the ViT design. First, we treat incremental learning as a long-tail classification problem in which the abundant samples from new classes vastly outnumber the limited exemplars retained for old classes. To avoid bias against the minority old classes, we propose to dynamically adjust logits to emphasize retaining the representations relevant to old tasks. Second, we propose to preserve the configuration of spatial attention maps as learning progresses across tasks. This helps reduce catastrophic forgetting by constraining the model to retain attention on the most discriminative regions. D3Former obtains favorable results on incremental versions of the CIFAR-100, MNIST, SVHN, and ImageNet datasets. Code is available at https://tinyurl.com/d3former. Copyright © 2022, The Authors. All rights reserved.
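
The logit-adjustment idea described in the abstract can be illustrated with a short sketch. The PyTorch example below is a minimal, hypothetical illustration of prior-based logit adjustment in a class-imbalanced incremental phase; the function name, the temperature tau, and the per-class counts are assumptions for illustration, not the exact D3Former formulation.

import torch
import torch.nn.functional as F

def adjusted_cross_entropy(logits, targets, class_counts, tau=1.0):
    # Hypothetical sketch: shift logits by the log class prior so that
    # majority (new-task) classes are penalised relative to the rare
    # old-task exemplar classes, countering the long-tail bias.
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)  # broadcasts over the batch
    return F.cross_entropy(adjusted, targets)

# Example phase: 100 old classes with 20 exemplars each, 10 new classes
# with 500 images each (illustrative numbers).
counts = torch.cat([torch.full((100,), 20), torch.full((10,), 500)])
logits = torch.randn(8, 110)
targets = torch.randint(0, 110, (8,))
loss = adjusted_cross_entropy(logits, targets, counts)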
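
The second component, preserving spatial attention across tasks, can likewise be sketched as a distillation loss between the attention maps of the current model and a frozen copy from the previous phase. This is a minimal sketch under assumed shapes and an assumed MSE-on-normalised-maps objective; D3Former's actual pooling and weighting may differ.

import torch
import torch.nn.functional as F

def attention_distillation_loss(attn_new, attn_old):
    # attn_new, attn_old: (batch, heads, tokens, tokens) attention
    # probabilities from matching blocks of the current and the frozen
    # previous-phase transformer (shapes are illustrative).
    a_new = F.normalize(attn_new.flatten(1), dim=1)
    a_old = F.normalize(attn_old.flatten(1), dim=1)
    return F.mse_loss(a_new, a_old.detach())  # never backprop into the old model

# Summed over the distilled blocks and added to the classification loss.
attn_new = torch.rand(8, 12, 197, 197)
attn_old = torch.rand(8, 12, 197, 197)
loss_at = attention_distillation_loss(attn_new, attn_old)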

DOI
10.48550/arXiv.2208.00777
Publication Date
7-25-2022
Keywords
  • Learning systems
  • Machine learning
Comments

IR Deposit conditions: not described

Citation Information
A. Mohamed, R. Grandhe, K. J. Joseph, S. Khan and F. S. Khan, "D3Former: Debiased Dual Distilled Transformer for Incremental Learning," arXiv preprint arXiv:2208.00777, 2022, doi: 10.48550/arXiv.2208.00777