Article
Synthesizing the Unseen for Zero-Shot Object Detection
Lecture Notes in Computer Science
  • Nasir Hayat, Inception Institute of Artificial Intelligence
  • Munawar Hayat, Inception Institute of Artificial Intelligence & Mohamed bin Zayed University of Artificial Intelligence
  • Shafin Rahman, North South University, Dhaka, Bangladesh
  • Salman Khan, Inception Institute of Artificial Intelligence & Mohamed bin Zayed University of Artificial Intelligence
  • Syed Waqas Zamir, Inception Institute of Artificial Intelligence
  • Fahad Shahbaz Khan, Inception Institute of Artificial Intelligence & Mohamed bin Zayed University of Artificial Intelligence
Document Type
Conference Proceeding
Abstract

Existing zero-shot detection approaches project visual features into the semantic domain for seen objects, hoping to map unseen objects to their corresponding semantics at inference time. However, since unseen objects are never observed during training, the detection model is skewed towards seen content and labels unseen objects as background or as a seen class. In this work, we propose to synthesize visual features for unseen classes, so that the model learns both seen and unseen objects in the visual domain. The major challenge then becomes: how can unseen objects be accurately synthesized using only their class semantics? Towards this goal, we propose a novel generative model that uses class semantics not only to generate features but also to discriminatively separate them. Further, using a unified model, we ensure that the synthesized features are diverse enough to capture intra-class differences and variable localization precision in the detected bounding boxes. We test our approach on three object detection benchmarks, PASCAL VOC, MSCOCO, and ILSVRC detection, under both conventional and generalized settings, showing impressive gains over state-of-the-art methods. Our code is available at https://github.com/nasir6/zero_shot_detection.
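
To make the abstract concrete, the sketch below illustrates the core idea in code: a generator conditioned on class-semantic vectors synthesizes visual features for unseen classes, and a semantics-conditioned classifier keeps the synthesized features discriminatively separated per class. This is a minimal PyTorch sketch under assumed dimensions and illustrative names (FeatureGenerator, SemanticClassifier, synthesize_unseen), not the authors' released implementation; see the linked repository for the actual code.

    # Minimal sketch (illustrative only) of semantics-conditioned feature synthesis.
    import torch
    import torch.nn as nn

    SEM_DIM, NOISE_DIM, FEAT_DIM = 300, 300, 2048  # e.g. word-vector semantics, RoI features

    class FeatureGenerator(nn.Module):
        """Maps (noise, class semantics) -> a synthesized visual feature."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(NOISE_DIM + SEM_DIM, 4096), nn.LeakyReLU(0.2),
                nn.Linear(4096, FEAT_DIM), nn.ReLU())

        def forward(self, noise, semantics):
            return self.net(torch.cat([noise, semantics], dim=1))

    class SemanticClassifier(nn.Module):
        """Scores features against all class-semantic prototypes, so synthesized
        features are pushed toward their own class and away from others."""
        def __init__(self, class_semantics):            # (num_classes, SEM_DIM)
            super().__init__()
            self.register_buffer("prototypes", class_semantics)
            self.project = nn.Linear(FEAT_DIM, SEM_DIM)

        def forward(self, features):
            return self.project(features) @ self.prototypes.t()  # logits over classes

    def synthesize_unseen(generator, unseen_semantics, per_class=100):
        """Sample diverse features for each unseen class from noise + its semantics."""
        feats, labels = [], []
        for idx, sem in enumerate(unseen_semantics):
            noise = torch.randn(per_class, NOISE_DIM)
            sem_batch = sem.unsqueeze(0).expand(per_class, -1)
            feats.append(generator(noise, sem_batch))
            labels.append(torch.full((per_class,), idx, dtype=torch.long))
        return torch.cat(feats), torch.cat(labels)

In this sketch, the synthesized unseen-class features would be mixed with real seen-class features to train the final detector head, which is how the paper's goal of learning both seen and unseen objects in the visual domain is realized.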

DOI
10.1007/978-3-030-69535-4_10
Publication Date
2-25-2021
Keywords
  • Computer vision
  • Feature extraction
  • Object recognition
  • Semantics
  • Bounding box
  • Detection models
  • Generative model
  • Semantic domains
  • Zero-shot detection
  • State-of-the-art methods
  • Unified model
  • Visual feature
  • Object detection
Comments

IR Deposit conditions:
  • OA version (pathway a): Accepted version
  • 12-month embargo
  • Must link to published article
  • Set statement to accompany deposit

Citation Information
N. Hayat, M. Hayat, S. Rahman, S. Khan, S. W. Zamir, and F. S. Khan, "Synthesizing the Unseen for Zero-Shot Object Detection," in Computer Vision – ACCV 2020, Lecture Notes in Computer Science, vol. 12624, Feb. 2021, pp. 155-170, https://doi.org/10.1007/978-3-030-69535-4_10