Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning
arXiv
  • Zhiqiang Shen, Carnegie Mellon University & Mohamed bin Zayed University of Artificial Intelligence & University of California, Berkeley
  • Zechun Liu, Carnegie Mellon University
  • Zhuang Liu, University of California, Berkeley
  • Marios Savvides, Carnegie Mellon University
  • Trevor Darrell, University of California, Berkeley
  • Eric Xing, Carnegie Mellon University & Mohamed bin Zayed University of Artificial Intelligence
Document Type
Article
Abstract

Recently advanced unsupervised learning approaches use a siamese-like framework that compares two "views" of the same image to learn representations. Making the two views distinct is essential for guaranteeing that unsupervised methods learn meaningful information. However, such frameworks are prone to overfitting when the augmentations used to generate the two views are not strong enough, leading to overconfidence on the training data. This drawback hinders the model from learning subtle variance and fine-grained information. To address this, we introduce the concept of distance in label space into unsupervised learning, making the model aware of the soft degree of similarity between positive and negative pairs by mixing the input data space, so that the input and loss spaces work collaboratively. Despite its conceptual simplicity, we show empirically that with our solution, Unsupervised image mixtures (Un-Mix), we can learn subtler, more robust, and more generalized representations from the transformed inputs and the corresponding new label space. Extensive experiments are conducted on CIFAR-10, CIFAR-100, STL-10, Tiny ImageNet, and standard ImageNet with popular unsupervised methods such as SimCLR, BYOL, MoCo V1&V2, and SwAV. Our proposed image mixture and label assignment strategy yields consistent improvements of 1∼3% using exactly the same hyperparameters and training procedures as the base methods. Code is publicly available at https://github.com/szq0214/Un-Mix. Copyright © 2020, The Authors. All rights reserved.
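The abstract describes two coupled ideas: mixing images within a batch, and re-weighting the contrastive loss by the mixing coefficient so the model learns a soft degree of similarity. A minimal NumPy sketch of that scheme is shown below; the function names `unmix_batch` and `unmix_loss`, the reversed-batch pairing, and the Beta(α, α) sampling are assumptions drawn from common mixup practice, not the authors' released implementation.

```python
import numpy as np

def unmix_batch(batch, alpha=1.0, seed=None):
    """Mix each image with its counterpart in the reversed batch.

    This is an in-batch mixup sketch: the k-th sample is blended with
    the (N-1-k)-th sample using a single coefficient `lam` drawn from
    a Beta(alpha, alpha) distribution.

    Returns the mixed batch and `lam`, which later re-weights the loss
    between the two possible pairings.
    """
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)                 # degree of mixture in [0, 1]
    mixed = lam * batch + (1.0 - lam) * batch[::-1]
    return mixed, lam

def unmix_loss(loss_original, loss_reversed, lam):
    """Soft label assignment: weight the contrastive loss against the
    original pairing by `lam` and against the reversed pairing by 1 - lam."""
    return lam * loss_original + (1.0 - lam) * loss_reversed
```

For example, with `lam = 0.3` a mixed image that is 30% sample k and 70% sample N-1-k contributes 0.3 of its loss toward matching sample k's other view and 0.7 toward matching the reversed sample's view, so the label space mirrors the input-space mixture.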

Publication Date
1-1-2020
Keywords
  • Unsupervised learning; Image mixture; Label space; Degree of similarity; Fine-grained; Overfitting; Training data; Visual representations; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Machine Learning (cs.LG)
Comments

Preprint: arXiv

Citation Information
Z. Shen, Z. Liu, Z. Liu, M. Savvides, T. Darrell, and E. Xing, "Un-mix: rethinking image mixtures for unsupervised visual representation learning," 2020, arXiv:2003.05438