A novel unsupervised camera-aware domain adaptation framework for person re-identification
Faculty of Engineering and Information Sciences - Papers: Part B
  • Lei Qi
  • Lei Wang, University of Wollongong
  • Jing Huo
  • Luping Zhou, University of Wollongong
  • Yinghuan Shi
  • Yang Gao
Publication Date
2019
Publication Details

Qi, L., Wang, L., Huo, J., Zhou, L., Shi, Y. & Gao, Y. (2019). A novel unsupervised camera-aware domain adaptation framework for person re-identification. Proceedings of the IEEE International Conference on Computer Vision (pp. 8079-8088).


© 2019 IEEE. Unsupervised cross-domain person re-identification (Re-ID) faces two key issues. One is the data distribution discrepancy between the source and target domains, and the other is the lack of discriminative information in the target domain. From the perspective of representation learning, this paper proposes a novel end-to-end deep domain adaptation framework to address both. For the first issue, we highlight the presence of camera-level sub-domains as a unique characteristic of person Re-ID, and develop a 'camera-aware' domain adaptation method via adversarial learning. With this method, the learned representation reduces the distribution discrepancy not only between the source and target domains but also across all cameras. For the second issue, we exploit the temporal continuity within each camera of the target domain to create discriminative information. This is implemented by dynamically generating online triplets within each batch, in order to take maximal advantage of the steadily improving representation during training. Together, these two methods give rise to a new unsupervised domain adaptation framework for person Re-ID. Extensive experiments and ablation studies on benchmark datasets demonstrate its superiority and interesting properties.
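The online triplet generation described above can be sketched with a standard batch-hard mining scheme. The snippet below is a minimal illustration, not the authors' exact implementation: it assumes each batch carries pseudo labels (in the paper's setting, derived from temporal continuity within a camera), and the function name, Euclidean distance, and margin value are illustrative choices.

```python
import math

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Illustrative batch-hard triplet loss over a mini-batch.

    For each anchor, take the farthest same-label sample (hardest positive)
    and the closest different-label sample (hardest negative), then apply a
    hinge loss with the given margin. `margin=0.3` is a hypothetical value.
    """
    def dist(a, b):
        # Euclidean distance between two embedding vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    losses = []
    for i, (anchor, la) in enumerate(zip(embeddings, labels)):
        # Distances to all positives (same pseudo label, excluding the anchor).
        pos = [dist(anchor, e) for j, (e, lb) in enumerate(zip(embeddings, labels))
               if lb == la and j != i]
        # Distances to all negatives (different pseudo label).
        neg = [dist(anchor, e) for e, lb in zip(embeddings, labels) if lb != la]
        if pos and neg:  # anchor must admit at least one valid triplet
            losses.append(max(0.0, max(pos) - min(neg) + margin))
    return sum(losses) / len(losses) if losses else 0.0
```

Because triplets are re-mined from the current embeddings in every batch, the selected hard positives and negatives automatically track the representation as it improves over training, which is the property the abstract highlights.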
