The need to address the scarcity of task-specific annotated data has led to concerted efforts in recent years on specific settings such as zero-shot learning (ZSL) and domain generalization (DG), which separately address the issues of semantic shift and domain shift, respectively. However, real-world applications are often not so constrained and require handling unseen classes in unseen domains, a setting called Zero-shot Domain Generalization, which presents domain shift and semantic shift simultaneously. In this work, we propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains, as well as class-specific semantic text-based representations, into a common latent space. In particular, our method jointly strives for the following objectives: (i) aligning the multimodal cues from visual and text-based semantic concepts; (ii) partitioning the common latent space according to the domain-agnostic class-level semantic concepts; and (iii) learning domain invariance w.r.t. the visual-semantic joint distribution for generalizing to unseen classes in unseen domains. Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods, with significant gains on difficult domains like quickdraw and sketch. © 2021, CC BY.
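The core idea described above, projecting visual features and class-level semantic (text) embeddings into a common latent space and classifying an image by its nearest class anchor, can be sketched as follows. All dimensions, the linear projections, and the softmax alignment loss are illustrative assumptions for this sketch, not the paper's actual architecture or training objective; the domain-invariance objective (iii) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper).
d_img, d_txt, d_lat = 512, 300, 128
n_classes, batch = 5, 8

# Random linear maps standing in for the learned visual and semantic
# encoders that project into the common latent space.
W_img = rng.normal(size=(d_img, d_lat)) / np.sqrt(d_img)
W_txt = rng.normal(size=(d_txt, d_lat)) / np.sqrt(d_txt)

def embed(x, W):
    """Project features into the latent space and L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Fake inputs: image features from some domain, one text embedding per class.
imgs = rng.normal(size=(batch, d_img))
texts = rng.normal(size=(n_classes, d_txt))
labels = rng.integers(0, n_classes, size=batch)

z_img = embed(imgs, W_img)   # (batch, d_lat)
z_txt = embed(texts, W_txt)  # (n_classes, d_lat)

# Visual-semantic alignment (objectives i/ii in spirit): softmax over
# cosine similarities to the class anchors, cross-entropy vs. true class.
logits = z_img @ z_txt.T     # (batch, n_classes)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
align_loss = -np.log(probs[np.arange(batch), labels]).mean()

# Zero-shot inference: an image (even from an unseen domain) is assigned
# the class whose semantic anchor is nearest in the latent space.
pred = logits.argmax(axis=1)
```

Because the class anchors come from text embeddings rather than trained classifier weights, new (unseen) classes can be added at test time simply by embedding their semantic descriptions, which is what makes the zero-shot part of the setting tractable.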
- Computer Vision and Pattern Recognition (cs.CV)
Preprint: arXiv
Archived with thanks to arXiv
Preprint License: CC BY 4.0
Uploaded 25 March 2022