Encoding Invariances in Deep Generative Models
arXiv
  • Viraj Shah, Iowa State University
  • Ameya Joshi, Iowa State University
  • Sambuddha Ghosal, Iowa State University
  • Balaji Pokuri, Iowa State University
  • Soumik Sarkar, Iowa State University
  • Baskar Ganapathysubramanian, Iowa State University
  • Chinmay Hegde, Iowa State University
Document Type
Article
Publication Version
Submitted Manuscript
Publication Date
6-4-2019
Abstract

Reliable training of generative adversarial networks (GANs) typically requires massive datasets in order to model complicated distributions. However, in several applications, training samples obey invariances that are a priori known; for example, in complex physics simulations, the training data obey universal laws encoded as well-defined mathematical equations. In this paper, we propose a new generative modeling approach, InvNet, that can efficiently model data spaces with known invariances. We devise an adversarial training algorithm to encode these invariances into the data distribution. We validate our framework in three experimental settings: generating images with fixed motifs; solving nonlinear partial differential equations (PDEs); and reconstructing two-phase microstructures with desired statistical properties. We complement our experiments with several theoretical results.
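The idea of encoding a known invariance into adversarial training can be sketched as a generator objective with an added penalty term. This is a minimal illustration, not the paper's actual formulation: the names `residual_fn` and `lam`, the toy zero-sum invariance, and the specific penalty (mean squared residual) are all assumptions for the sketch.

```python
def invariance_penalty(samples, residual_fn):
    # Mean squared violation of a known invariance: residual_fn(x) is
    # zero exactly when sample x satisfies the invariance (e.g. a PDE
    # residual evaluated on a generated solution field).
    return sum(residual_fn(x) ** 2 for x in samples) / len(samples)

def generator_loss(adv_loss, samples, residual_fn, lam=1.0):
    # Generator objective: the standard adversarial term plus a weighted
    # penalty pushing generated samples toward the invariance set.
    return adv_loss + lam * invariance_penalty(samples, residual_fn)

# Toy invariance for illustration: each sample's coordinates sum to zero.
residual = lambda x: sum(x)
samples = [[1.0, -1.0], [0.5, 0.0]]
loss = generator_loss(0.3, samples, residual, lam=2.0)  # 0.3 + 2 * 0.125
```

In an actual GAN, `samples` would be generator outputs and the penalty would be backpropagated alongside the discriminator's adversarial signal, so the generator is driven toward both the data distribution and the invariance set.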

Comments

This is a pre-print of the article Shah, Viraj, Ameya Joshi, Sambuddha Ghosal, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, and Chinmay Hegde. "Encoding Invariances in Deep Generative Models." arXiv preprint arXiv:1906.01626 (2019). Posted with permission.

Copyright Owner
The Authors
Language
en
File Format
application/pdf
Citation Information
Viraj Shah, Ameya Joshi, Sambuddha Ghosal, Balaji Pokuri, et al. "Encoding Invariances in Deep Generative Models." arXiv (2019).
Available at: http://works.bepress.com/baskar-ganapathysubramanian/87/