Visual Entailment Task for Visually-Grounded Language Learning
Computer Science and Engineering Faculty Publications
  • Ning Xie, Wright State University - Main Campus
  • Farley Lai
  • Derek Doran, Wright State University - Main Campus
  • Asim Kadav
Document Type
Article
Publication Date
1-1-2019
Abstract

We introduce a new inference task - Visual Entailment (VE) - which differs from traditional Textual Entailment (TE) in that the premise is an image rather than a natural language sentence. A novel dataset, SNLI-VE (publicly available at https://github.com/necla-ml/SNLI-VE), is proposed for VE tasks, built from the Stanford Natural Language Inference corpus and Flickr30k. We introduce a differentiable architecture called the Explainable Visual Entailment model (EVE) to tackle the VE problem. EVE and several other state-of-the-art visual question answering (VQA) based models are evaluated on the SNLI-VE dataset, facilitating grounded language understanding and providing insights into how modern VQA-based models perform.
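The task the abstract describes pairs an image premise with a sentence hypothesis under a three-way label, as in textual entailment. A minimal sketch of one such example record is below; the field names (`image_id`, `hypothesis`, `gold_label`) are illustrative assumptions, not the official SNLI-VE schema.

```python
from dataclasses import dataclass

# The three relation classes inherited from textual entailment.
LABELS = {"entailment", "neutral", "contradiction"}

@dataclass
class VEExample:
    """One Visual Entailment example (illustrative structure, not the
    official SNLI-VE file format)."""
    image_id: str      # premise: identifier of a Flickr30k image
    hypothesis: str    # natural-language hypothesis sentence
    gold_label: str    # one of LABELS

    def __post_init__(self):
        if self.gold_label not in LABELS:
            raise ValueError(f"unknown label: {self.gold_label}")

# A hypothetical example: the model must judge the hypothesis against
# the image content, not against another sentence.
ex = VEExample("1234567", "Two dogs are playing in the snow.", "entailment")
print(ex.gold_label)  # entailment
```

The key structural difference from SNLI is that `image_id` replaces the premise sentence, so a VE model must ground the hypothesis in visual content.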

Citation Information
Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. "Visual Entailment Task for Visually-Grounded Language Learning" (2019).
Available at: http://works.bepress.com/derek_doran/65/