Combining input modalities beyond text is a key challenge for natural language processing. Previous work has been inconclusive about the true utility of images as a supplementary information source for text classification tasks, motivating this large-scale human study of labeling performance given text only, images only, or both text and images. To this end, we create a new dataset accompanied by a novel annotation method, Japanese Entity Labeling with Dynamic Annotation, to deepen our understanding of the effectiveness of images for multi-modal text classification. Through careful comparative analysis of human performance and that of state-of-the-art multi-modal text classification models, we gain valuable insights into the differences between human and model performance and the conditions under which images are beneficial for text classification.
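The abstract does not specify how the multi-modal classification models are built, so the following is a minimal sketch of one common approach, late fusion of precomputed text and image embeddings, under stated assumptions; it is not the paper's method. All names here (`LateFusionClassifier`, `text_dim`, `image_dim`, `hidden`) are hypothetical illustration choices.

```python
import torch
import torch.nn as nn


class LateFusionClassifier(nn.Module):
    """Illustrative late-fusion classifier (hypothetical, not the paper's
    architecture): concatenates precomputed text and image embeddings and
    maps the fused vector to class logits."""

    def __init__(self, text_dim: int, image_dim: int, num_classes: int, hidden: int = 256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, text_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
        # Late fusion: join the two modalities along the feature axis,
        # then classify the combined representation.
        return self.fuse(torch.cat([text_emb, image_emb], dim=-1))


# Toy usage: random tensors stand in for encoder outputs
# (e.g., 768-d text embeddings and 512-d image embeddings).
model = LateFusionClassifier(text_dim=768, image_dim=512, num_classes=5)
logits = model(torch.randn(4, 768), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 5])
```

Under this scheme, the text-only and image-only labeling conditions studied in the paper correspond to zeroing out or dropping one of the two input embeddings before fusion.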
- datasets
- multi-modality
- natural language processing
- neural networks
- text classification