Can You Answer This? - Exploring Zero-Shot QA Generalization Capabilities in Large Language Models
Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
  • Saptarshi Sengupta, Pennsylvania State University
  • Shreya Ghosh, Pennsylvania State University
  • Preslav Nakov, Mohamed Bin Zayed University of Artificial Intelligence
  • Prasenjit Mitra, Pennsylvania State University
Document Type
Conference Proceeding
Abstract

The buzz around Transformer-based Language Models (TLMs) such as BERT and RoBERTa is well-founded, owing to their impressive results on a wide array of tasks. However, when applied to areas requiring specialized knowledge (closed domains), such as medicine or finance, their performance drops sharply, sometimes falling below that of their older recurrent/convolutional counterparts. In this paper, we explore the zero-shot capabilities of large language models for extractive Question Answering. Our objective is to examine the change in performance under domain drift, i.e., when the target-domain data differs vastly in semantic and statistical properties from the source domain, in an attempt to explain the resulting behavior. To this end, we present two studies, with further experiments planned. Our findings point to flaws in the current generation of TLMs that limit their performance on closed-domain tasks.
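To make the evaluation setup concrete, below is a minimal sketch of the kind of zero-shot extractive QA experiment the abstract describes: a model fine-tuned on open-domain data (the source domain) is applied, without further training, to a closed-domain input. The Hugging Face `transformers` pipeline, the SQuAD checkpoint, and the medical example are illustrative assumptions, not the paper's exact models or data.

```python
# Minimal sketch: zero-shot extractive QA under domain drift.
# Assumes the Hugging Face `transformers` library; the checkpoint and the
# medical example below are illustrative, not the paper's exact setup.
from transformers import pipeline

# Extractive QA model fine-tuned on SQuAD (source domain: Wikipedia text).
qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

# Target-domain (closed-domain, here medical) input the model was never
# fine-tuned on; zero-shot means we evaluate with no domain adaptation.
context = (
    "Metformin is a first-line medication for the treatment of type 2 "
    "diabetes, particularly in people who are overweight."
)
question = "What is metformin used to treat?"

result = qa(question=question, context=context)
print(result["answer"], result["score"])  # extracted answer span + confidence
```

Comparing the extracted spans and confidence scores on source-domain versus closed-domain inputs is one simple way to quantify the performance degradation the abstract attributes to domain drift.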

DOI
10.1609/aaai.v37i13.27019
Publication Date
6-27-2023
Keywords
  • Natural Language Processing
  • Zero-Shot Learning
  • Extractive Question Answering
Comments

Copyright by AAAI

IR conditions are described on the AAAI Open Journal System About page

Archived thanks to AAAI

Uploaded 28 November 2023

Citation Information
S. Sengupta, S. Ghosh, P. Nakov, and P. Mitra, "Can You Answer This? – Exploring Zero-Shot QA Generalization Capabilities in Large Language Models (Student Abstract)", AAAI, vol. 37, no. 13, pp. 16318–16319, Sep. 2023. doi:10.1609/aaai.v37i13.27019