Article
GNN is a Counter? Revisiting GNN for Question Answering
arXiv
  • Kuan Wang, Georgia Institute of Technology
  • Yuyu Zhang, Georgia Institute of Technology
  • Diyi Yang, Georgia Institute of Technology
  • Le Song, BioMap & Mohamed bin Zayed University of Artificial Intelligence
  • Tao Qin, Microsoft Research Asia
Document Type
Article
Abstract

Question Answering (QA) has been a long-standing research topic in AI and NLP, and a wealth of studies have attempted to equip QA systems with human-level reasoning capability. To approximate the complicated human reasoning process, state-of-the-art QA systems commonly use pre-trained language models (LMs) to access knowledge encoded in LMs, together with elaborately designed modules based on Graph Neural Networks (GNNs) to perform reasoning over knowledge graphs (KGs). However, many questions remain open regarding the reasoning functionality of these GNN-based modules. Can these GNN-based modules really perform a complex reasoning process? Are they under- or over-complicated for QA? To open the black box of GNNs and investigate these questions, we dissect state-of-the-art GNN modules for QA and analyze their reasoning capability. We discover that even a very simple graph neural counter can outperform all the existing GNN modules on CommonsenseQA and OpenBookQA, two popular QA benchmark datasets which heavily rely on knowledge-aware reasoning. Our work reveals that existing knowledge-aware GNN modules may only carry out some simple reasoning such as counting. It remains a challenging open problem to build comprehensive reasoning modules for knowledge-powered QA. Copyright © 2021, The Authors. All rights reserved.
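To make the "graph neural counter" idea concrete, here is a hedged, minimal sketch of the intuition (not the authors' exact architecture): instead of multi-hop message passing, each answer choice's retrieved KG subgraph is scored by simply summing tiny per-edge embeddings, which amounts to a soft count of how many edges of each relation type connect question and answer concepts. All names, dimensions, and the toy data below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a graph neural "counter" for multiple-choice QA.
# Parameters are randomly initialized here; in practice they would be learned.
rng = np.random.default_rng(0)

NUM_RELATIONS = 4   # relation types in the toy KG (assumed)
EMB_DIM = 2         # very small dimensions, in the spirit of a simple counter

rel_emb = rng.normal(size=(NUM_RELATIONS, EMB_DIM))  # one tiny embedding per relation
w = rng.normal(size=EMB_DIM)                          # linear scoring weights
b = 0.0

def counter_score(edge_relations):
    """Score one answer choice from the relation types of its subgraph edges.

    Summing one small embedding per edge is essentially a soft count of
    edges per relation type -- no multi-hop message passing is involved.
    """
    pooled = rel_emb[edge_relations].sum(axis=0)  # soft relation counts
    return float(pooled @ w + b)

# Toy subgraphs for two answer choices: lists of edge relation-type ids.
choice_a_edges = [0, 0, 1, 2]  # densely connected to the question concepts
choice_b_edges = [3]

scores = [counter_score(e) for e in (choice_a_edges, choice_b_edges)]
best = int(np.argmax(scores))
print("scores:", scores, "predicted choice:", best)
```

The point of the sketch is that a sum-then-score aggregator already captures edge-count statistics of the subgraph, which the paper argues is roughly what existing GNN QA modules end up exploiting.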

DOI
10.48550/arXiv.2110.03192
Publication Date
10-7-2021
Comments

Preprint: arXiv

Citation Information
K. Wang, Y. Zhang, D. Yang, L. Song, and T. Qin, "GNN is a counter? Revisiting GNN for question answering," 2021, arXiv:2110.03192