Distinguishing between fake news and satire with transformers
Expert Systems with Applications
  • Jwen Fai Low, McGill University
  • Benjamin C.M. Fung, McGill University
  • Farkhund Iqbal, Zayed University
  • Shih Chia Huang, National Taipei University of Technology
Document Type
Article
Publication Date
1-1-2022
Abstract

Indiscriminate elimination of harmful fake news risks destroying satirical news, which can be benign or even beneficial, because the two types of news share highly similar textual cues. In this work, we applied a recent development in neural network architecture, transformers, to the task of separating satirical news from fake news; transformers had not previously been applied to this specific problem. Evaluation on a publicly available and carefully curated dataset shows that a classifier framework built around a DistilBERT architecture outperformed existing machine-learning approaches. Additional improvement over baseline DistilBERT was achieved through non-standard tokenization schemes and varied pre-training and text pre-processing strategies. The improvement over existing approaches stands at 0.0429 (5.2%) in F1 and 0.0522 (6.4%) in accuracy. Further evaluation on two additional datasets shows the framework's ability to generalize across datasets without diminished performance.
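The classifier framework described above can be sketched as follows. This is a minimal, illustrative example assuming the Hugging Face `transformers` library and PyTorch; the label scheme, sequence length, and configuration here are assumptions for illustration, not the paper's exact setup (which also involves custom tokenization and pre-training strategies not reproduced here).

```python
import torch
from transformers import DistilBertConfig, DistilBertForSequenceClassification

# Binary classification head on top of DistilBERT:
# label 0 = fake news, label 1 = satire (assumed labeling).
config = DistilBertConfig(num_labels=2)
model = DistilBertForSequenceClassification(config)
model.eval()

# In practice one would start from pre-trained weights, e.g.:
# model = DistilBertForSequenceClassification.from_pretrained(
#     "distilbert-base-uncased", num_labels=2)

# Dummy batch of token ids standing in for two tokenized articles.
input_ids = torch.randint(0, config.vocab_size, (2, 16))
attention_mask = torch.ones_like(input_ids)

with torch.no_grad():
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits

preds = logits.argmax(dim=-1)  # one predicted label per article
```

Fine-tuning such a model on a labeled fake-news/satire corpus follows the standard sequence-classification recipe: cross-entropy loss on the logits against the gold labels.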

Publisher
Elsevier BV
Keywords
  • BERT,
  • Classification,
  • Deep learning,
  • DistilBERT,
  • Fake news,
  • Sarcasm,
  • Satire,
  • Transformers
Scopus ID
85114793463
Indexed in Scopus
Yes
Open Access
No
https://doi.org/10.1016/j.eswa.2021.115824
Citation Information
Jwen Fai Low, Benjamin C.M. Fung, Farkhund Iqbal and Shih Chia Huang. "Distinguishing between fake news and satire with transformers" Expert Systems with Applications Vol. 187 (2022) ISSN: 0957-4174
Available at: http://works.bepress.com/farkhund-iqbal/184/