Unsupervised Lexical Substitution with Decontextualised Embeddings
arXiv
  • Takashi Wada, School of Computing and Information Systems, The University of Melbourne, Australia & RIKEN Center for Advanced Intelligence Project (AIP), Japan
  • Timothy Baldwin, School of Computing and Information Systems, The University of Melbourne, Australia & Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates
  • Yuji Matsumoto, RIKEN Center for Advanced Intelligence Project (AIP), Japan
  • Jey Han Lau, School of Computing and Information Systems, The University of Melbourne, Australia
Document Type
Article
Abstract

We propose a new unsupervised method for lexical substitution using pre-trained language models. Compared to previous approaches that use the generative capability of language models to predict substitutes, our method retrieves substitutes based on the similarity of contextualised and decontextualised word embeddings, i.e. the average contextual representation of a word in multiple contexts. We conduct experiments in English and Italian, and show that our method substantially outperforms strong baselines and establishes a new state-of-the-art without any explicit supervision or fine-tuning. We further show that our method performs particularly well at predicting low-frequency substitutes, and also generates a diverse list of substitute candidates, reducing morphophonetic or morphosyntactic biases induced by article-noun agreement.
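
The retrieval idea in the abstract can be sketched as follows: score each candidate by the cosine similarity between the target word's contextualised embedding and the candidate's decontextualised embedding, i.e. its contextual representation averaged over several example sentences. The Python sketch below is a minimal illustration of that idea, not the authors' implementation; the model choice (bert-base-uncased), the candidate list, and the hand-picked context sentences are all assumptions standing in for the paper's actual setup.

# Minimal sketch (assumptions throughout): rank substitute candidates by the
# cosine similarity between the target's contextualised embedding and each
# candidate's decontextualised embedding, i.e. its contextual representation
# averaged over multiple example sentences.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Mean of the hidden states of the subword tokens that make up `word`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]        # (seq_len, dim)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(word_ids) + 1):         # locate the word's subwords
        if ids[i : i + len(word_ids)] == word_ids:
            return hidden[i : i + len(word_ids)].mean(dim=0)
    raise ValueError(f"{word!r} not found in {sentence!r}")

def decontextualised(word: str, contexts: list[str]) -> torch.Tensor:
    """Average the word's contextual embedding over multiple contexts."""
    return torch.stack([word_embedding(c, word) for c in contexts]).mean(dim=0)

# Toy usage: substitute "bright" in context; the candidates and their example
# sentences are hypothetical stand-ins for a real candidate set and corpus.
target_vec = word_embedding("She is a bright student.", "bright")
candidates = {
    "intelligent": ["an intelligent answer", "she is intelligent"],
    "shiny": ["a shiny coin", "the surface looks shiny"],
}
scores = {
    w: torch.cosine_similarity(target_vec, decontextualised(w, ctxs), dim=0).item()
    for w, ctxs in candidates.items()
}
print(sorted(scores.items(), key=lambda kv: -kv[1]))      # best substitute first

Under these assumptions, "intelligent" should outrank "shiny" for this context, since averaging over multiple contexts yields a sense-neutral candidate representation that is compared against the sense evoked by the target's specific context.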

DOI
10.48550/arXiv.2209.08236
Publication Date
17 September 2022
Keywords
  • Computational linguistics
Comments

Preprint: arXiv

Archived with thanks to arXiv

Preprint License: CC BY 4.0

Uploaded 12 October 2022

Citation Information
T. Wada, T. Baldwin, Y. Matsumoto, and J. H. Lau, "Unsupervised Lexical Substitution with Decontextualised Embeddings", 2022, doi:10.48550/arXiv.2209.08236