Progressive generation of long text with pretrained language models
arXiv
  • Bowen Tan, Carnegie Mellon University, United States
  • Zichao Yang, Carnegie Mellon University, United States
  • Maruan Al-Shedivat, Carnegie Mellon University, United States
  • Eric P. Xing, Carnegie Mellon University, United States & Petuum Inc. & Mohamed bin Zayed University of Artificial Intelligence
  • Zhiting Hu, Carnegie Mellon University, United States & UC San Diego, United States
Document Type
Article
Abstract

Large-scale language models (LMs) pretrained on massive corpora of text, such as GPT-2, are powerful open-domain text generators. However, as our systematic examination reveals, it is still challenging for such models to generate coherent long passages of text (e.g., 1000 tokens), especially when the models are fine-tuned to the target domain on a small corpus. Previous planning-then-generation methods also fall short of producing such long text in various domains. To overcome these limitations, we propose a simple but effective method of generating text in a progressive manner, inspired by generating images from low to high resolution. Our method first produces domain-specific content keywords and then progressively refines them into complete passages in multiple stages. The simple design allows our approach to take advantage of pretrained LMs at each stage and effectively adapt to any target domain given only a small set of examples. We conduct a comprehensive empirical study with a broad set of evaluation metrics, and show that our approach significantly improves upon the fine-tuned large LMs and various planning-then-generation methods in terms of quality and sample efficiency. Human evaluation also validates that our model generations are more coherent. Copyright © 2020, The Authors. All rights reserved.
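The multi-stage pipeline the abstract describes can be illustrated with a minimal sketch. This is not the authors' released implementation: the paper fine-tunes a pretrained LM per stage on domain data, whereas the sketch below stands in a single off-the-shelf GPT-2 from Hugging Face `transformers` for every stage, and the `refine` helper, the `SEP` separator, and the hard-coded keyword draft are hypothetical placeholders for the coarse-to-fine loop.

```python
# Minimal sketch of progressive (coarse-to-fine) text generation.
# Assumption: Hugging Face `transformers` is installed; one off-the-shelf
# GPT-2 plays the role of every stage (the paper uses a fine-tuned model
# per stage, trained on the target domain).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

SEP = " => "  # hypothetical separator between a stage's input and its output

def refine(draft: str, max_new_tokens: int) -> str:
    """One refinement stage: condition on the coarser draft and sample a
    longer, more detailed sequence."""
    inputs = tokenizer(draft + SEP, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.95,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Keep only the newly generated continuation, not the echoed prompt.
    new_ids = output_ids[0, inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_ids, skip_special_tokens=True)

# Stage 0: domain-specific content keywords. In the paper these come from a
# first-stage model trained on the most informative words of the corpus;
# they are hard-coded here purely for illustration.
draft = "spacecraft launch orbit mission engineers"

# Progressively expand the draft over multiple stages, growing the token
# budget each time until a full passage emerges.
for budget in (40, 120, 400):
    draft = refine(draft, max_new_tokens=budget)

print(draft)
```

Because each stage only has to add one level of detail, a fine-tuned variant of each stage's model can, per the abstract, adapt to a new domain from a small set of examples.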

DOI
10.48550/arXiv.2006.15720
Publication Date
6-26-2020
Keywords
  • Computation and Language (cs.CL)
  • Machine Learning (cs.LG)
Comments

IR deposit conditions: not described

Preprint: arXiv

Citation Information
B. Tan, Z. Yang, M. Al-Shedivat, E. P. Xing, and Z. Hu, "Progressive generation of long text with pretrained language models", arXiv, Jun. 2020, doi: 10.48550/arXiv.2006.15720