RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
arXiv
  • Mingkai Deng, Carnegie Mellon University, United States
  • Jianyu Wang, UC San Diego, United States
  • Cheng-Ping Hsieh, UC San Diego, United States
  • Yihan Wang, UC San Diego, United States
  • Han Guo, Carnegie Mellon University, United States
  • Tianmin Shu, MIT, United States
  • Meng Song, UC San Diego, United States
  • Eric Xing, Carnegie Mellon University & Mohamed bin Zayed University of Artificial Intelligence & Petuum Inc.
  • Zhiting Hu, UC San Diego, United States
Document Type
Article
Abstract

Prompting has shown impressive success in enabling large pretrained language models (LMs) to perform diverse NLP tasks, especially when only a few downstream examples are available. Automatically finding the optimal prompt for each task, however, is challenging. Most existing work resorts to tuning soft prompts (e.g., embeddings), which fall short in interpretability, reusability across LMs, and applicability when gradients are not accessible. Discrete prompts, on the other hand, are difficult to optimize and are often created by "enumeration (e.g., paraphrasing)-then-selection" heuristics that do not explore the prompt space systematically. This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL). RLPrompt formulates a parameter-efficient policy network that generates the desired discrete prompt after training with reward. To overcome the complexity and stochasticity of the reward signals computed by the large LM environment, we incorporate effective reward stabilization that substantially enhances training efficiency. RLPrompt is flexibly applicable to different types of LMs, such as masked (e.g., BERT) and left-to-right models (e.g., GPTs), for both classification and generation tasks. Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing finetuning or prompting methods. Interestingly, the resulting optimized prompts are often ungrammatical gibberish text; and surprisingly, those gibberish prompts are transferable between different LMs and retain significant performance, indicating that LM prompting may not follow human language patterns. Copyright © 2022, The Authors. All rights reserved.
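
To make the abstract's description concrete, here is a minimal, hypothetical sketch of RL-based discrete prompt optimization in the same spirit: a small policy network samples prompt tokens, a downstream task returns a reward, and the policy is updated with a REINFORCE-style objective plus a simple z-score reward normalization standing in for reward stabilization. The vocabulary size, GRU policy, stubbed downstream_reward function, and hyperparameters are illustrative assumptions, not the authors' actual architecture or implementation.

```python
# Hypothetical sketch of discrete prompt optimization with RL (not the RLPrompt codebase).
import torch
import torch.nn as nn

VOCAB_SIZE = 50       # placeholder prompt vocabulary (the real method uses an LM vocabulary)
PROMPT_LEN = 5        # number of discrete prompt tokens to generate
HIDDEN = 64

class PromptPolicy(nn.Module):
    """Tiny autoregressive policy that emits a distribution over prompt tokens."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE + 1, HIDDEN)  # +1 for a BOS token
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB_SIZE)

    def sample(self):
        """Sample a prompt token-by-token; return token ids and the summed log-probability."""
        tokens, log_probs = [], []
        inp = torch.tensor([[VOCAB_SIZE]])  # BOS id
        h = None
        for _ in range(PROMPT_LEN):
            out, h = self.rnn(self.embed(inp), h)
            dist = torch.distributions.Categorical(logits=self.head(out[:, -1]))
            tok = dist.sample()
            tokens.append(tok.item())
            log_probs.append(dist.log_prob(tok))
            inp = tok.unsqueeze(0)
        return tokens, torch.stack(log_probs).sum()

def downstream_reward(prompt_tokens):
    """Stub reward: stands in for querying a frozen LM with the sampled prompt
    and scoring its downstream predictions (e.g., few-shot accuracy)."""
    return float(sum(prompt_tokens)) / (VOCAB_SIZE * PROMPT_LEN)

policy = PromptPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
reward_history = []

for step in range(200):
    tokens, log_prob = policy.sample()
    r = downstream_reward(tokens)
    reward_history.append(r)
    # Illustrative reward stabilization: z-score against recent rewards.
    hist = torch.tensor(reward_history[-50:])
    advantage = (r - hist.mean()) / (hist.std(unbiased=False) + 1e-6)
    loss = -advantage.detach() * log_prob  # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this toy setup the policy simply learns to favor high-reward token ids; the paper's point is that when the reward comes from a frozen LM on a real task, the same loop discovers discrete prompts without needing gradients through the LM.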

DOI
10.48550/arXiv.2205.12548
Publication Date
5-25-2022
Keywords
  • Optimization
  • Reinforcement learning
  • Text processing
  • Down-stream
  • Embeddings
  • Interpretability
  • Language model
  • Modeling environments
  • Optimization approach
  • Performance
  • Policy networks
  • Reinforcement learnings
  • Stochasticity
Comments

IR deposit conditions: not described

Preprint available on arXiv

Citation Information
M. Deng et al., "RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning," arXiv preprint arXiv:2205.12548, 2022.