Learning to Generate Better Than Your LLM
Reinforcement learning (RL) has emerged as a powerful paradigm for fine-tuning Large Language Models (LLMs) for conditional text generation. In particular, recent LLMs such as ChatGPT and GPT-4 can engage in fluent conversations with users by incorporating RL and feedback from humans. Inspired by learning-to-search algorithms and capitalizing on key properties of text generation, we seek to investigate reinforcement learning algorithms beyond general-purpose algorithms such as Proximal Policy Optimization (PPO). In particular, we extend RL algorithms to allow them to interact with a dynamic black-box guide LLM such as GPT-3 and propose RL with guided feedback (RLGF), a suite of RL algorithms for LLM fine-tuning. We experiment on the IMDB positive-review and CommonGen text generation tasks from the GRUE benchmark. We show that our RL algorithms achieve higher performance than supervised learning (SL) and default PPO baselines, demonstrating the benefit of interaction with the guide LLM. On CommonGen, we not only outperform our SL baselines but also improve beyond PPO across a variety of lexical and semantic metrics beyond the one we optimized for. Notably, on the IMDB dataset, we show that our GPT-2-based policy outperforms the zero-shot GPT-3 oracle, indicating that our algorithms can learn from a powerful, black-box GPT-3 oracle with a simpler, cheaper, and publicly available GPT-2 model while gaining performance.
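To make the abstract's central idea concrete, below is a minimal illustrative sketch, not the authors' implementation, of the kind of data-collection step the abstract describes: a trainable student policy is sometimes rolled out with a black-box guide LLM, and the resulting completions are scored by a task reward that would drive a PPO-style update. All names (student_generate, guide_generate, reward_fn, rlgf_step, mix_prob), the toy reward, and the mixing scheme are hypothetical placeholders assumed for illustration only.

```python
# Illustrative sketch only: the general shape of "RL with a guide LLM" as
# summarized in the abstract. Placeholder stubs stand in for the real
# student policy (e.g. GPT-2), guide LLM (e.g. GPT-3), and task reward
# (e.g. an IMDB sentiment score); none of this is the paper's actual API.
import random


def student_generate(prompt: str) -> str:
    # Stand-in for sampling a completion from the trainable student policy.
    return prompt + " " + random.choice(["good", "bad", "fine"])


def guide_generate(prompt: str) -> str:
    # Stand-in for querying the black-box guide LLM for a completion.
    return prompt + " good"


def reward_fn(completion: str) -> float:
    # Stand-in for the task reward used to score a completion.
    return 1.0 if "good" in completion else 0.0


def rlgf_step(prompt: str, mix_prob: float = 0.5) -> tuple[str, float]:
    """One conceptual data-collection step: with probability mix_prob, roll out
    with the guide LLM instead of the student, then score the completion.
    The scored completions would feed a policy-gradient (e.g. PPO-style) update."""
    if random.random() < mix_prob:
        completion = guide_generate(prompt)
    else:
        completion = student_generate(prompt)
    return completion, reward_fn(completion)


if __name__ == "__main__":
    for prompt in ["The movie was", "Overall it felt"]:
        text, reward = rlgf_step(prompt)
        print(f"{text!r} -> reward {reward}")
```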
- Published in: arXiv
- Type: Article
- Authors: Chang, Jonathan D.; Brantley, Kiante; Ramamurthy, Rajkumar; Misra, Dipendra; Sun, Wen
- Year: 2023
Citation information
Chang, Jonathan D.; Brantley, Kiante; Ramamurthy, Rajkumar; Misra, Dipendra; Sun, Wen: Learning to Generate Better Than Your LLM, arXiv, 2023, https://arxiv.org/abs/2306.11816.
@Article{Chang.etal.2023a,
author={Chang, Jonathan D. and Brantley, Kiante and Ramamurthy, Rajkumar and Misra, Dipendra and Sun, Wen},
title={Learning to Generate Better Than Your LLM},
journal={arXiv},
url={https://arxiv.org/abs/2306.11816},
year={2023},
abstract={Reinforcement learning (RL) has emerged as a powerful paradigm for fine-tuning Large Language Models (LLMs) for conditional text generation. In particular, recent LLMs such as ChatGPT and GPT-4 can engage in fluent conversations with users by incorporating RL and feedback from humans. Inspired by learning-to-search algorithms and capitalizing on key properties of text generation, we seek to...}}