Toward large reasoning models: A survey of reinforced reasoning with large language models


Abstract

Language has long been an essential tool for human reasoning. The rise of large language models (LLMs) has spurred research into their application to complex reasoning tasks. Researchers are exploring the concept of a "thought," an intermediate reasoning step that allows LLMs to emulate human-like reasoning processes. Recent work has applied reinforcement learning (RL) to train LLMs by searching for high-quality reasoning trajectories through trial-and-error exploration. In parallel, studies demonstrate that allowing LLMs to "think" with longer chains of intermediate tokens at test time can substantially improve reasoning accuracy. Together, these training-time and test-time advances outline a path toward large reasoning models. This survey reviews recent progress in LLM reasoning: it covers the foundational concepts behind LLMs and the key technical components that contribute to the development of large reasoning models, and it highlights popular open-source projects for building such models. It concludes by discussing open challenges and future research directions in this field.
