
[Paper] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

RWYQ阿伟 · 2025-11-12 · Papers

Abstract

        We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally emerges with numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama.
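        For readers who want to try the open-sourced distilled checkpoints mentioned in the abstract, below is a minimal sketch that loads one of them with the Hugging Face transformers library. The model id deepseek-ai/DeepSeek-R1-Distill-Qwen-7B, the prompt, and the generation settings are assumptions for illustration, not usage instructions from the paper; consult the official DeepSeek-R1 release for recommended settings.

```python
# Minimal sketch (assumptions noted inline): querying one of the distilled
# DeepSeek-R1 checkpoints with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bfloat16 support
    device_map="auto",
)

# Reasoning models are typically given a plain question and a generous token
# budget so the chain of thought is not truncated.
messages = [{"role": "user", "content": "What is 17 * 24? Think step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```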

Keywords

        Paper; AI; DeepSeek

Author

        DeepSeek-AI

Date

        Unknown

Language

        English

Format

        PDF

Size

        1.26 MB

Pages

        22


Download

        百度网盘 (Baidu Netdisk)    夸克网盘 (Quark Netdisk)

Extraction password

        www.awnotebook.com

Disclaimer

        Some of the images, resources, books, software, and other content on this site come from the internet. The materials provided here are for learning purposes only; no one may use them for any other purpose or redistribute them, otherwise they must bear legal responsibility to the actual rights holders themselves. Because some materials come from other media, if a missing or incorrect source attribution infringes your rights, please notify me and I will handle it immediately. Please support official releases.



Copyright notice: This post was published by 阿伟的笔记本. Please credit the source when reposting.

Permalink: https://www.awnotebook.com/post/789.html

Tags: Paper, AI

