arxiv:2407.00782

Step-Controlled DPO: Leveraging Stepwise Error for Enhanced Mathematical Reasoning

Published on Jun 30
Submitted by AJZhou on Jul 2

Abstract

Direct Preference Optimization (DPO) has proven effective at improving the performance of large language models (LLMs) on downstream tasks such as reasoning and alignment. In this work, we propose Step-Controlled DPO (SCDPO), a method for automatically providing stepwise error supervision by creating negative samples of mathematical reasoning rationales that start making errors at a specified step. By applying these samples in DPO training, SCDPO can better align the model to understand reasoning errors and output accurate reasoning steps. We apply SCDPO to both code-integrated and chain-of-thought solutions, empirically showing that it consistently improves performance compared to naive DPO on three different SFT models, including one existing SFT model and two models we finetuned. Qualitative analysis of the credit assignment of SCDPO and DPO demonstrates the effectiveness of SCDPO at identifying errors in mathematical solutions. We then apply SCDPO to an InternLM2-20B model, resulting in a 20B model that achieves high scores of 88.5% on GSM8K and 58.1% on MATH, rivaling all other open-source LLMs and showing the great potential of our method.
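
To make the abstract's description more concrete, here is a minimal sketch of how stepwise-error negatives could feed a standard DPO objective. It assumes the negatives are produced by keeping the first k steps of a correct rationale and resampling the continuation at a higher temperature until the final answer is wrong; the helper names (`sample_continuation`, `is_correct`, `negative_from_step`) and the exact generation procedure are illustrative assumptions, not the authors' released implementation.

```python
# A minimal, hypothetical sketch -- not the authors' implementation. It assumes
# (i) negatives are built by keeping the first k correct steps and resampling the
# continuation at a higher temperature until the final answer is wrong, and
# (ii) training then uses the standard DPO objective on (correct, erroneous) pairs.
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over summed sequence log-probabilities (tensors)."""
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    return -F.logsigmoid(logits).mean()


def negative_from_step(question, correct_steps, k, sample_continuation,
                       is_correct, high_temperature=1.1, max_tries=8):
    """Produce a rationale whose first error appears at or after step k.

    `sample_continuation` and `is_correct` are hypothetical callables standing
    in for the generator model and the answer checker, respectively.
    """
    prefix = correct_steps[:k]
    for _ in range(max_tries):
        continuation = sample_continuation(question, prefix,
                                           temperature=high_temperature)
        candidate = prefix + continuation
        if not is_correct(question, candidate):
            return candidate  # rejected sample with a controlled error location
    return None  # no erroneous continuation found from this step


# Toy usage of the loss with scalar "sequence log-probs".
loss = dpo_loss(torch.tensor([-12.3]), torch.tensor([-20.1]),
                torch.tensor([-13.0]), torch.tensor([-19.5]))
```

Pairing each erroneous rationale with its fully correct counterpart yields preference pairs whose divergence point is known, which is what would let DPO training supervise reasoning errors at the step level as the abstract describes.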

Community



Hi! Your paper is interesting. I noticed that another paper named Step-DPO was released a few days ago. I wonder whether your work and that one were inspired by the same ideas, whether there has been any interaction between your teams, or whether the similarities are just a coincidence.

Paper author

Thank you for your interest in our work! There has been no interaction between our team and the other paper's team. Our work was finished in April 2024, a few months before the release of the other paper.

