
MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning


Ke Wang1∗  Houxing Ren1∗  Aojun Zhou1∗  Zimu Lu1∗  Sichun Luo3  Weikang Shi1∗  Renrui Zhang1  Linqi Song3  Mingjie Zhan1†  Hongsheng Li1,2

1Multimedia Laboratory (MMLab), The Chinese University of Hong Kong
2Shanghai Artificial Intelligence Laboratory    3City University of Hong Kong

{wangk.gm, renhouxing, aojunzhou, zmjdll}@gmail.com    hsli@ee.cuhk.edu.hk

Abstract

The recently released GPT-4 Code Interpreter has demonstrated remarkable proficiency in solving challenging math problems, primarily attributed to its ability to seamlessly reason with natural language, generate code, execute code, and continue reasoning based on the execution output. In this paper, we present a method to fine-tune open-source language models, enabling them to use code for modeling and deriving math equations and, consequently, enhancing their mathematical reasoning abilities. We propose a method of generating novel and high-quality datasets with math problems and their code-based solutions, referred to as MathCodeInstruct. Each solution interleaves natural language, code, and execution results. We also introduce a customized supervised fine-tuning and inference approach. This approach yields the MathCoder models, a family of models capable of generating code-based solutions for solving challenging math problems. Impressively, the MathCoder models achieve state-of-the-art scores among open-source LLMs on the MATH (45.2%) and GSM8K (83.9%) datasets, substantially outperforming other open-source alternatives. Notably, the MathCoder model not only surpasses ChatGPT-3.5 and PaLM-2 on GSM8K and MATH but also outperforms GPT-4 on the competition-level MATH dataset. The dataset and models will be released at https://github.com/mathllm/MathCoder.

1        INTRODUCTION

Recently, closed-source large language models (LLMs) such as GPT-4 (OpenAI, 2023) and PaLM-2 (Anil et al., 2023), paired with methods such as Chain-of-Thought (CoT) (Wei et al., 2022) and Program-Aided Language models (PAL) (Gao et al., 2023), have shown remarkable performance on mathematical reasoning tasks. In contrast, current open-source LLMs (Touvron et al., 2023; Penedo et al., 2023; Zhang et al., 2022) still lag significantly behind in this area. Even Llama-2-70B (Touvron et al., 2023), one of the most potent open-source models, only scores 56.8% and 13.5% respectively on the GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021) datasets, remarkably lower than GPT-4 Code Interpreter, which scores 97% and 69.7% (Zhou et al., 2023a).

To narrow the gap between open-source and closed-source models in math problem solving, recent works such as WizardMath (Luo et al., 2023) and RFT (Yuan et al., 2023) have tuned open-source models on math problems with CoT solutions, achieving significant gains over their base model, Llama-2. On the other hand, methods such as PAL (Gao et al., 2023), PoT (Chen et al., 2022), and CSV (Zhou et al., 2023a) encourage code usage in solving math problems.

Table 1: Comparison with different instruction-following datasets. The baseline datasets include the recent RFT-u13b (Yuan et al., 2023) and WizardMath (Luo et al., 2023).

Figure 1: Performance comparison between MathCoder, WizardMath, and Llama-1 RFT models with different model sizes.

These code-based methods show promising improvements when paired with closed-source models like GPT-3.5, GPT-4, and GPT-4 Code Interpreter. In particular, GPT-4 Code Interpreter surpasses the previous SOTA by a clear margin. A recent study (Zhou et al., 2023a) shows that this excellent performance can be attributed to its ability to generate and assess the execution results of a chain of code interlaced with natural language reasoning steps.

However, existing open-source models fail to benefit from this sophisticated mechanism since they lag behind closed-source models in both code generation and natural language reasoning. Therefore, an effective recipe that enables open-source models to solve math problems in a manner similar to GPT-4 Code Interpreter is still missing.

In this paper, leveraging the strengths of GPT-4 Code Interpreter (Zhou et al., 2023a), we introduce a simple yet effective framework, MathCoder, designed to enhance the mathematical reasoning capabilities of open-source models. The framework consists of two parts: (1) math instruction-following dataset construction and (2) customized supervised fine-tuning.

Specifically, the instruction-following dataset, termed MathCodeInstruct, consists of 80k math problems and their corresponding solutions. Each solution is interwoven with natural language for reasoning, code for execution, and execution results. A comparison between MathCodeInstruct and other math instruction-tuning datasets is shown in Tab. 1.

MathCodeInstruct is created in two steps. The first step is collecting GPT-4 Code Interpreter-style solutions for the GSM8K and MATH training sets. GSM8K and MATH are two important datasets of math problems for improving and evaluating models’ mathematical abilities, consisting of grade school math word problems and challenging competition mathematics problems, respectively. Using this data, we trained our initial models, termed MathCoder-Initial. The second step is to generate additional math problems using an innovative prompt named problem interpolation, which asks the LLM to generate questions whose difficulty levels fall between the provided MATH and GSM8K problems. This paradigm generates problems that bridge the gap between the grade-school-level problems in GSM8K and the challenging high-school-level problems in MATH, thus enhancing the dataset’s generalization capability. We use MathCoder-Initial to generate solutions for these new problems. Combining this new data with that from the first step, we fine-tune the base Llama-2 models, reaching a score that outperforms the SOTA by a clear margin on GSM8K and MATH. Concurrently with our work, MAmmoTH (Yue et al., 2023) also creates a dataset consisting of math problems and model-generated solutions. However, their solutions consist of either only code or only natural language reasoning steps, which is notably different from our dataset of GPT-4 Code Interpreter-style solutions.

Regarding the supervised fine-tuning stage, we propose an effective training and inference pipeline to ensure that our fine-tuned model can behave in a manner similar to the GPT-4 Code Interpreter. We use special tokens (<|text|>, <|code|>, <|execution|>) to mark whether a part of the training data is natural language, code, or execution results. With this deliberately created training corpus, the model learns to generate interleaved natural language and code divided by special tokens. During inference, we can use the special tokens to detect code blocks and utilize Jupyter Notebooks for code execution. We append the result of on-the-fly execution to the previous predictions of the model. Then, the model continues to autoregressively predict the next token based on this new version of the input, which includes the execution result at the end. In this way, the model is able to “see” the execution results and continue its reasoning accordingly.

Figure 2: The process of dataset creation and model fine-tuning. (a) First, solutions for problems in the GSM8K and MATH datasets are collected from GPT-4. Then, we fine-tune the CodeLlama-34B model on this data, producing MathCoder-Initial. New problems are created using our novel prompt (detailed examples in Appendix A), and their solutions are generated using MathCoder-Initial. (b) Finally, the new problems and solutions are combined with the existing training data to create the final dataset, which we use to fine-tune the base Llama-2 model, producing our final MathCoder model.

We use MathCodeInstruct to fine-tune popular open-source Llama-2 and CodeLlama (Rozière et al., 2023) models, creating a family of models named MathCoder. Experimental results show that the models with our proposed dataset and training framework achieve significant improvement on various mathematical reasoning benchmarks, as depicted in Fig. 1.

This paper’s main contributions can be summarized in three key aspects:

  • To the best of our knowledge, this is the first systematic study that explicitly integrates natural language reasoning, code generation, and feedback from execution results into open-source pre-trained large language models, aiming at enhancing their mathematical reasoning abilities.
  • We have constructed a high-quality mathematical instruction tuning dataset, MathCodeInstruct. This dataset comprises existing math problems from GSM8K and MATH, with GPT-4 Code Interpreter-style solutions, and newly formulated ones via our novel problem interpolation prompting strategy.
  • We have produced a family of models, MathCoder. We fine-tune Llama-2 and CodeLlama models on our dataset, producing a family of models that not only achieve high accuracy on GSM8K and MATH but also perform well on out-of-domain datasets like Mathematics and SimulEq.

2        MATHCODER: SPECIALIZING LLAMA FOR MATHEMATICAL REASONING

In this section, we first introduce the methodology for creating MathCodeInstruct in Sec. 2.1. Subsequently, we detail the supervised fine-tuning (SFT) and inference methods in Sec. 2.2.

2.1        MATHCODEINSTRUCT DATASET

Our MathCodeInstruct dataset can be expressed as D = {D0, D1}, where D0 denotes the seed data and D1 is the data generated with the proposed prompting method, named problem interpolation prompting. Fig. 2 (a) outlines the process for creating the MathCodeInstruct dataset.

Seed data D0. First, we obtain solutions for the GSM8K and MATH training sets from GPT-4. The data can be expressed as (solution, question) pairs {(y_i, x_i)}_{i=1}^{N}.

Figure 3: Example of CoT (Wei et al., 2022), PoT (Gao et al., 2023; Chen et al., 2022) and LCE solution with special token. In contrast to CoT, which consists solely of natural language, and PoT, which consists solely of code, our LCE solution intertwines natural language, code, and execution results. <|text|>, <|code|>, and <|execution|> are special tokens that denote natural language, code, and execution results respectively.

Each solution y_i contains three kinds of components: natural language (text) for reasoning L, code for execution C, and execution results E, where L is the natural language reasoning step, C is the Python code the model generates when its reasoning leads to some complex computation that it needs code to solve, and E is the output of the code. E is assessed by the model so that a new L can be generated. All three kinds of components are closely chained together in the solutions, with each component influencing the component that comes after it. An integral solution y_i can be expressed as (L, C, E, L, C, E, …). An example is shown in Fig. 3 (c). We call solutions in this format Natural Language, Code, and Execution (LCE) solutions. We provide some case studies in Appendix E to demonstrate the advantage of LCE.
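To make the format concrete, here is a minimal, invented LCE solution (the problem, numbers, and code are purely illustrative; Fig. 3 (c) shows a real example from the dataset):

```
<|text|> A carton holds 15 eggs. If 4 recipes each use 3 eggs, how many eggs remain? We compute 15 - 4 * 3. <|endofblock|>
<|code|>
eggs_left = 15 - 4 * 3
print(eggs_left)
<|endofblock|>
<|execution|> 3 <|endofblock|>
<|text|> So 3 eggs remain. The answer is 3. <|endofblock|>
```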

We filter the seed data D0 = {(y_i, x_i)}, making sure that each solution y_i provides the same answer as the ground truth so that the quality of the dataset is further assured. Then, we fine-tune CodeLlama-34B on the seed data D0, producing our initial MathCoder model, named MathCoder-Initial.
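A minimal sketch of this answer-matching filter, under the assumption that each raw record stores the question, the GPT-4-generated LCE solution, and the ground-truth answer; the final-answer extraction here is a simplified heuristic rather than the exact parser used in the paper:

```python
import re

def extract_final_answer(solution: str):
    """Return the last number that appears in the solution (simplified heuristic)."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", solution)
    return numbers[-1] if numbers else None

def answers_match(pred, gt, tol: float = 1e-6) -> bool:
    """Compare numerically when possible, otherwise fall back to string comparison."""
    try:
        return abs(float(pred) - float(gt)) < tol
    except (TypeError, ValueError):
        return pred is not None and str(pred).strip() == str(gt).strip()

def build_seed_data(raw_records):
    """Keep only (question, solution) pairs whose extracted answer matches the ground truth.

    raw_records: iterable of dicts with keys "question", "solution", "gt_answer"
    (assumed field names for the GPT-4-generated records).
    """
    seed = []
    for rec in raw_records:
        pred = extract_final_answer(rec["solution"])
        if answers_match(pred, rec["gt_answer"]):
            seed.append({"question": rec["question"], "solution": rec["solution"]})
    return seed
```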

Problem interpolation prompting D1. Using the initial MathCoder model, we can generate LCE solutions for new problems. We observed a large gap in difficulty between grade-school-level GSM8K problems and challenging competition-level MATH problems. To bridge this gap, we present a novel prompting method (see details in Appendix A), which provides a powerful LLM like GPT-4 with a relatively simple problem drawn from the GSM8K training set, paired with a difficult problem drawn from the MATH training set, and asks the model to generate a new problem with difficulty between the two. We then use GPT-4 to evaluate the new problems, and the results are shown in Fig. 4.

Figure 4: Difficulty comparison of interpolation problems against MATH and GSM8K using GPT-4. The evaluation prompt and examples are shown in Appendix B.

We can see that 83.2% of the new problems are more difficult than GSM8K, and 95.6% are easier than MATH, indicating that the problems generated in this way are appropriate in difficulty.

We also investigated using only GSM8K to create difficult problems, but we found that the new problems were too similar to the original ones, and the large gap to MATH remained (more information can be found in Appendix C).
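The exact interpolation prompt is given in Appendix A; the skeleton below only illustrates how such a prompt might be assembled, and query_llm is a hypothetical wrapper around a GPT-4 API call:

```python
INTERPOLATION_TEMPLATE = (
    "Here is a relatively easy math problem:\n{easy}\n\n"
    "Here is a more difficult math problem:\n{hard}\n\n"
    "Write one new, self-contained math problem whose difficulty lies "
    "between the two problems above. Output only the new problem."
)

def make_interpolation_prompt(gsm8k_problem: str, math_problem: str) -> str:
    """Pair one GSM8K problem with one MATH problem and request an intermediate-difficulty problem."""
    return INTERPOLATION_TEMPLATE.format(easy=gsm8k_problem, hard=math_problem)

# Usage sketch (query_llm is a hypothetical GPT-4 client):
# new_problem = query_llm(make_interpolation_prompt(gsm8k_sample, math_sample))
```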

Self-distillation. Given that we do not have ground truth answers for the new problems, we generate n different LCE solutions for each new problem with our initial MathCoder model, following (Wang et al., 2023a), and keep only those solutions for which all n answers match (n is set to 3 in this paper), thus ensuring our dataset’s quality. We use MathCoder-Initial here because it demonstrates the potential for effective model distillation using a model much weaker than the powerful closed-source models. As MathCoder-Initial already has an accuracy of 77.3% on GSM8K and 44.0% on MATH, it is plausible that distilling it can produce good results. It also reduces the cost compared to using GPT-4. Some examples can be found in Appendix A.
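A sketch of this agreement filter, where generate_lce_solution stands in for sampling one LCE solution from MathCoder-Initial and extract_answer for the final-answer parser (both hypothetical placeholders):

```python
def keep_if_consistent(problem: str, generate_lce_solution, extract_answer, n: int = 3):
    """Sample n LCE solutions for a new problem and keep it only if all n final answers agree.

    generate_lce_solution: callable sampling one LCE solution string for the problem.
    extract_answer: callable mapping a solution string to its final answer.
    Returns a training record, or None if the answers disagree (the problem is discarded).
    """
    solutions = [generate_lce_solution(problem) for _ in range(n)]
    answers = [extract_answer(s) for s in solutions]
    if None in answers or len(set(answers)) != 1:
        return None
    return {"question": problem, "solution": solutions[0]}
```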

Combining the new data D1 with the seed data D0 yields the MathCodeInstruct dataset D = {D0, D1}. We fine-tune the base Llama-2 (Touvron et al., 2023) and CodeLlama (Rozière et al., 2023) models using MathCodeInstruct to derive our final MathCoder models. For clarity, we refer to the supervised fine-tuning of base Llama-2 as “MathCoder-L” and that of CodeLlama as “MathCoder-CL”, as shown in Fig. 2 (b).

2.2        SUPERVISED FINE-TUNING AND INFERENCE

Supervised Fine-tuning. In order to identify the three kinds of components in LCE solutions, as illustrated in Fig. 3 (c), we enclose them with special tokens. Reasoning language starts with <|text|>, while math code and execution results start with <|code|> and <|execution|> respectively. All components end with <|endofblock|>. These tokens help the model understand the difference between each component and create LCE solutions during inference. After the special tokens are added, all components are concatenated to form the solution, which is preceded by the original math question to form an instance of training data. In order to make the training more efficient, several instances are concatenated together to form a single input, while cross-question masking is used to ensure only tokens in the same instance are visible.
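As a concrete sketch, one LCE training instance could be serialized as follows; the exact serialization and tokenizer handling used in the paper are not specified here, so this is only an assumed layout consistent with the description above:

```python
TEXT, CODE, EXEC, END = "<|text|>", "<|code|>", "<|execution|>", "<|endofblock|>"

def serialize_lce(question: str, blocks):
    """Serialize a question and its ordered (kind, content) blocks into one training string.

    blocks: list of (kind, content) pairs with kind in {"text", "code", "execution"},
    following the L, C, E, L, C, E, ... order of an LCE solution.
    """
    start_token = {"text": TEXT, "code": CODE, "execution": EXEC}
    parts = [question]
    for kind, content in blocks:
        parts.append(f"{start_token[kind]}{content}{END}")
    return "".join(parts)

# Several serialized instances are then concatenated into one input, with an attention mask
# that blocks attention across instance boundaries (the cross-question masking described above).
```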

During supervised fine-tuning, we apply a standard cross-entropy loss following Alpaca (Taori et al., 2023). The loss is only computed on reasoning language and math code, since they are the components of the training data generated by the LLM. In particular, we zero out the loss on tokens from execution results, as the model does not need to predict these tokens.
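This masking can be implemented by setting the labels of execution-result tokens to the ignore index of the cross-entropy loss; the span bookkeeping below is an assumption about how block boundaries are recorded, not the paper's exact code:

```python
import torch

IGNORE_INDEX = -100  # ignored by torch.nn.CrossEntropyLoss by default

def mask_execution_labels(input_ids: torch.Tensor, block_spans):
    """Copy input_ids into labels, but ignore every token inside an <|execution|> block.

    input_ids:   1-D LongTensor of token ids for one packed training sequence.
    block_spans: list of (start, end, kind) spans over that sequence, with kind in
                 {"text", "code", "execution"} (assumed to be recorded during serialization).
    """
    labels = input_ids.clone()
    for start, end, kind in block_spans:
        if kind == "execution":
            labels[start:end] = IGNORE_INDEX
    return labels
```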

Inference. After supervised fine-tuning, the model has learned to output natural language and code enclosed by special tokens. We can identify the end of each component by looking for <|endofblock|> and determine which kind of component it is by examining its first token. When a code block is generated, we utilize a Jupyter Notebook for real-time code execution, allowing the variables defined in previous code blocks to be used in subsequent ones. After execution, the execution results are appended after the preceding math code block. The model then continues to autoregressively generate the next reasoning language block, forming the chain of thoughts in the LCE format, until it reaches the final answer. This process ensures that the model behaves similarly to the GPT-4 Code Interpreter.
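A simplified sketch of this generate-execute-continue loop; generate_block is a hypothetical wrapper around the fine-tuned model, and a shared dictionary with exec() stands in for the persistent Jupyter Notebook kernel used in the paper:

```python
import contextlib
import io

TEXT, CODE, EXEC, END = "<|text|>", "<|code|>", "<|execution|>", "<|endofblock|>"

def solve(question: str, generate_block, max_blocks: int = 32) -> str:
    """Alternate between model generation and code execution until a final answer is produced.

    generate_block: callable that, given the running prompt, returns the next block
                    (starting with <|text|> or <|code|> and ending with <|endofblock|>).
    """
    prompt, namespace = question, {}
    for _ in range(max_blocks):
        block = generate_block(prompt)
        prompt += block
        if block.startswith(CODE):
            code = block[len(CODE):-len(END)]
            buffer = io.StringIO()
            with contextlib.redirect_stdout(buffer):
                exec(code, namespace)  # shared namespace keeps variables across blocks
            prompt += f"{EXEC}{buffer.getvalue()}{END}"
        elif block.startswith(TEXT) and "answer" in block.lower():
            break  # simplified stopping rule; the real model simply ends with its final text block
    return prompt
```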

3        EXPERIMENTS

3.1        DATASETS AND IMPLEMENTATION DETAILS

Datasets. We evaluate MathCoder on five datasets, including two in-domain datasets, GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021), and three out-of-domain datasets, SVAMP (Patel et al., 2021), Mathematics (Saxton et al., 2019), and SimulEq (Kushman et al., 2014). We regard GSM8K and MATH as in-domain because their training sets are used for our supervised fine-tuning, while SVAMP, Mathematics, and SimulEq are out-of-domain because their training sets are not used in our fine-tuning. This assortment of evaluation datasets covers mathematical challenges from elementary, high school, and collegiate levels, spanning subjects such as geometry, formal logic, and even commonsense reasoning. The selection of these datasets aims to provide a thorough evaluation of the models’ ability to generalize to unknown circumstances and diverse fields of mathematics.

Implementation Details. Different base LLMs of varying sizes are tested, including Llama-2 (7B, 13B, and 70B) and CodeLlama (7B, 13B, and 34B). During training, we use a uniform learning rate of 2 × 10−5 and a context length of 2048, and we set the batch size to 128 with different ratios of gradient accumulation steps and per-device train batch size, depending on the model size. Additionally, we use a cosine scheduler for three epochs in total with a 50-step warm-up period. To efficiently train the computationally intensive models, we simultaneously employ DeepSpeed training with ZeRO-3 (Rajbhandari et al., 2020) and flash attention (Dao et al., 2022). The 7B, 13B, and 34B/70B models are trained on 8, 16, and 32 NVIDIA A800 80GB GPUs, respectively. The text-generation-inference framework of Hugging Face is used for inference, with greedy decoding and the maximum number of new tokens per block set to 512; one to four GPUs are used as needed. We allow up to 32 LCE blocks in every solution.
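For reference, these hyperparameters might be expressed roughly as follows, assuming a Hugging Face Trainer + DeepSpeed stack; the output directory, per-device batch split, and the ds_zero3.json config path are placeholders rather than the paper's released scripts:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mathcoder-sft",         # placeholder
    learning_rate=2e-5,
    num_train_epochs=3,
    warmup_steps=50,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=4,      # example split: 4 per GPU x 32 GPUs x 1 accumulation step = 128
    gradient_accumulation_steps=1,      # adjusted per model size to keep a global batch of 128
    bf16=True,
    deepspeed="ds_zero3.json",          # ZeRO-3 configuration file (placeholder path)
    logging_steps=10,
    save_strategy="epoch",
)
```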

Baselines. We compare the proposed MathCoders with the following competitive baselines. Closed-Source Models: we consider four closed-source models, including ChatGPT-3.5 (Brown et al., 2020), GPT-4 (OpenAI, 2023), GPT-4 Code Interpreter (Zhou et al., 2023a), and PaLM-2 (Anil et al., 2023). Open-Source Models: we compare with Llama-2 (Touvron et al., 2023), WizardMath (Luo et al., 2023), Llama-1 RFT (Yuan et al., 2023), and Galactica (Taylor et al., 2022).

For the baselines, CoT prompting (Wei et al., 2022) and few-shot in-context learning (Dong et al., 2023) are used to maximize their performance, while our MathCoders are always evaluated without extra prompting and under a zero-shot setting (Kojima et al., 2023).

3.2        MAIN RESULTS

Comparison between MathCoder and SOTA open-source models. The experiment results in Tab. 2 show that our method outperforms other competitive open-source math-solving models with a clear advantage, achieving state-of-the-art results across all datasets. However, a substantial performance gap still exists compared to the state-of-the-art closed-source method GPT-4 Code Interpreter. Our observations are as follows: (1) MathCoder-L-7B outperforms WizardMath-70B. Even the smallest version of MathCoder, MathCoder-L-7B, outperforms the largest WizardMath model, WizardMath-70B, on three out of five datasets, achieving a significant gain (+4.5%) in the average score, as shown in Tab. 2. This is likely attributed to the fact that WizardMath is trained solely on CoT data, while MathCoder is trained on our proposed LCE solutions. This demonstrates the advantage of using solutions that interleave natural language, code, and execution (LCE blocks), significantly enhancing the model’s ability to perform complex computations. (2) Additionally, it is worth noting that while the code ability of CodeLlama-34B significantly outperforms that of Llama-2-70B, in the case of MathCoder models, we observed that models based on Llama-2-70B (73.1%) can outperform those based on CodeLlama-34B (70.2%). This contrasts with the findings in the concurrent work MAmmoTH (Yue et al., 2023). The main reason for this disparity might be that Llama-2-70B exhibits better natural language reasoning ability, and the MathCodeInstruct dataset can enhance language models’ code generation ability for math problem-solving.

Comparison between Llama-2 and CodeLlama. Tab. 3 shows that MathCoder-CL, with CodeLlama as the base model, brings a substantial improvement compared to MathCoder-L, with Llama-2 as the base model. MathCoder-CL-7B and MathCoder-CL-13B demonstrate accuracy improvements of 4.1% and 3.0%, respectively, compared to the corresponding MathCoder-L models of the same size.

Table 2: Model evaluation on in-domain (GSM8K & MATH) and out-of-domain datasets (SVAMP, Mathematics & SimulEq). + indicates improvement w.r.t. the best open-source model. SVA. stands for SVAMP, Mat. stands for Mathematics, and Sim. stands for SimulEq.

Table 3: Model performance comparison for MathCoders with CodeLlama and Llama-2 as base.

The potentially superior coding and reasoning capability of CodeLlama can be attributed to its additional training on code data (Rozière et al., 2023). This extended training provides CodeLlama with a deeper understanding of programming concepts and patterns, allowing it to excel in coding-related tasks and exhibit more advanced math reasoning abilities.

Comparison among different subjects across various levels. MATH dataset problems are categorized with difficulty levels ranging from 1 to 5, covering seven different math subjects, including algebra, prealgebra, number theory, counting and probability, precalculus, intermediate algebra, and geometry. In Fig. 5, we present the performance comparison of MathCoder-L (7B, 13B) and MathCoder-CL (7B, 13B), grouped by these levels and subjects. More results are shown in Appendix D. We find that MathCoder achieves higher scores in algebra and prealgebra problems. However, when it comes to geometry problems, MathCoder struggles to achieve high scores, especially for problems with higher difficulty levels. This suggests that code plays a more significant role in computationally intensive questions.

3.3        ABLATION STUDY

Analysis of the influence of problem interpolation. We conducted an experiment to study the influence of the portion of MathCodeInstruct questions created using the proposed problem interpolation.

Figure 5: Performance comparison of MathCoder-L (7B, 13B) and MathCoder-CL (7B, 13B) on the MATH dataset by levels and subjects. We can see that the improved accuracy from MathCoder-L to MathCoder-CL comes primarily from subjects that require precise calculations like algebra and number theory.

Table 4: Influence of the interpolation problems in MathCodeInstruct (as shown in Tab. 1) based on CodeLlama-34B.

The experiment uses CodeLlama-34B as the base model. The experimental results in Tab. 4 show that problem interpolation brings a significant improvement across all five datasets. These results indicate that by employing problem interpolation, we can generate problems with intermediate difficulty levels, thereby increasing the diversity of the problem set and ultimately enhancing the performance of the model.

Analysis of actual code execution in the inference stage. We investigate the impact of code execution in the inference stage and report the results in Tab. 5. We conduct this investigation using CodeLlama-34B as the base model and train the models on our 80k MathCodeInstruct dataset. Tab. 5 (#1) and Tab. 5 (#2) use the same model, trained with the cross-entropy loss computed not only on natural language and code but also on the execution results. In this way, the model learns to predict the execution results. In Tab. 5 (#1), the code execution results are predicted by the model itself, while in Tab. 5 (#2), the execution results are returned from a Python code interpreter. From the comparison between Tab. 5 (#1) and Tab. 5 (#2), we can see that Tab. 5 (#2) outperforms Tab. 5 (#1) across all five datasets, showing an improvement of 34.0% in the average accuracy score. This indicates that actual code execution in the inference stage has a significant impact on the model’s performance. This study shows that the model fails to predict correct execution results for many programs, and that actually executing the code with an external tool can significantly improve accuracy on complex computations. This finding validates the significance of integrating code execution when solving math problems with LLMs, in line with the findings on the closed-source GPT-4 Code Interpreter (Zhou et al., 2023a).

Analysis of execution results in the training stage. Based on the observation that actual code execution contributes a lot to the model’s performance, we investigate not forcing the model to predict the correct execution results. Tab. 5 (#3) reports the performance of MathCoder-CL-34B, which ignores execution results when computing the loss, so that the model does not learn to estimate the execution results and the learning task at the supervised fine-tuning stage becomes simpler. Compared to Tab. 5 (#2), Tab. 5 (#3) improves the accuracy on four out of five datasets, raising the average accuracy from 69.1% to 70.2%. This aligns with the hypothesis that by computing the loss only on natural language and code, the model can focus more on math problem-solving itself, thus making the supervised fine-tuning more effective.

Table 5: Ablation study of inference with/without code execution and of the training loss with/without execution results.

4        RELATED WORK

Instruction Tuning. Instruction tuning is a method of enhancing LLMs’ instruction-following abilities, thereby aligning language models with more useful objectives and human preferences. A long line of previous works (Ye et al., 2021; Sanh et al., 2021; Wang et al., 2022b; Wei et al., 2021; Chung et al., 2022; Longpre et al., 2023) has focused on enhancing LLMs’ instruction-following abilities in general. With the emergence of models like GPT-3 and GPT-4, recent studies (Wang et al., 2022a; 2023b; Zhou et al., 2023b; Peng et al., 2023; Xu et al., 2023) have started to utilize synthetic instructions generated by these powerful models to tune smaller models. Compared to these works, our instruction tuning is focused on using high-quality, model-generated solutions for math problems to improve our LLM’s math-solving ability. Another related work is presented in (Luo et al., 2023), but their method does not use code to solve math problems, distinguishing our work from theirs.

Mathematical Reasoning. There are various benchmark datasets (Hendrycks et al., 2020; Ling et al., 2017; Hendrycks et al., 2021) for measuring a model’s mathematical reasoning abilities. Recently, many works have focused on enhancing LLMs’ ability to solve math problems, reaching high scores on these benchmarks. Many of them apply Chain-of-Thought (Wei et al., 2022; Kojima et al., 2023; Wang et al., 2023a; Fu et al., 2022) to improve LLMs’ multistep reasoning capability. Another line of works (Gao et al., 2023; Chen et al., 2022; Zhou et al., 2023a) utilizes code to compensate for LLMs’ limitations in doing complex math computations. Our work takes inspiration from both lines of work, as we believe both Chain-of-Thought and code generation (Li et al., 2023a; Rozière et al., 2023) are essential to solving math problems. There are also works focused on math-related pre-training (Lewkowycz et al., 2022; Taylor et al., 2022) to improve a model’s general reasoning capability. We combine natural language and code seamlessly in our dataset, thus providing a method to train models more efficiently at solving math problems.

Distillation. Distillation (Hinton et al., 2015) often involves transferring knowledge from a larger, more powerful model to a smaller, weaker one (Taori et al., 2023; Zheng et al., 2023; Cobbe et al., 2021). Recent research (Li et al., 2023b; Wang et al., 2022a; Allen-Zhu & Li, 2020) has demonstrated the plausibility of self-distillation, achieving performance improvements by distilling the model itself. Our approach can also be viewed as a form of self-distillation, as the solutions generated by MathCoder-Initial, which is built on CodeLlama-34B, are used to fine-tune CodeLlama-34B, resulting in MathCoder-CL-34B.

5        CONCLUSION AND LIMITATION

In this paper, we present MathCoder, an open-source large language model designed for math reasoning, bridging the gap between natural language understanding and computational problem-solving. MathCoder incorporates math instruction-following dataset construction. Using the GSM8K and MATH datasets as seed data, we leverage GPT-4 to generate solutions that encompass reasoning, code generation, and program execution. Additionally, we propose a problem interpolation method to create intermediate-level problems. Furthermore, we introduce a customized supervised fine-tuning approach, where the training loss is only applied to natural language and code. Our empirical study demonstrates that MathCoder achieves state-of-the-art performance among open-source LLMs on five math datasets, with scores of 83.9% on the GSM8K dataset and 45.2% on the MATH dataset. It is worth noting that MathCoder outperforms closed-source models like ChatGPT-3.5 and PaLM-2 on the GSM8K and MATH datasets and even outperforms GPT-4 on the MATH dataset.

However, our work does have certain limitations that warrant further exploration in future research. First, since we rely on GPT-4 for data generation, MathCoder’s capabilities are inherently constrained by the capabilities of that model, and it is unable to solve theorem-proving problems. Additionally, as a series of uni-modal models, MathCoder still faces challenges in solving complex geometry problems, which we acknowledge and plan to address in future investigations.

References

Zeyuan Allen-Zhu and Yuanzhi Li. Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. arXiv preprint arXiv:2012.09816, 2020.

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.

Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness, 2022.

Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and Zhifang Sui. A survey on in-context learning, 2023.

Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. Complexity-based prompting for multi-step reasoning. arXiv preprint arXiv:2210.00720, 2022.

Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In International Conference on Machine Learning, pp. 10764–10799. PMLR, 2023.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Xiaodong Song, and Jacob Steinhardt. Measuring massive multitask language understanding. ArXiv, abs/2009.03300, 2020. URL https://api.semanticscholar.org/CorpusID:221516475.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.

Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. ArXiv, abs/1503.02531, 2015. URL https://api.semanticscholar.org/CorpusID:7200347.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners, 2023.

Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 271–281, 2014.

Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. Advances in Neural Information Processing Systems, 35:3843–3857, 2022.

Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. Starcoder: may the source be with you!, 2023a.

Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, and Mike Lewis. Self-alignment with instruction backtranslation. ArXiv, abs/2308.06259, 2023b. URL https://api.semanticscholar.org/CorpusID:260866107.

Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 158–167, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1015. URL https://aclanthology.org/P17-1015.

Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023.

Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583, 2023.

OpenAI. Gpt-4 technical report, 2023.

Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple math word problems? arXiv preprint arXiv:2103.07191, 2021.

Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023.

Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023.

Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models, 2020.

Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. Code llama: Open foundation models for code, 2023.

Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207, 2021.

David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning abilities of neural models. arXiv preprint arXiv:1904.01557, 2019.

Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.

Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085, 2022.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2023a. URL https://openreview.net/forum?id=1PL1NIMMrw.

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022a.

Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. arXiv preprint arXiv:2204.07705, 2022b.

Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023b.

Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=_VjQlMeSB_J.

Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.

Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. Crossfit: A few-shot learning challenge for cross-task generalization in nlp. arXiv preprint arXiv:2104.08835, 2021.

Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825, 2023.

Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. Mammoth: Building math generalist models through hybrid instruction tuning, 2023.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.

Xu Zhao, Yuxi Xie, Kenji Kawaguchi, Junxian He, and Qizhe Xie. Automatic model selection with large language models for reasoning, 2023.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.

Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi Song, Mingjie Zhan, et al. Solving challenging math word problems using gpt-4 code interpreter with code-based self-verification. arXiv preprint arXiv:2308.07921, 2023a.

Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023b.

Appendix

A         DATASET EXAMPLES

In this part, we include two examples that show the process of creating MathCodeInstruct. Fig. 6 shows an example with only one LCE block, while Fig. 7 shows an example with three LCE blocks.

B        EXAMPLES OF DIFFICULTY COMPARISON

We show five examples of using GPT-4 to evaluate the complexity of problems in MathCodeInstruct. Fig. 8 and Fig. 9 are two examples in which the newly generated interpolation problems are more difficult than the original GSM8K problems, and Fig. 10 is an example in which the original MATH problem is more difficult than the newly generated interpolation problem. These two situations are the most common (83.2% and 95.6%, respectively).

Fig. 11 shows an example in which the newly generated interpolation problem ties with the original GSM8K problem, a situation that accounts for 15.3% of all problems.

Fig. 12 shows an uncommon example in which the original GSM8K problem is slightly more difficult than the newly generated interpolation problem according to GPT-4, a situation that accounts for less than 3% of all problems.

C        CREATING PROBLEMS USING ONLY GSM8K

Fig. 13, Fig. 14, Fig. 15, Fig. 16, and Fig. 17 are five examples that utilize problems from the GSM8K training set to generate new problems that are more difficult than the original ones. Compared with the problems generated by our interpolation method, we can see that the new problems generated in this way are much more similar to the raw GSM8K problems, sometimes just changing the names of some variables or scaling the values. These problems are only slightly more complicated than the raw problems, if not equally difficult, and are still much simpler than those from the MATH dataset.

In contrast to using just GSM8K, introducing problems from the MATH dataset in the interpolation method shows the model (GPT-4 here) a route to generate more challenging problems. Hence, the newly generated problems resemble both the problems in GSM8K and the problems in MATH. Consequently, these interpolation problems can narrow the difficulty gap between the two datasets.

D         MORE EXPERIMENT RESULTS

We show the performance comparison of all MathCoders, MathCoder-L (7B, 13B, 70B) and MathCoder-CL (7B, 13B, 34B), on the MATH dataset by levels and subjects in Fig. 18. We can see that the improved accuracy from MathCoder-L to MathCoder-CL comes primarily from subjects requiring precise calculations like algebra, number theory, and counting and probability.

E        CASE STUDY WITH COT, POT AND LCE

We compare our LCE solutions with CoT solutions and PoT solutions. Fig. 19 is an example of a problem in number theory, and Fig. 20 is an example of a problem in algebra. CoT and PoT failed to solve the problem in both cases, but LCE succeeded.

Fig. 21, Fig. 22, and Fig. 23 are three solutions to one problem in geometry. The CoT solution successfully figured out the coordinates of D, C, and E but failed to calculate the area, while the PoT solution could not interpret the conditions in the problem. Compared with them, our LCE solution not only utilizes the conditions in the problem correctly but also makes no errors in calculation.

Figure 6: An example of the process of creating MathCodeInstruct. First, “Example 1” and “Example 2” are randomly chosen from the training sets of GSM8K and MATH respectively. Then a new problem is generated by GPT-4 using the interpolation prompt. Finally, we use our initial MathCoder to generate an LCE-style solution for the new problem.
Figure 7: An example of the process of creating MathCodeInstruct. First, “Example 1” and “Example 2” are randomly chosen from the training sets of GSM8K and MATH respectively. Then a new problem is generated by GPT-4 using the interpolation prompt. Finally, we use our initial MathCoder to generate an LCE-style solution for the new problem.
Figure 8: An example of using GPT-4 to evaluate the complexity of problems in MathCodeInstruct. “Problem 2” is in MathCodeInstruct and “Problem 1” is the problem from GSM8K that was used to generate “Problem 2”.
Figure 9: An example of using GPT-4 to evaluate the complexity of problems in MathCodeInstruct. “Problem 2” is in MathCodeInstruct and “Problem 1” is the problem from GSM8K that was used to generate “Problem 2”.
Figure 10: An example of using GPT-4 to evaluate the complexity of problems in MathCodeInstruct. “Problem 2” is in MathCodeInstruct and “Problem 1” is the problem from MATH that was used to generate “Problem 2”.
Figure 11: An example of using GPT-4 to evaluate the complexity of problems in MathCodeInstruct. “Problem 2” is in MathCodeInstruct, and “Problem 1” is the problem from GSM8K that was used to generate “Problem 2”. It is an example of a tie.
Figure 12: An example of using GPT-4 to evaluate the complexity of problems in MathCodeInstruct; it is an uncommon example in which the problem from GSM8K is slightly more difficult than the interpolation problem. “Problem 2” is in MathCodeInstruct and “Problem 1” is the problem from GSM8K that was used to generate “Problem 2”.
Figure 13: An example of using GPT-4 to create problems based only on the problems from GSM8K and then evaluate the complexity of the newly generated problems. “Problem 2” is the new problem, and “Problem 1” is the problem from GSM8K that was used to generate “Problem 2”.
Figure 14: An example of using GPT-4 to create problems based only on the problems from GSM8K and then evaluate the complexity of the newly generated problems. “Problem 2” is the new problem, and “Problem 1” is the problem from GSM8K that was used to generate “Problem 2”.
Figure 15: An example of using GPT-4 to create problems based only on the problems from GSM8K and then evaluate the complexity of the newly generated problems. “Problem 2” is the new problem, and “Problem 1” is the problem from GSM8K that was used to generate “Problem 2”.
Figure 16: An example of using GPT-4 to create problems based only on the problems from GSM8K and then evaluate the complexity of the newly generated problems. “Problem 2” is the new problem, and “Problem 1” is the problem from GSM8K that was used to generate “Problem 2”.
Figure 17: An example of using GPT-4 to create problems based only on the problems from GSM8K and then evaluate the complexity of the newly generated problems. “Problem 2” is the new problem, and “Problem 1” is the problem from GSM8K that was used to generate “Problem 2”.
Figure 18: Performance comparison of MathCoder-L (7B, 13B, 70B) and MathCoder-CL (7B, 13B, 34B) on the MATH dataset by levels and subjects. The improved accuracy from MathCoder-L to MathCoder-CL comes primarily from subjects that require precise calculations like algebra, number theory, and counting and probability.
Figure 19: Example of CoT, PoT and LCE solution with special token. The problem is from the test set of MATH in number theory with id 1191. In contrast to CoT, which consists solely of natural language, and PoT, which consists solely of code, our LCE solution intertwines natural language, code, and execution results.
Figure 20: Example of CoT, PoT and LCE solution with special token. The problem is from the test set of MATH in algebra with id 2477. In contrast to CoT, which consists solely of natural language, and PoT, which consists solely of code, our LCE solution intertwines natural language, code, and execution results.
Figure 21: Example of CoT solution. The problem is from the test set of MATH in geometry with id 500.
Figure 22: Example of PoT solution. The problem is from the test set of MATH in geometry with id 500.
Figure 23: Example of LCE solution with special token. The problem is from the test set of MATH in geometry with id 500. In contrast to CoT, which consists solely of natural language, and PoT, which consists solely of code, our LCE solution intertwines natural language, code, and execution results.