Overview

NVIDIA OpenCodeReasoning is the largest reasoning-based synthetic dataset for competitive programming to date. Comprising 735,255 samples across 28,319 unique problems, it targets the Python language and is designed for supervised fine-tuning (SFT).

This dataset contains high-quality reasoning-based responses generated by R1 models and integrates problems from 10 major competitive programming platforms including CodeForces, LeetCode, and CodeChef. Released under the CC BY 4.0 license, it is freely available for both commercial and non-commercial use.

Dataset Scale and Composition

Overall Statistics

The dataset contains 735,255 samples of reasoning-based solutions generated by R1 models, drawn from 28,319 unique problems across competitive programming platforms. It supports Python exclusively for focused development, ships in two data splits (split_0 and split_1), and is stored in efficient Parquet format (in the 100K-1M size category).

Platform-wise Data Distribution

The 735,255 samples break down by platform as follows:

- CodeForces: 10,069 problems, 386,948 samples (52.6% of total)
- CodeChef: 3,796 problems, 72,925 samples (9.9%)
- AIZU: 2,123 problems, 62,476 samples (8.5%)
- HackerEarth: 2,269 problems, 59,181 samples (8.0%)
- AtCoder: 2,043 problems, 47,222 samples (6.4%)
- GeeksForGeeks: 2,667 problems, 37,602 samples (5.1%)
- Codewars: 2,493 problems, 34,326 samples (4.7%)
- Kattis: 1,187 problems, 13,095 samples (1.8%)
- HackerRank: 895 problems, 10,955 samples (1.5%)
- LeetCode: 777 problems, 10,525 samples (1.4%)
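The per-platform figures are internally consistent: the sample counts sum exactly to the dataset total, and each percentage follows from it. A short sanity-check script (using only the numbers reported above):

```python
# Per-platform sample counts as reported for OpenCodeReasoning.
SAMPLES = {
    "CodeForces": 386_948, "CodeChef": 72_925, "AIZU": 62_476,
    "HackerEarth": 59_181, "AtCoder": 47_222, "GeeksForGeeks": 37_602,
    "Codewars": 34_326, "Kattis": 13_095, "HackerRank": 10_955,
    "LeetCode": 10_525,
}

TOTAL = 735_255

def share(platform: str) -> float:
    """Percentage of total samples contributed by one platform, to 0.1%."""
    return round(100 * SAMPLES[platform] / TOTAL, 1)

# The per-platform counts account for every sample in the dataset.
assert sum(SAMPLES.values()) == TOTAL
```

Running `share("CodeForces")` reproduces the 52.6% figure quoted above.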

Data Source Details

The original dataset collection includes TACO, APPS, CodeContests, and open-r1/codeforces. Test splits from CodeContests and open-r1/codeforces are excluded to prevent data contamination and ensure strict separation.

Data Field Structure

Key Field Descriptions

Each sample provides:

- a unique identifier for the problem
- the competitive programming problem description (split_0 only)
- the complete reasoning response from the R1 model
- the code portion extracted from the R1 response
- the original dataset name
- the dataset license
- the original dataset split name
- the competitive programming platform name
- the problem difficulty label
- the APPS/TACO dataset search index (split_1 only)

Split Structure

Split_0 contains 567,850 samples with complete problem descriptions included, so it is ready for immediate use in SFT training. Split_1 contains 167,405 samples with a reference-based structure (input = "-"); its problem statements must be loaded separately from the TACO/APPS datasets, which keeps storage compact.

Usage Methods and Implementation

Basic Data Loading

Both splits can be loaded with the Hugging Face Datasets library: Split_0 arrives with complete problems included, while Split_1 uses the reference-based structure. Printing the sample count of each split is a quick way to verify successful access.
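A minimal loading sketch with the Hugging Face Datasets library, assuming the hub id `nvidia/OpenCodeReasoning` and that each split is published as its own configuration carrying a split of the same name (worth checking against the dataset card):

```python
def load_ocr(split_name: str):
    """Load one OpenCodeReasoning split; split_name is "split_0" or "split_1"."""
    from datasets import load_dataset  # Hugging Face 'datasets' package
    # Assumption: each split is its own configuration, and the inner
    # split carries the same name.
    return load_dataset("nvidia/OpenCodeReasoning", split_name, split=split_name)

# Example (downloads data):
#   ds0 = load_ocr("split_0")   # complete problems included
#   ds1 = load_ocr("split_1")   # reference-based rows (input == "-")
#   print(f"split_0: {len(ds0):,} samples, split_1: {len(ds1):,} samples")
```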

Split_1 Problem Content Restoration

To restore Split_1 problem content, first load the reference datasets (TACO and APPS). Then iterate through Split_1: verify that the input field is "-", confirm that the dataset source is either "taco" or "apps", retrieve the original problem text from the reference dataset at the provided index, and emit a restored item with the complete problem description.
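That restoration loop can be sketched as a pure function over plain dicts. The field names `input`, `dataset`, and `index` follow the descriptions above; the `question` key for the TACO/APPS rows is an assumption to adjust against the actual reference schemas:

```python
def restore_item(item: dict, references: dict) -> dict:
    """Fill in the problem statement for a split_1 row.

    `references` maps a source name ("taco" or "apps") to an indexable
    collection of rows, each carrying the original problem text under
    "question" (assumed key; adjust to the actual TACO/APPS schema).
    """
    if item.get("input") != "-":
        return item  # already complete (split_0-style row)
    source = item["dataset"]
    assert source in ("taco", "apps"), f"unexpected source: {source}"
    original = references[source][int(item["index"])]
    restored = dict(item)  # copy, leaving the original row untouched
    restored["input"] = original["question"]
    return restored
```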

Platform-wise Data Analysis

Analysis capabilities include collecting platform statistics, tracking difficulty distributions, calculating total samples per platform, and generating comprehensive reports showing platform contributions and difficulty breakdowns.
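Such a report reduces to two counters keyed by platform. A sketch, assuming the `source` and `difficulty` field names described earlier:

```python
from collections import Counter, defaultdict

def platform_report(samples):
    """Aggregate per-platform sample totals and difficulty breakdowns.

    `samples` is any iterable of rows with "source" and "difficulty" keys.
    """
    totals = Counter()
    difficulties = defaultdict(Counter)
    for row in samples:
        totals[row["source"]] += 1
        difficulties[row["source"]][row["difficulty"]] += 1
    return totals, difficulties
```

`totals.most_common()` then yields the platform ranking, and each `difficulties[platform]` gives that platform's difficulty breakdown.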

R1 Model-Based Reasoning Generation

Reasoning Quality Characteristics

R1 model reasoning features explicit step-by-step thinking processes, detailed problem analysis and solution strategies, comprehensive code implementation and verification processes, and consideration of edge cases and optimization approaches.

Reasoning Response Structure

A typical R1-generated response includes problem analysis explaining the core challenge and applicable algorithms, solution strategy outlining step-by-step approaches and decision criteria, code implementation with detailed explanations and complexity analysis, and test case verification demonstrating correctness with example inputs and outputs.

Data Quality and Characteristics

Quality Management Process

Original data verification ensures problems from 10 major platforms are validated, duplicate problems are removed and normalized, and difficulty distributions are balanced. R1 model generation quality involves logical consistency verification, code executability confirmation, and reasoning process clarity evaluation. License compatibility includes platform-specific license verification, commercial use feasibility review, and redistribution condition specification.

Difficulty Distribution Analysis

The analysis process involves collecting difficulty statistics across all samples, categorizing problems by difficulty levels, calculating percentage distributions, and generating comprehensive difficulty reports showing the dataset’s balanced representation across various complexity levels.
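The percentage-distribution step can be written in a few lines, again assuming a `difficulty` field on each row:

```python
from collections import Counter

def difficulty_distribution(samples):
    """Map each difficulty label to its percentage share (one decimal)."""
    counts = Counter(row["difficulty"] for row in samples)
    total = sum(counts.values())
    return {level: round(100 * n / total, 1) for level, n in counts.items()}
```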

Applications and Use Cases

SFT Model Training

For supervised fine-tuning applications, the dataset enables conversion to standard prompt formats, metadata preservation including source and difficulty information, and preparation of training samples optimized for model learning.
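One way to perform that conversion, with an illustrative prompt template (the template itself is not prescribed by the dataset; the field names follow the descriptions above):

```python
def to_sft_example(row: dict) -> dict:
    """Convert one dataset row into a prompt/completion pair for SFT."""
    return {
        "prompt": (
            "Solve the following programming problem in Python.\n\n"
            + row["input"]
        ),
        "completion": row["output"],  # full R1 reasoning plus code
        "metadata": {                 # preserved for filtering/analysis
            "source": row.get("source"),
            "difficulty": row.get("difficulty"),
            "license": row.get("license"),
        },
    }
```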

Code Generation Model Evaluation

Evaluation capabilities include syntactic validity checking through AST parsing, main function presence verification, input handling assessment, comment quality evaluation, and comprehensive quality metric calculation across representative sample sets.
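The static checks named above can be implemented with the standard-library `ast` module; the heuristics here (substring matching for input handling and comments) are deliberately crude sketches, not a full evaluator:

```python
import ast

def quality_checks(code: str) -> dict:
    """Cheap static checks on a generated solution: syntactic validity via
    ast.parse, presence of a main() function, input handling, and comments."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return {"valid": False, "has_main": False,
                "reads_input": False, "has_comments": False}
    has_main = any(isinstance(n, ast.FunctionDef) and n.name == "main"
                   for n in ast.walk(tree))
    reads_input = "input(" in code or "sys.stdin" in code  # crude heuristic
    has_comments = "#" in code                             # crude heuristic
    return {"valid": True, "has_main": has_main,
            "reads_input": reads_input, "has_comments": has_comments}
```

Averaging these flags over a representative sample set gives the comprehensive quality metrics described above.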

OpenCodeReasoning Model Series

Pre-trained Models

NVIDIA has released several models based on this dataset. OpenCodeReasoning-Nemotron-7B offers efficient mid-size performance, while OpenCodeReasoning-Nemotron-32B delivers the highest performance in the series.

Community Fine-tuned Models

Over 218 derivative models have been trained using this dataset, including popular variants such as SVECTOR-CORPORATION/Spec-Coder-4b-V1 with 11.3K downloads, Mungert/OpenCodeReasoning-Nemotron-32B-IOI-GGUF offering GGUF quantized versions, and ertghiu256/qwen3-4b-code-reasoning providing Qwen-based fine-tuning.

Technical Details

Data Processing Pipeline

The comprehensive processing pipeline includes raw problem loading from all 10 platforms, solution generation using R1 models with detailed reasoning, quality filtering through validation processes, and efficient export to Parquet format with Snappy compression.

Storage Format and Optimization

The dataset utilizes Parquet format with Snappy compression, Apache Arrow compatible schema, approximately 60-70% size reduction through compression, and columnar storage for fast querying performance.

License and Usage Conditions

CC BY 4.0 License

The CC BY 4.0 license permits commercial use, modification (changing and processing the data), distribution (redistributing original and modified versions), and private use within individuals and organizations. In return it requires attribution to the NVIDIA developers, display of the CC BY 4.0 license notice, and indication of any changes made.

Individual Dataset License Considerations

Users must verify licenses of original datasets, as each source dataset may have different licensing terms that need to be respected alongside the CC BY 4.0 license of the processed dataset.

Performance Benchmarking

Evaluation Metrics

Assessment includes code correctness through test case pass rates, reasoning quality via logical consistency evaluation, execution efficiency considering time and space complexity optimization, and readability assessing code style and comment quality.
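The test-case pass rate can be measured by running each candidate program in a subprocess and comparing trimmed stdout. A sketch (the 5-second timeout is an assumed budget, not a prescribed one):

```python
import subprocess
import sys

def pass_rate(code: str, cases: list, timeout: float = 5.0) -> float:
    """Fraction of (stdin, expected_stdout) test cases a solution passes.

    Runs the candidate program with `python -c` and compares trimmed stdout.
    """
    passed = 0
    for stdin, expected in cases:
        try:
            proc = subprocess.run(
                [sys.executable, "-c", code],
                input=stdin, capture_output=True, text=True, timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            continue  # time limit exceeded counts as a failure
        if proc.returncode == 0 and proc.stdout.strip() == expected.strip():
            passed += 1
    return passed / len(cases) if cases else 0.0
```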

Benchmark Results Framework

The evaluation framework supports correctness assessment through code execution and result comparison, reasoning quality evaluation using structured analysis criteria, comprehensive performance measurement across multiple dimensions, and detailed reporting with actionable insights.

Future Development Directions

Dataset Expansion Plans

Future expansion includes multilingual support adding Java, C++, and JavaScript beyond Python, real-time updates incorporating new competitive programming problems, and community contribution systems enabling collaborative dataset improvement.

Model Performance Enhancement

Improvements focus on more powerful generation models applying R1 successor models, specialized code generation models utilizing domain-specific architectures, human feedback learning integration for enhanced quality, automated code verification systems, reasoning consistency checking mechanisms, and community review systems for quality assurance.

Conclusion

NVIDIA OpenCodeReasoning represents the largest reasoning-based synthetic dataset in competitive programming with 735,255 samples and 28,319 unique problems covering 10 major platforms. The high-quality reasoning based on R1 models and systematic data collection methodology establish this dataset as a new standard for code generation AI development.

Available under the CC BY 4.0 license for free use across educational, research, and commercial applications, the dataset’s practical value and quality are demonstrated by over 218 derivative models and active community utilization. Future enhancements including multilingual support and real-time updates are expected to further advance code reasoning AI development.

Citation Information

@article{ahmad2025opencodereasoning,
      title={OpenCodeReasoning: Advancing Data Distillation for Competitive Coding}, 
      author={Wasi Uddin Ahmad and Sean Narenthiran and Somshubra Majumdar and Aleksander Ficek and Siddhartha Jain and Jocelyn Huang and Vahid Noroozi and Boris Ginsburg},
      year={2025},
      eprint={2504.01943},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.01943}, 
}