Preprocessing order / options?

#41
by csiefer - opened

The BigCode team graciously provides the preprocessing scripts used for The Stack here: https://github.com/bigcode-project/bigcode-dataset

My question is: What options were used to run said scripts in the generation of The Stack and in what order were they run? Is this documented somewhere?

Thanks!

loubnabnl (BigCode org)

The Stack didn't go through that processing; you can find its near-deduplicated version here: https://huggingface.co/datasets/bigcode/the-stack-dedup
The preprocessing you linked was used to build StarCoderData (for StarCoder training) from The Stack: we first ran deduplication and language selection, then PII redaction, followed by decontamination. You can find more details in the paper: https://huggingface.co/papers/2305.06161
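For reference, a minimal sketch of loading a single language subset of the near-deduplicated Stack with the `datasets` library (assuming the per-language `data_dir` layout shown on the dataset card, e.g. `data/python`, and a `content` column) could look like this:

```python
from datasets import load_dataset

# Minimal sketch: stream one language subset of the near-deduplicated Stack.
# Assumes the data_dir layout on the Hub (e.g. data/python); adjust as needed.
ds = load_dataset(
    "bigcode/the-stack-dedup",
    data_dir="data/python",
    split="train",
    streaming=True,  # avoid downloading the whole subset up front
)

# Peek at a few files to check the fields before building a pipeline on top.
for example in ds.take(3):
    print(example["content"][:200])
```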

@loubnabnl Thanks! For what it's worth, I'm looking to see how effective fine-tuned open (or open-ish) code models can be for a very specific subset of libraries from GitHub (that all utilize one specific library). I want to make sure that data is all cleaned up before the fine-tuning, and following the StarCoderData approach seemed very reasonable to me.

loubnabnl (BigCode org)

You could start directly from StarCoderData and filter it? Or maybe the-stack-dedup if you want to include even more languages or adapt the preprocessing.
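As a rough starting point, a sketch of that kind of filtering (assuming StarCoderData's per-language `data_dir` layout on the Hub and a `content` column; the target library name below is a placeholder, not from this thread) might look like:

```python
from datasets import load_dataset

# Sketch only: stream one language subset of StarCoderData.
# Assumes per-language subdirectories (e.g. "python"); adjust to the actual layout.
ds = load_dataset(
    "bigcode/starcoderdata",
    data_dir="python",
    split="train",
    streaming=True,
)

# Hypothetical filter: keep only files that reference one specific library.
TARGET_LIBRARY = "my_target_library"  # placeholder name
filtered = ds.filter(lambda ex: TARGET_LIBRARY in ex["content"])

# Spot-check a few matches before committing to a full fine-tuning run.
for example in filtered.take(3):
    print(example["content"][:200])
```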

@loubnabnl The data I want to fine-tune on isn't in StarCoderData, hence why I needed to clean it up myself :)
