Llama vs. OLMo token counts

#43
by WillHeld

Hi Dolma team!

Thanks for the fantastic work! I've been ingesting Dolma recently and have been assuming that the OLMo and Llama token counts should be roughly similar (e.g., a difference of under 10%, especially since most of Dolma is English).

Thus far this has held, except for Open Web Math, where the token count I'm getting from Llama is ~5B rather than the 12.6B reported in the README: a drop of roughly 60%, far outside that margin.

Do you all happen to have Llama token counts as well to cross-verify this? I'm trying to figure out whether the discrepancy is that Llama tokenizes that dataset in particular unusually efficiently, or whether there's some difference between the hosted files and those from which the reported token count was computed.
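
For reference, this is roughly the sanity check I've been running, as a minimal sketch with Hugging Face `transformers`. The repo IDs are my assumptions (the Llama repo is gated, so substitute whichever tokenizers you're actually comparing), and the sample documents are placeholders:

```python
from transformers import AutoTokenizer

# Repo IDs are assumptions -- swap in the tokenizers you actually use.
llama_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
olmo_tok = AutoTokenizer.from_pretrained("allenai/OLMo-7B-hf")

def count_tokens(tokenizer, texts):
    # Total tokens over an iterable of documents, no special tokens added.
    return sum(
        len(tokenizer(t, add_special_tokens=False).input_ids) for t in texts
    )

# Placeholder sample; in practice, read the `text` field from a few of the
# hosted open-web-math files so both counts run over identical documents.
docs = [
    r"Let $f(x) = x^2 + 1$. Then $\int_0^1 f(x)\,dx = 4/3$.",
    "The sequence 1, 1, 2, 3, 5, 8, ... satisfies F(n) = F(n-1) + F(n-2).",
]

n_llama = count_tokens(llama_tok, docs)
n_olmo = count_tokens(olmo_tok, docs)
print(f"Llama: {n_llama}  OLMo: {n_olmo}  ratio: {n_llama / n_olmo:.2f}")
```

Since both counts run over identical documents, the ratio isolates tokenizer efficiency: if it comes out near 1 on a sample but the full-corpus counts still disagree by ~2.5x, that would point at a difference in the files rather than the tokenizers.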

Thanks again!
