Tokenizer not downloading correctly with LFS

#1
by edensn - opened

I have Git LFS installed, but tokenizer.json won't download correctly. This is what ends up in the file:

version https://git-lfs.github.com/spec/v1
oid sha256:3f289bc05132635a8bc7aca7aa21255efd5e18f3710f43e3cdb96bcd41be4922
size 17525357

The other LFS files downloaded correctly.
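For now I can work around it by running git lfs pull inside the clone, which usually replaces pointer files with the real content, or by fetching the file directly with huggingface_hub (the repo id below is a placeholder):

from huggingface_hub import hf_hub_download

# Placeholder repo id -- substitute the actual model repository.
# hf_hub_download resolves LFS files server-side, so it returns the
# real file content rather than the pointer text.
path = hf_hub_download(repo_id="your-org/your-model", filename="tokenizer.json")
print(path)  # local path to the resolved file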

Since the .gitattributes are incorrect, I need to update them to allow tokenizer.json to be downloaded.

The SystemGemmas will be updated.
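For reference, the standard Git LFS tracking entry for this file in .gitattributes is a line like the following (this is the usual LFS attribute set, not something specific to this repo):

tokenizer.json filter=lfs diff=lfs merge=lfs -text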

I replaced the template in tokenizer_config.json of the original model with the one here https://huggingface.co/google/gemma-2-2b-it/discussions/25, and now my system prompt is rendered twice:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")  # placeholder repo id
docs = ["some document"]  # placeholder; one conversation is built per doc

conversations = [
    [
        {"role": "system", "content": "This is a system prompt"},
        {"role": "user", "content": "This is some User Text"},
    ]
    for doc in docs
]
prompts = tokenizer.apply_chat_template(conversations, tokenize=False, add_generation_prompt=True)
print(prompts[0])
<bos>This is a system prompt
<start_of_turn>system
This is a system prompt<end_of_turn>
<start_of_turn>user
This is some User Text<end_of_turn>
<start_of_turn>model

It looks like the template both prepends the raw system prompt right after <bos> and renders it again as a system turn, so I need to disable one of the two so the system prompt appears only once.
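One way to do that, assuming the model expects the system message as its own <start_of_turn>system turn, is a template along these lines (a sketch I put together, not the official Gemma template; adapt it to how your model was trained):

# Sketch: render every message exactly once as a turn, and never
# prepend the raw system content after <bos>.
tokenizer.chat_template = (
    "{{ bos_token }}"
    "{% for message in messages %}"
    "{% set role = 'model' if message['role'] == 'assistant' else message['role'] %}"
    "{{ '<start_of_turn>' + role + '\n' + message['content'] | trim + '<end_of_turn>\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<start_of_turn>model\n' }}{% endif %}"
)

With this, the example above renders the system prompt once as a system turn, followed by the user turn and the <start_of_turn>model generation prompt.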

Thanks!

@edensn Tokenizers are updated.
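If you still see the pointer file locally, it may be a stale cache; forcing a fresh download should confirm the fix (repo id is a placeholder):

from transformers import AutoTokenizer

# force_download=True bypasses any previously cached pointer file.
tok = AutoTokenizer.from_pretrained("your-org/your-model", force_download=True)
print(tok("hello").input_ids)  # loads and tokenizes only if tokenizer.json is valid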
