Tokenizer class LlamaTokenizer does not exist or is not currently imported.

#5
by SaTaBa - opened

Hello, I would love to use this model for my master's thesis, but unfortunately I get this error when trying to execute the example code. I have installed sentencepiece, and the tokenizer class name is already lowercase, so the fixes that helped others do not work here.
I would appreciate any suggestions. Thank you!

Machine Translation Team at Alibaba DAMO Academy org

Please make sure you have cloned the latest model files. You can try installing the latest transformers (v4.31.0) and then run the following example code again:

# pip install accelerate

from transformers import AutoModelForCausalLM, AutoTokenizer

# use_fast=False loads the slow SentencePiece tokenizer (needs the sentencepiece package); legacy=False is recognized from transformers v4.31.0
tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-MT/polylm-13b", legacy=False, use_fast=False)

# device_map="auto" relies on accelerate; trust_remote_code=True runs the custom modeling code shipped with the model repo
model = AutoModelForCausalLM.from_pretrained("DAMO-NLP-MT/polylm-13b", device_map="auto", trust_remote_code=True)
model.eval()

input_doc = "Beijing is the capital of China.\nTranslate this sentence from English to Chinese."

inputs = tokenizer(input_doc, return_tensors="pt")

generate_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    do_sample=False,
    num_beams=4,
    max_length=128,
    early_stopping=True,
)
decoded = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

print(f">>> {decoded}")
### results
### Beijing is the capital of China.\nTranslate this sentence from English to Chinese.\\n北京是中华人民共和国的首都。\n ...
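
If the error persists after upgrading, it usually means an older transformers is still being picked up by the interpreter. As a quick sanity check (a minimal sketch, assuming a standard pip environment), you can print the installed version before re-running the example:

# Quick environment check (assumes a plain pip setup)
import transformers
print(transformers.__version__)  # the example above expects 4.31.0 or newer

# If an older version is printed, upgrading usually clears the
# "Tokenizer class LlamaTokenizer does not exist" error:
#   pip install --upgrade transformers sentencepiece accelerate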

Yes, that solved my problem. It turned out I did have an older version of transformers, and now it works like a charm! Thank you so much for the help and for training this model!

SaTaBa changed discussion status to closed
