Model not loading from local directory

#108
by mdaniyal214 - opened

I have downloaded all the checkpoint files to my local machine, but when I try to load the model from the directory path, it does not load.

I am running it like this:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
print("IMPORTED")

torch.set_default_device("cpu")
tokenizer = AutoTokenizer.from_pretrained("H://CUSTOM_MODEL//HF_MODELS//phi-2", trust_remote_code=True)
print("Tokenizer")

model = AutoModelForCausalLM.from_pretrained("H://CUSTOM_MODEL//HF_MODELS//phi-2", torch_dtype="auto", trust_remote_code=True)

print("LOADED")
print("LOADED")
print("LOADED")

inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)

As you can see, I have placed print statements. Nothing after the model line runs. The output looks like this:
IMPORTED
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Tokenizer

Below is my local model folder:
[screenshot: contents of the local phi-2 model folder]

Hello,
were you able to resolve this issue? I am facing the same problem.
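
For anyone hitting the same symptom, here is a minimal debugging sketch that can help surface what goes wrong during the load instead of it appearing to hang silently. It turns on transformers' info-level logging, passes local_files_only=True so only the local folder is read, and catches any exception from from_pretrained. The directory path is the one from the post above (written with single separators); adjust it to your setup. Forcing float32 here is an assumption for CPU inference, not something the thread confirms.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, logging

# Show transformers' progress/info messages so a stall becomes visible.
logging.set_verbosity_info()

model_dir = "H:/CUSTOM_MODEL/HF_MODELS/phi-2"  # local folder with the downloaded phi-2 files

torch.set_default_device("cpu")
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)

try:
    model = AutoModelForCausalLM.from_pretrained(
        model_dir,
        torch_dtype=torch.float32,   # explicit float32; float16 can be very slow on CPU
        trust_remote_code=True,
        local_files_only=True,       # fail fast if a file is missing instead of querying the Hub
    )
    print("LOADED")
except Exception as exc:
    print(f"Model load failed: {exc}")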
