runtime error

Downloading model.safetensors: 100%|██████████| 1.12G/1.12G [00:14<00:00, 78.5MB/s]
Downloading adapter_model.bin: 100%|██████████| 6.31M/6.31M [00:00<00:00, 28.4MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 7, in <module>
    tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 769, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2039, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'sksayril/bpt-v-4-Bengali'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'sksayril/bpt-v-4-Bengali' is the correct path to a directory containing all relevant files for a BloomTokenizerFast tokenizer.
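The traceback indicates that 'sksayril/bpt-v-4-Bengali' does not ship tokenizer files, so AutoTokenizer cannot load from it directly. Since the log also downloads adapter_model.bin, the repo is most likely a PEFT/LoRA adapter on top of a base model. Below is a minimal sketch of one common workaround, assuming the repo's adapter_config.json records the base model; the rest of app.py is not shown, so all names other than peft_model_id are illustrative.

# Sketch of a possible fix, not the app's actual code: load the tokenizer (and base
# weights) from the base model recorded in the adapter config, then apply the adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

peft_model_id = "sksayril/bpt-v-4-Bengali"

# Read the adapter config to find the base model the adapter was trained on.
peft_config = PeftConfig.from_pretrained(peft_model_id)

# Load the tokenizer from the base model repo, which ships tokenizer files,
# instead of from the adapter repo, which does not.
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)

# Load the base model and apply the adapter weights on top of it.
base_model = AutoModelForCausalLM.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, peft_model_id)

If the tokenizer was customized for the adapter, the repo owner would instead need to upload the tokenizer files (tokenizer.json, tokenizer_config.json, etc.) to the adapter repo itself.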
