runtime error

Downloading shards: 100%|██████████| 2/2 [01:21<00:00, 40.91s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:41<00:00, 20.91s/it]
/usr/local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:139: LangChainDeprecationWarning: The class `HuggingFacePipeline` was deprecated in LangChain 0.0.37 and will be removed in 0.3. An updated version of the class exists in the langchain-huggingface package and should be used instead. To use it run `pip install -U langchain-huggingface` and import as `from langchain_huggingface import HuggingFacePipeline`.
  warn_deprecated(
Caching examples at: '/home/user/app/gradio_cached_examples/15'
Caching example 1/2
/usr/local/lib/python3.10/site-packages/transformers/generation/utils.py:1249: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.
  warnings.warn(
Caching example 2/2
Traceback (most recent call last):
  File "/home/user/app/app.py", line 53, in <module>
    gr.ChatInterface(
TypeError: Blocks.launch() got an unexpected keyword argument 'force_download'
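The fatal error is the last line: `force_download` is being passed through to Gradio's `Blocks.launch()`, which does not accept it. `force_download` is a download option for Hugging Face Hub APIs such as `from_pretrained()`, not a Gradio launch option, so the direct fix is to delete that keyword from the `launch(...)` call in `app.py`. As a defensive illustration (not the app's actual code — `launch` here is a stub and `filter_kwargs` is a hypothetical helper), unsupported keywords can be stripped before the call by checking the target function's signature:

```python
import inspect

def filter_kwargs(func, **kwargs):
    """Keep only the keyword arguments that `func` actually accepts."""
    allowed = set(inspect.signature(func).parameters)
    return {k: v for k, v in kwargs.items() if k in allowed}

# Stub standing in for gr.ChatInterface(...).launch(); the real Gradio
# method accepts options like share= and server_port=, but not force_download=.
def launch(share=False, server_port=7860):
    return {"share": share, "server_port": server_port}

# force_download=True is silently dropped instead of raising TypeError.
opts = filter_kwargs(launch, share=True, force_download=True)
launch(**opts)
```

The two `UserWarning`s earlier in the log are separate: they go away by passing `max_new_tokens` to the transformers generation pipeline instead of relying on the default `max_length=20`, and the `LangChainDeprecationWarning` by importing `HuggingFacePipeline` from the `langchain-huggingface` package as the warning itself suggests.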
