Runtime error

The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
0it [00:00, ?it/s]
text_encoder/model.safetensors not found
Loading pipeline components...:   0%|          | 0/7 [00:00<?, ?it/s]
Loading pipeline components...:  57%|█████▋    | 4/7 [00:27<00:20,  6.78s/it]
Loading pipeline components...:  71%|███████▏  | 5/7 [00:30<00:11,  6.00s/it]
Loading pipeline components...:  86%|████████▌ | 6/7 [00:32<00:04,  4.76s/it]
Loading pipeline components...: 100%|██████████| 7/7 [00:32<00:00,  4.66s/it]
/usr/local/lib/python3.10/site-packages/spaces/zero/decorator.py:76: UserWarning: `enable_queue` parameter is now ignored and always set to `True`
  warnings.warn("`enable_queue` parameter is now ignored and always set to `True`")
Traceback (most recent call last):
  File "/home/user/app/app.py", line 62, in <module>
    def generate(
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/decorator.py", line 113, in _GPU
    client.startup_report()
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/client.py", line 45, in startup_report
    raise RuntimeError("Error while initializing ZeroGPU: Unknown")
RuntimeError: Error while initializing ZeroGPU: Unknown
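For context on the traceback: the error is raised while app.py is still being imported, because the `spaces.GPU` decorator calls `client.startup_report()` at decoration time. That is why the stack ends at the `def generate(` line (app.py, line 62) rather than inside a request. Below is a minimal sketch of the pattern involved; only `generate`, the `spaces.GPU` decorator, and the `enable_queue` argument (which the UserWarning says is now ignored) come from the log, while the model id, pipeline class, and function parameters are illustrative assumptions.

```python
# app.py -- minimal sketch of the ZeroGPU pattern implicated in the traceback.
# Only `generate`, `spaces.GPU`, and `enable_queue` appear in the log above;
# the model id, pipeline class, and parameters are illustrative assumptions.
import gradio as gr
import spaces
import torch
from diffusers import DiffusionPipeline

# Corresponds to the "Loading pipeline components...: 7/7" progress lines.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed model id
    torch_dtype=torch.float16,
)

# Applying the decorator runs client.startup_report() immediately; if the
# ZeroGPU backend cannot be reached, the RuntimeError above is raised here,
# at import time. `enable_queue` is accepted but ignored, which produces the
# UserWarning seen in the log.
@spaces.GPU(enable_queue=True)
def generate(prompt: str):
    pipe.to("cuda")  # move the pipeline to the GPU inside the decorated call
    return pipe(prompt).images[0]

demo = gr.Interface(fn=generate, inputs="text", outputs="image")
demo.launch()
```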
