Runtime error

Exit code: 1. Reason:
ages (from sympy->torch->flash-attn) (1.3.0)
Downloading einops-0.8.0-py3-none-any.whl (43 kB)
Building wheels for collected packages: flash-attn
  Building wheel for flash-attn (setup.py): started
  Building wheel for flash-attn (setup.py): finished with status 'done'
  Created wheel for flash-attn: filename=flash_attn-2.6.3-py3-none-any.whl size=187309225 sha256=237ef9c6157db394e1ddde4ba609a21ebb98382377a27041edc09318801a6f24
  Stored in directory: /home/user/.cache/pip/wheels/7e/e3/c3/89c7a2f3c4adc07cd1c675f8bb7b9ad4d18f64a72bccdfe826
Successfully built flash-attn
Installing collected packages: einops, flash-attn
Successfully installed einops-0.8.0 flash-attn-2.6.3
Loading CLIP
Loading VLM's custom vision model
Loading tokenizer
Loading LLM: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
Downloading shards: 100%|██████████| 4/4 [00:40<00:00, 10.20s/it]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 5.62it/s]
Loading VLM's custom text model
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
Loading image adapter
pixtral_model: <class 'NoneType'>
pixtral_processor: <class 'NoneType'>
Traceback (most recent call last):
  File "/home/user/app/app.py", line 3, in <module>
    from joycaption import stream_chat_mod, get_text_model, change_text_model, get_repo_gguf
  File "/home/user/app/joycaption.py", line 237, in <module>
    @spaces.GPU()
TypeError: spaces.GPU() missing 1 required positional argument: 'func'
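The traceback points at line 237 of joycaption.py, where the decorator is written as @spaces.GPU(). On some releases of the spaces package, GPU is a plain decorator that takes the decorated function as its positional argument, so calling it with parentheses and no arguments raises exactly this TypeError. A minimal sketch of the failure mode, using a stand-in decorator rather than the real spaces package:

```python
# Stand-in for spaces.GPU on releases where it is a plain decorator:
# it expects the decorated function as a positional argument.
def GPU(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@GPU                      # OK: GPU receives the function directly
def ok():
    return "ran"

try:
    @GPU()                # fails: GPU() is called with no arguments
    def broken():
        pass
except TypeError as e:
    print(e)              # missing 1 required positional argument: 'func'

print(ok())
```

If this is the cause, the usual fixes are either to drop the parentheses (@spaces.GPU) in joycaption.py, or to pin a spaces version whose GPU decorator accepts being called with arguments (e.g. @spaces.GPU(duration=...)); which one applies depends on the spaces version installed in the Space, so check it with pip show spaces first.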
