Apply for community grant: Academic project (gpu and storage)

#1
by NohTow - opened
IMATAG org

Hello,

We wrote a paper about large language model watermarking (to appear on arXiv) and wrote watermarking scripts for previous methods, along with better detection schemes and multi-bit watermarking, based on the Hugging Face library.
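For context, here is a minimal sketch of the greenlist-style watermarking from prior work that these scripts build on (this is not the paper's code or its multi-bit method; the gamma/delta values, the key, and the gpt2 checkpoint are only placeholders): at each decoding step, a pseudo-random subset of the vocabulary is seeded from the previous token and its logits are boosted, so a detector holding the same key can replay the greenlists and test for their over-representation.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessor,
    LogitsProcessorList,
)


class GreenlistLogitsProcessor(LogitsProcessor):
    """Boost a pseudo-random 'greenlist' of tokens, seeded from the previous token."""

    def __init__(self, gamma: float = 0.5, delta: float = 2.0, hash_key: int = 15485863):
        self.gamma = gamma        # fraction of the vocabulary put in the greenlist
        self.delta = delta        # logit bonus added to greenlist tokens
        self.hash_key = hash_key  # secret key shared with the detector

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        vocab_size = scores.shape[-1]
        for b in range(input_ids.shape[0]):
            # Seed from the last token so the detector can replay the same greenlist.
            gen = torch.Generator().manual_seed(self.hash_key * int(input_ids[b, -1]))
            perm = torch.randperm(vocab_size, generator=gen)
            greenlist = perm[: int(self.gamma * vocab_size)].to(scores.device)
            scores[b, greenlist] += self.delta
        return scores


# Toy usage with a small public model (the demo itself uses OPT / Llama 2):
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The watermark is", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=30,
    logits_processor=LogitsProcessorList([GreenlistLogitsProcessor()]),
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```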
We made a working demo for people to try at the paper release (https://huggingface.co/spaces/NohTow/LLM_watermarking). The problem is that the demo uses OPT, which is rather old and might lead people to think that the generation quality is mostly due to the watermarking, whereas it is mostly due to the model itself. When trying to use Llama 2 in this Space, a timeout is raised when downloading the model (I have access to the weights and correctly set up the token). The demo works with Llama 2 locally.

Would it be possible to have some resources for people to try LLM watermarking?

Thanks in advance,
Antoine Chaffin.

IMATAG org

Quick update:
The paper appeared on arXiv and has been featured in the Daily Papers.

Hi @NohTow , we have assigned a GPU to this Space. Note that GPU grants are provided temporarily and might be removed after some time if the usage is very low.

To learn more about GPUs in Spaces, please check out https://huggingface.co/docs/hub/spaces-gpus

IMATAG org

Hello,
Thanks for the GPU!
However, it seems that the download is still timing out...
I set the token so you can try it yourself, but I could not manage to get it running after several attempts.

@NohTow
Looks like your Space is up now. I made the Llama 2 7B Space before, and the model was downloaded within the time limit, so maybe the timeout you faced was a temporary issue. Let me know if the problem occurs again.

BTW, sharing a token is not recommended, so you might want to revoke the token as soon as possible.

IMATAG org

The problem was occurring when generating/detecting, because the model was only loaded at that point.
I corrected that so the model is loaded only once at the beginning. Copying torch_dtype=torch.float16, device_map='auto' from your demo example into the model loading fixed the timeout issue, thanks!
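For reference, a minimal sketch of that fix (the checkpoint name and the generate helper are placeholders, not the demo's actual code): the model is loaded once at module import, in float16 with automatic device placement, and the generate/detect callbacks only reuse the already-loaded instance.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # placeholder; the demo may use a different checkpoint

# Loaded once at startup, so generate/detect callbacks reuse the same instance
# instead of re-loading the model on every call (which is what was timing out).
# For gated weights an access token is also needed; see the note on Space secrets below.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # halves memory and speeds up loading on the GPU
    device_map="auto",          # place the weights on the available GPU automatically
)


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```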

About the token, I am unsure what you mean. I set the token as a variable for the Space, which should only be accessible through the Space settings, right?
Without the token, the model cannot be loaded, because the model is private. Is there another way to load it?

@NohTow
Glad to hear the timeout issue has been resolved.
Regarding the token, Space variables are visible to anyone, while Space secrets are not.
https://huggingface.co/docs/hub/spaces-overview#managing-secrets
Currently, your token is set as a Space variable, so you should revoke the current token, generate a new one, and set it as a Space secret.
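In code, that typically amounts to reading the secret from the environment and passing it to from_pretrained; a small sketch, assuming the secret is named HF_TOKEN (both the secret name and the checkpoint are placeholders):

```python
import os
from transformers import AutoModelForCausalLM

# A Space secret is exposed to the app as an environment variable; it never
# appears in the repository or in the public Space settings.
hf_token = os.environ["HF_TOKEN"]  # placeholder secret name

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # placeholder private/gated checkpoint
    token=hf_token,                   # older transformers versions use use_auth_token= instead
)
```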

IMATAG org

I revoked the token and set a new one as a secret!
Thank you very much for all your feedback and help !

NohTow changed discussion status to closed
