
Fine-tuning Llama2-7B-Chat for Privacy Policy Q&A and Summarization

By Chris Puzzo and Christian Jackson

For Comp741/841

README updated: 5/22/24

The goal of this project is to fine-tune Meta's Llama 2 transformer using PEFT and QLoRA so that it can analyze privacy policies.
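For reference, a typical QLoRA setup combines 4-bit quantization of the base model with low-rank adapters from PEFT. The sketch below shows what such a configuration can look like; every hyperparameter here is an illustrative assumption, not the exact values used to train this model.

```python
# Sketch of a QLoRA configuration using transformers + bitsandbytes + peft.
# All values are illustrative assumptions, not this model's training settings.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize base weights to 4-bit
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # run compute in fp16
)

lora_config = LoraConfig(
    r=64,                 # rank of the low-rank adapter matrices
    lora_alpha=16,        # scaling factor applied to adapter output
    lora_dropout=0.1,     # dropout on adapter layers during training
    task_type="CAUSAL_LM",
)
```

With these two configs, the base model is loaded with `quantization_config=bnb_config` and wrapped with `peft.get_peft_model(model, lora_config)` before training.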

Privacy policies are often written to be confusing and highly technical, so a tool that helps users answer questions about and summarize privacy policies can be very useful for understanding how personal data is being used. The HOWTO.md file contains simple instructions on running the tool.

Setup

This tool is designed to be used on Colab with the Hugging Face transformers library. For more info, check out the model on GitHub. The model was trained using training code for Llama from Maxime Labonne, available here.

There is no requirements file in this repo because it uses dependencies pre-installed on Colab.
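If you run the model outside Colab, you will need to install the dependencies yourself. The package list below is an assumption based on the libraries named in this README, not a pinned requirements file.

```shell
# Assumed dependency set for running outside Colab (versions unpinned).
pip install transformers accelerate peft bitsandbytes
```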

Usage

This model is used with the Hugging Face transformers library and can be loaded as follows:

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ChrisPuzzo/llama-2-7b-privacy")
model = AutoModelForCausalLM.from_pretrained("ChrisPuzzo/llama-2-7b-privacy")
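Once loaded, the model can be prompted like any Llama-2-chat model. The sketch below assumes Llama 2's `[INST]`/`<<SYS>>` chat format; the system prompt, helper name, and generation settings are illustrative assumptions, not part of this repo.

```python
# Sketch of querying the model about a policy, assuming Llama 2's
# chat prompt format. The system prompt below is an assumption.

def build_prompt(question: str, policy_text: str) -> str:
    """Wrap a question and a policy excerpt in Llama 2 chat format."""
    system = "You answer questions about the following privacy policy."
    return (
        f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
        f"Policy:\n{policy_text}\n\nQuestion: {question} [/INST]"
    )

# Example generation call (requires the model loaded as shown above):
# inputs = tokenizer(build_prompt("What data is collected?", policy),
#                    return_tensors="pt")
# output = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```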

Results

We did a small amount of testing on the model, but the results were inconclusive. As you can see in the ROUGE spreadsheet, the scores weren't great. However, we believe that ROUGE might not be the best metric for judging this task.
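For readers unfamiliar with the metric, ROUGE-1 scores the unigram overlap between a generated summary and a reference. The function below is a minimal pure-Python sketch of ROUGE-1 F1 for illustration only; it is not the evaluation script used for the spreadsheet.

```python
# Minimal ROUGE-1 F1 sketch (illustration only, not our exact evaluation).
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat", "the cat sat on the mat"), 3))  # prints 0.667
```

A short candidate that copies part of the reference gets high precision but low recall, which is one reason ROUGE can undervalue abstractive summaries that paraphrase rather than copy.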

Next Steps

The next steps will be to train the model on another dataset we found that summarizes text in privacy policies.

