
Overview

This is a LoRA adapter for google/flan-ul2 (available on the Hugging Face Hub). It takes a text document as input and outputs a synopsis and a set of document classifier tags.

You can use it to convert your training data into conditional pretraining examples; a minimal sketch of that conversion step follows the example output at the end of this card.
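The example below assumes a recent PyTorch build plus the transformers and peft libraries; accelerate is also needed for the device_map placement used here. A typical install would be:

pip install torch transformers peft accelerate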

import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import math

device_id = 0  # GPU index to load the model onto

# Device map placing all FLAN-UL2 modules on that GPU
device_map = {
    'shared': device_id,
    'lm_head': device_id,
    'encoder': device_id,
    'decoder': device_id,
    'decoder.final_layer_norm': device_id,
    'decoder.dropout': device_id
}

# Load the PEFT config for the pretrained adapter checkpoint
peft_model_id = "rallio67/condlabeler-alpha"
config = PeftConfig.from_pretrained(peft_model_id)

model_id = "google/flan-ul2"

# load base LLM model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map=device_map)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id, device_map=device_map)
model.eval()

print("Peft model loaded")

def generate_condlabels(input_text):
    # input_text is a list of strings; tokenize them as a single padded batch
    inputs = tokenizer(input_text, return_tensors="pt", padding=True)
    outputs = model.generate(
        input_ids=inputs["input_ids"].to(device_id),
        attention_mask=inputs["attention_mask"].to(device_id),
        do_sample=True,
        num_return_sequences=1,
        max_new_tokens=80,
        top_p=0.95,
        temperature=0.5,
        penalty_alpha=0.6,   # only used by contrastive search, so it has no effect while do_sample=True
        top_k=5,             # restrict sampling to the 5 most likely tokens at each step
        bos_token_id=0,      # T5-style special token ids: pad/decoder start = 0, eos = 1
        eos_token_id=1,
        repetition_penalty=1.0,
        return_dict_in_generate=True,
        output_scores=True,
        use_cache=True
    )
    # Convert the per-step scores to probabilities and compute a total logprob
    # for each generated sequence, stored alongside the decoded text
    output_tuple = []
    probs = torch.stack(outputs.scores, dim=1).softmax(-1)
    for batch_idx, sequence in enumerate(outputs.sequences):
        out_text = tokenizer.decode(sequence, skip_special_tokens=False)
        # sequence[0] is the decoder start token, so sequence[1:] lines up with the scores;
        # the small epsilon avoids log(0) at padded positions
        logprobs = sum(
            math.log(probs[batch_idx][step][token_id.item()].item() + 0.001)
            for step, token_id in enumerate(sequence[1:])
        )
        output_tuple.append((out_text, round(logprobs, 2)))
    return output_tuple

text="""Conditional Pretraining of Large Language Models

Large language models (LLMs), such as OpenAI's ChatGPT and similar chatbot products from other organizations, have recently gained widespread adoption. These models can extend text or respond to instructions in a natural and helpful manner. Despite the core technologies behind LLMs, namely the transformer architecture and the GPT decoder-only causal language model, remaining relatively unchanged for over five years, the surge in popularity of ChatGPT can be largely attributed to recent approaches that better align the output of LLMs with users' and service providers' intentions.

Two primary approaches have been employed to better align large language models with human expectations. The first is known as supervised finetuning (SFT) on natural instructions, while the second is called reinforcement learning from human feedback (RLHF). Both methods aim to improve the performance and usability of LLMs, but they differ in their implementation. SFT involves training the model using labeled datasets that contain natural instructions, which helps the model understand and respond more accurately to user queries. RLHF, on the other hand, is a technique that uses human preferences as a reward signal to fine-tune models. It involves collecting a dataset of human-written demonstrations on prompts, training supervised learning baselines, and then gathering a dataset of human-labeled comparisons between two model outputs on a larger set of prompts. A reward model (RM) is trained on this dataset to predict which output labelers would prefer, and this RM is used as a reward function to fine-tune the GPT-3 policy using the PPO algorithm. However, there is an "alignment tax" associated with this approach, which can result in worse performance in some situations.

A third approach to align language models with human expectations in a more transparent and end-user controllable manner is called Conditional Pretraining. In this method, a large number of pretraining examples are tagged with labels that describe the content using human-understandable classifiers. Content tagging is used in nearly all human generated online information-sharing environments as a way to organize content, and help users find information most relevant to their interests. This labeling can be performed in a mostly unsupervised fashion, utilizing encoder-only or encoder-decoder natural language understanding (NLU) machine learning models.

There are many widely used tags online that help categorize and filter content based on user preferences. "Suitable for work" (SFW) and "not suitable for work" (NSFW) tags are commonly found on sites like Reddit, Imgur, and various online forums. Additionally, book and movie reviews often utilize the "Spoilers" tag to indicate if the review contains information that may negatively impact the enjoyment of the content. User-generated story sites, such as Archive of Our Own (AO3) and FanFiction.net, employ diverse tags to provide clear indications of the content readers can expect within the stories (Figure 1). Furthermore, labels like G, PG, PG-13, and R, have been utilized for decades to inform users about television and movie content.

By leveraging conditional pretraining, language models could be better adapted to users' interests and preferences, resulting in a more aligned and enjoyable experience.

Converting Existing Pretraining Data into Conditional Pretraining Data

The prevailing method for training LLMs involves collecting vast quantities of text from the internet and feeding this minimally processed text into the LLM. The pretraining objective is to predict the subsequent word given all prior words in the training example. Often, the text is divided in a manner that allows documents to be fragmented at any point, such as in the middle of a paragraph. These fragments are then randomly incorporated into larger batches of training examples, typically ranging from 2 to 4 million examples per training step. Although this approach has proven effective, it may not be the most optimal way to train these models.

In contrast, conditional pretraining aims to prepend each training example with a set of descriptive tags and a brief synopsis that accurately represents the text in the training example (Figure 2). These tags and synopses can be efficiently generated using fine tuned NLU models such as BERT or T5. Although there is considerable computational cost associated with processing all the training examples, once the conditional pretraining examples are generated, they become reusable and easily understandable by humans. This approach enhances the training process, resulting in more accurate and user-friendly language models.

Transparency and Accountability

Another significant advantage of conditional pretraining is the transparency of the tags used on documents, which can be easily understood by auditors or end users of the models. At present, the instructions and reward models employed in most LLMs are proprietary and not available for public review. This lack of transparency makes it challenging to comprehend how and why models respond to culturally or politically sensitive topics. Even when there are disagreements among people about how these models should be aligned and what values they should uphold, it is difficult to engage in meaningful discussions or debates on these sensitive topics as long as the values of the organizations developing the LLMs remain concealed or obscured by carefully crafted press releases and position papers.

How to Prepare a Conditional Pretraining Dataset

We have developed a fine tuned LoRA model based on the open source FLAN-UL2 that takes as input about 2000 words of text and outputs the conditional pretraining labels for the document. An example output from this conditional tagging model for a recent news article about LAION in Forbes (article  link) is below. To generate these document tags only text from the body of the article was used.
"""

# Generate outputs for a list of strings
output = generate_condlabels([text])
print(output[0][0])

"""<pad> Synopsis: The document outlines the conditional pretraining of large language models and provides information about the ChatGPT project. Tags: [ human language understanding, conditional pretraining, chatbots, machine learning]</s>"""