
GPT2 Pretrained Lyric Generation Model

This repository contains a GPT-2 model fine-tuned for lyric generation. The model was trained with Hugging Face's Transformers library.

Model Details

  • Model architecture: GPT2
  • Training data: The datasets were created using the Genius API and are linked in the model's tags.
  • Training duration: [Mention how long the model was trained]

Usage

The model generates lyrics using nucleus (top-p) sampling with a probability threshold of 0.9, which produces more diverse and less repetitive text than greedy decoding.
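To make the sampling strategy concrete, here is a minimal, self-contained sketch of the nucleus-sampling idea: keep only the smallest set of highest-probability tokens whose cumulative probability reaches `top_p`, then renormalize. This is an illustration of the concept, not the Transformers library's actual implementation; the distribution below is a made-up toy example.

```python
def top_p_filter(probs, top_p=0.9):
    """Illustrative nucleus-sampling filter: keep the smallest set of
    tokens whose cumulative probability reaches top_p, renormalized.
    Returns {token_index: renormalized_probability}."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# Toy next-token distribution: a few likely tokens plus a low-probability tail.
probs = [0.5, 0.3, 0.15, 0.04, 0.01]
print(top_p_filter(probs, top_p=0.9))  # the tail tokens are cut off
```

With `top_p=0.9`, the tokens with probabilities 0.5, 0.3, and 0.15 are kept (their cumulative mass first reaches 0.9) and the tail is discarded, so low-probability tokens can never be sampled.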

Here is a basic usage example:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("SpartanCinder/GPT2-pretrained-lyric-generation")
model = GPT2LMHeadModel.from_pretrained("SpartanCinder/GPT2-pretrained-lyric-generation")

# Encode a prompt and sample five candidate continuations with top-p 0.9.
input_ids = tokenizer.encode("Once upon a time", return_tensors="pt")
outputs = model.generate(
    input_ids,
    max_length=100,
    num_return_sequences=5,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)

# Decode every returned sequence, not just the first.
for sequence in outputs:
    print(tokenizer.decode(sequence, skip_special_tokens=True))
```
Model size: 124M parameters (F32 tensors, stored in Safetensors format)
