---
library_name: transformers
license: mit
language:
  - en
---

# Model Card: GPT-2 Fine-Tuned for Financial Sentiment Analysis

https://huggingface.co/rezahf2024/fine_tuned_financial_setiment_analysis_gpt2_model

## Model Details

### Model Description

This is a GPT-2 model fine-tuned on the https://huggingface.co/datasets/FinGPT/fingpt-sentiment-train dataset for downstream financial sentiment analysis.

The classification head predicts nine classes, which decode to sentiment labels as follows:

```python
label_mapping = {
    'LABEL_0': 'mildly positive',
    'LABEL_1': 'mildly negative',
    'LABEL_2': 'moderately negative',
    'LABEL_3': 'moderately positive',
    'LABEL_4': 'positive',
    'LABEL_5': 'negative',
    'LABEL_6': 'neutral',
    'LABEL_7': 'strong negative',
    'LABEL_8': 'strong positive',
}
```

## Uses

The model is already fine-tuned for downstream financial sentiment analysis tasks.

## How to Get Started with the Model

Use the code below to get started with the model.
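A minimal sketch, assuming the `transformers` pipeline API and the label mapping shown in the Model Description (the input sentence is only an illustration):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
classifier = pipeline(
    "text-classification",
    model="rezahf2024/fine_tuned_financial_setiment_analysis_gpt2_model",
)

# Decode the raw LABEL_k outputs into readable sentiment names
label_mapping = {
    'LABEL_0': 'mildly positive', 'LABEL_1': 'mildly negative',
    'LABEL_2': 'moderately negative', 'LABEL_3': 'moderately positive',
    'LABEL_4': 'positive', 'LABEL_5': 'negative', 'LABEL_6': 'neutral',
    'LABEL_7': 'strong negative', 'LABEL_8': 'strong positive',
}

result = classifier("Shares rallied after the company raised its full-year guidance.")[0]
print(label_mapping[result['label']], result['score'])
```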

## Training Details

### Training Data

```python
from datasets import load_dataset, DatasetDict
from transformers import GPT2Tokenizer
import random
import string

# Load the FinGPT sentiment dataset
dataset = load_dataset("FinGPT/fingpt-sentiment-train")

# GPT-2 has no pad token, so reuse the end-of-sequence token for padding
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def tokenize_function(examples):
    return tokenizer(examples["input"], padding="max_length", truncation=True)

tokenized_datasets = dataset.map(tokenize_function, batched=True)

def generate_random_id():
    return ''.join(random.choices(string.ascii_lowercase + string.digits, k=10))

# Derive the label mapping from the distinct sentiment strings in the data.
# Note: iterating over a Python set has no fixed order, so the indices can
# differ between runs; the mapping in the Model Description is the one this
# checkpoint was trained with.
unique_outputs = set(dataset['train']['output'])
label_mapping = {label: index for index, label in enumerate(unique_outputs)}

def transform_dataset(dataset):
    dataset = dataset.rename_column('input', 'text')
    dataset = dataset.rename_column('output', 'label_text')
    dataset = dataset.remove_columns(['instruction'])
    dataset = dataset.add_column('id', [generate_random_id() for _ in range(dataset.num_rows)])
    dataset = dataset.add_column('label', [label_mapping[label_text] for label_text in dataset['label_text']])
    return dataset

transformed_dataset = DatasetDict({'train': transform_dataset(tokenized_datasets['train'])})
transformed_dataset['train'].set_format(
    type=None,
    columns=['id', 'text', 'label', 'label_text', 'input_ids', 'attention_mask'],
)

# Hold out 30% of the data for evaluation
train_test_split = transformed_dataset['train'].train_test_split(test_size=0.3, seed=42)
tokenized_datasets['train'] = train_test_split['train']
tokenized_datasets['test'] = train_test_split['test']

# Small subsets for quick experiments
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(100))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(100))
```
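As a quick sanity check (illustrative, not part of the original preprocessing), one transformed record can be inspected to confirm the expected columns:

```python
# Illustrative only: verify the transformed record layout
example = small_train_dataset[0]
print(example['id'], example['label'], example['label_text'])
print(example['text'][:80])
```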

### Fine-tuning Procedure

```python
from transformers import GPT2ForSequenceClassification, TrainingArguments, Trainer

# Nine output classes, matching the label mapping above
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=9)
# GPT-2 defines no pad token, so point the classification head at the one
# assigned to the tokenizer above
model.config.pad_token_id = tokenizer.pad_token_id

training_args = TrainingArguments(
    output_dir="test_trainer",
    # evaluation_strategy="epoch",
    per_device_train_batch_size=1,  # reduce batch size to fit in memory
    per_device_eval_batch_size=1,   # optionally reduce for evaluation as well
    gradient_accumulation_steps=4,  # effective train batch size of 4
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,  # defined in the Evaluation section below
)

trainer.train()
trainer.evaluate()
trainer.save_model("fine_tuned_finsetiment_model")
```
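As an optional variant (an assumption, not part of the original training script), the human-readable label names can be attached to the model config so that downstream pipelines report sentiment strings instead of `LABEL_k`:

```python
# Hypothetical variant: bake readable label names into the config
id2label = {index: label for label, index in label_mapping.items()}

model = GPT2ForSequenceClassification.from_pretrained(
    "gpt2",
    num_labels=9,
    id2label=id2label,
    label2id=label_mapping,
)
```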

### Training Hyperparameters

- **Training regime:** [More Information Needed]

### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

Accuracy over the nine sentiment classes is used as the evaluation metric:

```python
import numpy as np
import evaluate

metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
```
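A quick self-contained check of the metric wiring, using made-up logits (illustrative only):

```python
import numpy as np

# Two fake 9-class logit rows whose argmax is 4 and 6
dummy_logits = np.zeros((2, 9))
dummy_logits[0, 4] = 1.0
dummy_logits[1, 6] = 1.0
dummy_labels = np.array([4, 5])

print(compute_metrics((dummy_logits, dummy_labels)))  # {'accuracy': 0.5}
```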

### Results

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

## Model Card Contact

rezaul.karim.fit@gmail.com