---
datasets:
  - CCRss/small-chatgpt-paraphrases-kz
language:
  - kk
library_name: transformers
tags:
  - text-generation-inference
license: mit
---

## Model Overview

The `qqp_kz` model is a paraphrasing tool tailored for the Kazakh language. It is built on the `humarin/chatgpt_paraphraser_on_T5_base` model, inheriting its robust architecture and adapting it to the nuances of Kazakh.

### Key Features

- **Language:** Specifically designed for paraphrasing in Kazakh.
- **Base Model:** Derived from `humarin/chatgpt_paraphraser_on_T5_base`, a proven model for paraphrasing tasks.
- **Tokenizer:** Uses `CCRss/tokenizer_t5_kz` for optimal Kazakh language processing.

## Data Preprocessing

The dataset used to train the `qqp_kz` model is preprocessed into fixed-length input/label pairs to ensure compatibility with the model and good training performance:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Kazakh-specific tokenizer, shared by source and target text
tokenizer = AutoTokenizer.from_pretrained("CCRss/tokenizer_t5_kz")

# Paraphrase pairs with "src"/"trg" columns and "train"/"valid" splits
dataset = load_dataset("CCRss/small-chatgpt-paraphrases-kz")

def preprocess_data(example):
    source_inputs = tokenizer(example["src"], padding="max_length", truncation=True, max_length=128)
    target_inputs = tokenizer(example["trg"], padding="max_length", truncation=True, max_length=128)
    # The source encoding is the model input; the target token ids serve as the labels
    return {**source_inputs, "labels": target_inputs["input_ids"]}

encoded_dataset = dataset.map(preprocess_data)
encoded_dataset.set_format("torch")
```

## Model Training

The model is trained with the following configuration:

```python
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer

# Start from the pretrained paraphraser and fine-tune it on the Kazakh pairs
name_of_model = "humarin/chatgpt_paraphraser_on_T5_base"
model = AutoModelForSeq2SeqLM.from_pretrained(name_of_model)

training_args = Seq2SeqTrainingArguments(
    per_device_train_batch_size=21,
    gradient_accumulation_steps=3,  # effective batch size of 63 per device
    learning_rate=5e-5,
    save_steps=2000,
    num_train_epochs=3,
    output_dir='./results',
    logging_dir='./logs',
    logging_steps=2000,
    eval_steps=2000,
    evaluation_strategy="steps"
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=encoded_dataset['train'],
    eval_dataset=encoded_dataset['valid']
)

trainer.train()
```

## Usage

The `qqp_kz` model is designed specifically for paraphrasing Kazakh text, making it well suited to a variety of NLP tasks such as content creation, improving translations, and linguistic research.

To use the model:

- Install the `transformers` library.
- Load the model and tokenizer from the Hugging Face Hub.
- Pass in your Kazakh text for paraphrasing (a minimal sketch follows).
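A minimal inference sketch, assuming the fine-tuned weights are published under the repository id `CCRss/qqp_kz` (an assumption based on this card's name; the tokenizer id comes from the preprocessing section above):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repository id for the model is an assumption; adjust it to the actual Hub id.
tokenizer = AutoTokenizer.from_pretrained("CCRss/tokenizer_t5_kz")
model = AutoModelForSeq2SeqLM.from_pretrained("CCRss/qqp_kz")

text = "Қазақстан - Орталық Азиядағы ең ірі мемлекет."  # example Kazakh sentence

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
outputs = model.generate(
    **inputs,
    max_length=128,
    num_beams=5,
    num_return_sequences=3,  # return several paraphrase candidates
)

for candidate in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(candidate)
```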

## Example Deployment

For a practical demonstration of the model in action, please refer to our Google Colab notebook, which provides a complete example of how to run inference with the `qqp_kz` model.

## Contributions and Feedback

We welcome contributions to the `qqp_kz` model. If you have suggestions or improvements, or encounter any issues, please feel free to open an issue in the repository.