
Mukayese: Turkish NLP Strikes Back

Summarization: mukayese/transformer-turkish-summarization

This model is uncased. It was initialized from scratch and trained only on the mlsum/tu dataset, with no pre-training.
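A minimal usage sketch (not part of the original card), assuming the standard transformers summarization pipeline:

```python
from transformers import pipeline

# Load the model through the summarization pipeline and summarize a Turkish article.
summarizer = pipeline(
    "summarization",
    model="mukayese/transformer-turkish-summarization",
)

article = "..."  # Turkish news article text goes here
summary = summarizer(article, max_length=120, min_length=20)[0]["summary_text"]
print(summary)
```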

The model achieves the following results on the evaluation set:

  • Rouge1: 43.2049
  • Rouge2: 30.7082
  • RougeL: 38.1981
  • RougeLsum: 39.9453

Check the paper (cited below) for more details on the model and the dataset.
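For reference, the scores above could be recomputed along these lines. This is only a sketch assuming the `datasets` and `evaluate` libraries (the latter post-dates the Transformers version listed below), not the authors' evaluation script:

```python
from datasets import load_dataset
from transformers import pipeline
import evaluate

# Sketch of recomputing ROUGE on the MLSUM Turkish test split.
# Field names "text" and "summary" follow the mlsum dataset schema.
rouge = evaluate.load("rouge")
summarizer = pipeline("summarization", model="mukayese/transformer-turkish-summarization")

test = load_dataset("mlsum", "tu", split="test")
predictions = [
    summarizer(doc, truncation=True)[0]["summary_text"] for doc in test["text"]
]
scores = rouge.compute(predictions=predictions, references=test["summary"])
print({k: round(v * 100, 4) for k, v in scores.items()})  # scale to match the card
```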

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto Seq2SeqTrainingArguments):

  • learning_rate: 0.0001
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • total_eval_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15.0
  • mixed_precision_training: Native AMP
  • label_smoothing_factor: 0.1
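
As a rough illustration (not the authors' original training script), these values can be expressed as transformers' Seq2SeqTrainingArguments; the total train batch size of 64 comes from 4 per device × 8 GPUs × 2 accumulation steps:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters above as Seq2SeqTrainingArguments.
# The output directory is a hypothetical placeholder; launching across
# 8 GPUs (distributed_type: multi-GPU) is handled by the launcher, not here.
training_args = Seq2SeqTrainingArguments(
    output_dir="transformer-turkish-summarization",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=4,   # x 8 GPUs x 2 accumulation steps = 64 total
    per_device_eval_batch_size=8,    # x 8 GPUs = 64 total
    gradient_accumulation_steps=2,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    label_smoothing_factor=0.1,
    fp16=True,                       # "Native AMP" mixed precision
    seed=42,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the defaults.
)
```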

Framework versions

  • Transformers 4.11.3
  • PyTorch 1.8.2+cu111
  • Datasets 1.14.0
  • Tokenizers 0.10.3

Citation

@misc{safaya-etal-2022-mukayese,
    title={Mukayese: Turkish NLP Strikes Back},
    author={Ali Safaya and Emirhan Kurtuluş and Arda Göktoğan and Deniz Yuret},
    year={2022},
    eprint={2203.01215},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}