
m-BERT Indonesian MRC (cased)

m-BERT model fine-tuned on the IDK-MRC dataset for extractive question answering in Indonesian. Please refer to the paper cited below for more details on the model.
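
Usage

A minimal inference sketch using the Hugging Face transformers question-answering pipeline with this repository's model ID (rifkiaputri/mbert-base-id-finetune-idk-mrc). The Indonesian question and context strings below are illustrative only and are not taken from IDK-MRC.

from transformers import pipeline

# Load the fine-tuned checkpoint for extractive question answering.
qa = pipeline(
    "question-answering",
    model="rifkiaputri/mbert-base-id-finetune-idk-mrc",
    tokenizer="rifkiaputri/mbert-base-id-finetune-idk-mrc",
)

# Illustrative Indonesian context and question (not from the dataset).
context = (
    "Soekarno adalah presiden pertama Republik Indonesia. "
    "Ia menjabat dari tahun 1945 hingga 1967."
)
question = "Siapa presiden pertama Republik Indonesia?"

# handle_impossible_answer=True lets the pipeline return an empty answer
# when the question cannot be answered from the context, which matches
# the unanswerable-question setting of IDK-MRC.
result = qa(question=question, context=context, handle_impossible_answer=True)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'Soekarno'}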

Citation Info

@inproceedings{putri-oh-2022-idk,
    title = "{IDK}-{MRC}: Unanswerable Questions for {I}ndonesian Machine Reading Comprehension",
    author = "Putri, Rifki Afina  and
      Oh, Alice",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.465",
    pages = "6918--6933",
}

Model Details

Format: Safetensors
Model size: 177M params
Tensor types: I64, F32
