
SPBERT MLM+WSO (Initialized)

Introduction

Paper: SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs
Authors: Hieu Tran, Long Phan, James Anibal, Binh T. Nguyen, Truong-Son Nguyen

How to use

For more details, check out our GitHub repo. Here is an example in PyTorch:

from transformers import AutoTokenizer, AutoModel
# load the SPBERT tokenizer and encoder
tokenizer = AutoTokenizer.from_pretrained("razent/spbert-mlm-wso-base")
model = AutoModel.from_pretrained("razent/spbert-mlm-wso-base")
# a SPARQL query in SPBERT's verbalized token form
text = "select * where brack_open var_a var_b var_c sep_dot brack_close"
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
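The output is a standard transformers model output. One common way to turn it into a single query embedding is to take the [CLS] vector or mean-pool the last hidden states; a minimal sketch of both options follows (the pooling choice is an assumption on our part, not something the paper prescribes):

import torch

with torch.no_grad():
    output = model(**encoded_input)
hidden = output.last_hidden_state  # shape: (batch, seq_len, hidden_size)

# Option 1: use the [CLS] token vector as the query representation
cls_embedding = hidden[:, 0, :]

# Option 2: mean-pool over non-padding tokens using the attention mask
mask = encoded_input["attention_mask"].unsqueeze(-1).float()
mean_embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)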

or in TensorFlow:

from transformers import AutoTokenizer, TFAutoModel
# load the SPBERT tokenizer and encoder
tokenizer = AutoTokenizer.from_pretrained("razent/spbert-mlm-wso-base")
model = TFAutoModel.from_pretrained("razent/spbert-mlm-wso-base")
# a SPARQL query in SPBERT's verbalized token form
text = "select * where brack_open var_a var_b var_c sep_dot brack_close"
encoded_input = tokenizer(text, return_tensors="tf")
output = model(encoded_input)
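Note that the input is not raw SPARQL: SPBERT's preprocessing verbalizes punctuation and variables into tokens such as brack_open, sep_dot, and var_a. The sketch below is only a rough illustration of that mapping with a hypothetical helper (verbalize_sparql); the exact preprocessing rules are in the GitHub repo:

# Hypothetical sketch of the verbalization step; see the repo for the real script.
REPLACEMENTS = {
    "{": "brack_open",
    "}": "brack_close",
    ".": "sep_dot",
}

def verbalize_sparql(query: str) -> str:
    tokens = []
    for tok in query.split():
        if tok in REPLACEMENTS:
            tokens.append(REPLACEMENTS[tok])      # punctuation -> named token
        elif tok.startswith("?"):
            tokens.append("var_" + tok[1:])       # ?a -> var_a
        else:
            tokens.append(tok.lower())            # keywords lowercased
    return " ".join(tokens)

print(verbalize_sparql("SELECT * WHERE { ?a ?b ?c . }"))
# select * where brack_open var_a var_b var_c sep_dot brack_close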

Citation

@misc{tran2021spbert,
      title={SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs}, 
      author={Hieu Tran and Long Phan and James Anibal and Binh T. Nguyen and Truong-Son Nguyen},
      year={2021},
      eprint={2106.09997},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}