---
language: en
tags:
- exbert
license: mit
pipeline_tag: feature-extraction
widget:
- text: "<ENT> ER </ENT> crowding has become a wide-spread problem."
---

## KRISSBERT

Entity linking faces significant challenges such as prolific variations and prevalent ambiguities, especially in high-value domains with myriad entities. Standard classification approaches suffer from the annotation bottleneck and cannot effectively handle unseen entities. Zero-shot entity linking has emerged as a promising direction for generalizing to new entities, but it still requires example gold entity mentions during training and canonical descriptions for all entities, both of which are rarely available outside of Wikipedia ([Logeswaran et al., 2019](https://aclanthology.org/P19-1335.pdf); [Wu et al., 2020](https://aclanthology.org/2020.emnlp-main.519.pdf)). We explore Knowledge-RIch Self-Supervision (KRISS) and train a contextual encoder (KRISSBERT) for entity linking, by leveraging readily available unlabeled text and domain knowledge.

This KRISSBERT is initialized with [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) parameters, and then trained on self-supervised examples generated by combining [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts with the [UMLS](https://www.nlm.nih.gov/research/umls/index.html) ontology. Experiments on seven standard biomedical entity linking datasets show that KRISSBERT attains a new state of the art, outperforming prior self-supervised methods by as much as 20 absolute points in accuracy.
See [Zhang et al., 2021](https://arxiv.org/abs/2112.07887) for details.
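As a rough sketch of how the encoder can be loaded and used to embed a mention (the full inference pipeline, including UMLS candidate generation and nearest-neighbor search, is described in the paper; the hub id below assumes this model card's repository):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load KRISSBERT (a PubMedBERT-base encoder trained with KRISS self-supervision).
model_id = "microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

# Mark the entity mention with <ENT> ... </ENT>, as in the widget example above.
# (If the markers are not in the vocabulary, they are split into subwords.)
text = "<ENT> ER </ENT> crowding has become a wide-spread problem."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Use the [CLS] vector as a contextual mention embedding; linking then reduces
# to nearest-neighbor search against embeddings of UMLS entity examples.
mention_embedding = outputs.last_hidden_state[:, 0]
print(mention_embedding.shape)
```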

Note that some prior work, such as [BioSyn](https://aclanthology.org/2020.acl-main.335.pdf), [SapBERT](https://aclanthology.org/2021.naacl-main.334.pdf), and their follow-up ([Lai et al., 2021](https://aclanthology.org/2021.findings-emnlp.140.pdf)), claimed to do entity linking, but their systems completely ignore the context of an entity mention and can only predict a surface form, _**not the CUI**_ (see Figure 1 in [BioSyn](https://aclanthology.org/2020.acl-main.335.pdf)). Consequently, they can't disambiguate ambiguous mentions. For instance, given the entity mention "_ER_" in the sentence "*ER crowding has become a wide-spread problem*", their systems predict the nearest entity name in the ontology, which is also "ER". They can't pinpoint the target entity "*Emergency Room (C0562508)*", because other entities such as "*Estrogen Receptor Gene (C1414461)*" and "*Endoplasmic Reticulum (C0014239)*" also use "ER" as an alias. Without context information, their systems can't resolve this ambiguity. Worse, their evaluation counts such a prediction as correct simply because "ER" matches one of the aliases of the gold entity. As a result, the reported numbers in their papers do not reflect true entity linking performance.
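To make the contrast concrete, here is a toy sketch of context-aware nearest-neighbor linking. The embedding vectors are illustrative stand-ins, not real KRISSBERT outputs; a surface-form matcher, by contrast, sees only the string "ER" and has no basis for choosing among the three candidates.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy prototype embeddings for three UMLS entities that all share the alias "ER".
# (Made-up 3-d vectors for illustration; real embeddings are 768-d.)
prototypes = {
    "Emergency Room (C0562508)":         np.array([0.9, 0.1, 0.0]),
    "Estrogen Receptor Gene (C1414461)": np.array([0.1, 0.9, 0.1]),
    "Endoplasmic Reticulum (C0014239)":  np.array([0.0, 0.2, 0.9]),
}

# A contextual embedding of "ER" in "ER crowding has become a wide-spread
# problem" should land near the Emergency Room prototype, because the
# surrounding hospital context is encoded into the mention vector.
mention = np.array([0.8, 0.2, 0.1])

prediction = max(prototypes, key=lambda cui: cosine(mention, prototypes[cui]))
print(prediction)  # Emergency Room (C0562508)
```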

## Citation

If you find KRISSBERT useful in your research, please cite the following paper:

```bibtex
@article{krissbert,
  author = {Sheng Zhang and Hao Cheng and Shikhar Vashishth and Cliff Wong and Jinfeng Xiao and Xiaodong Liu and Tristan Naumann and Jianfeng Gao and Hoifung Poon},
  title = {Knowledge-Rich Self-Supervised Entity Linking},
  year = {2021},
  url = {https://arxiv.org/abs/2112.07887},
  eprinttype = {arXiv},
  eprint = {2112.07887},
}
```