---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "part of speech tagging"
- "pos"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "universal_dependencies"
metrics:
- f1
inference:
  parameters:
    aggregation_strategy: "first"
model-index:
- name: roberta-base-ca-v2-cased-pos
  results:
  - task:
      type: token-classification
    dataset:
      type: universal_dependencies
      name: Ancora-ca-POS
    metrics:
    - name: F1
      type: f1
      value: 0.9896
widget:
- text: "Em dic Lluïsa i visc a Santa Maria del Camí."
- text: "L'Aina, la Berta i la Norma són molt amigues."
- text: "El Martí llegeix el Cavall Fort."
---

# Catalan BERTa-v2 (roberta-base-ca-v2) fine-tuned for part-of-speech tagging (POS)

## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
  - [Training Data](#training-data)
  - [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
  - [Variables and Metrics](#variables-and-metrics)
  - [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)

## Model Description

The **roberta-base-ca-v2-cased-pos** is a part-of-speech tagging (POS) model for the Catalan language, fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-sized corpus collected from publicly available corpora and crawlers (see the roberta-base-ca-v2 model card for more details).

## Intended Uses and Limitations

The **roberta-base-ca-v2-cased-pos** model can be used for part-of-speech tagging (POS) of Catalan text. The model is limited by its training dataset and may not generalize well to all use cases.

## How to Use

Here is how to use this model:

```python
from transformers import pipeline
from pprint import pprint

# Load the fine-tuned POS tagger as a token-classification pipeline.
nlp = pipeline("token-classification", model="projecte-aina/roberta-base-ca-v2-cased-pos")
example = "Em dic Lluïsa i visc a Santa Maria del Camí."

# Each result contains the token, its predicted POS tag, and a confidence score.
pos_results = nlp(example)
pprint(pos_results)
```
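
If you need finer control than the pipeline offers, the model can also be run directly. The following is a minimal sketch, assuming the checkpoint exposes the usual `id2label` mapping in its config:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = "projecte-aina/roberta-base-ca-v2-cased-pos"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

example = "Em dic Lluïsa i visc a Santa Maria del Camí."
inputs = tokenizer(example, return_tensors="pt")

# Predict one label per subword token.
with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)[0]

# Map label ids back to POS tags, skipping special tokens (<s>, </s>).
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    if token not in tokenizer.all_special_tokens:
        print(f"{token}\t{model.config.id2label[pred.item()]}")
```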
## Training

### Training Data
We used the Catalan portion of the [Universal Dependencies Treebank](https://huggingface.co/datasets/universal_dependencies), which we refer to as _Ancora-ca-pos_, for training and evaluation.
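
For reference, this is how the data could be loaded with the 🤗 `datasets` library; the `ca_ancora` configuration name is an assumption, and recent versions of the library may additionally require `trust_remote_code=True` for script-based datasets:

```python
from datasets import load_dataset

# Catalan AnCora treebank from Universal Dependencies
# ("ca_ancora" is the assumed configuration name).
dataset = load_dataset("universal_dependencies", "ca_ancora")

# Each example carries the word tokens and their universal POS tags.
print(dataset["train"][0]["tokens"])
print(dataset["train"][0]["upos"])
```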

### Training Procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We selected the best checkpoint using the downstream task metric on the corresponding development set, and then evaluated it on the test set.
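
The exact training script is not part of this card (see the CLUB repository linked below); a plausible sketch of these hyperparameters with the Hugging Face `Trainer` API would look like this, where everything beyond the stated batch size, learning rate, and epoch count is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-ca-v2-cased-pos",
    per_device_train_batch_size=16,   # batch size stated in the card
    learning_rate=5e-5,               # learning rate stated in the card
    num_train_epochs=5,               # epochs stated in the card
    evaluation_strategy="epoch",      # score the dev set after each epoch
    save_strategy="epoch",
    load_best_model_at_end=True,      # keep the best checkpoint ...
    metric_for_best_model="f1",       # ... selected by the dev-set F1
)
```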

## Evaluation

### Variables and Metrics

This model was fine-tuned maximizing the F1 score.
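
The card reports a single F1 value without naming the averaging scheme; a micro-averaged token-level F1 (which coincides with accuracy when every token receives exactly one tag) is one plausible reading, sketched below on toy data:

```python
from sklearn.metrics import f1_score

# Gold and predicted tags for a toy two-sentence batch (illustrative only).
gold = [["PRON", "VERB", "PROPN"], ["DET", "PROPN", "VERB"]]
pred = [["PRON", "VERB", "PROPN"], ["DET", "NOUN", "VERB"]]

# Flatten the sentences into one token stream and micro-average.
flat_gold = [tag for sent in gold for tag in sent]
flat_pred = [tag for sent in pred for tag in sent]
print(f1_score(flat_gold, flat_pred, average="micro"))  # 0.8333...
```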

### Evaluation Results
We evaluated the _roberta-base-ca-v2-cased-pos_ on the Ancora-ca-pos test set against standard multilingual and monolingual baselines:

| Model                        | Ancora-ca-pos (F1) |
| ---------------------------- | :----------------- |
| roberta-base-ca-v2-cased-pos | **98.96**          |
| roberta-base-ca-cased-pos    | **98.96**          |
| mBERT                        | 98.83              |
| XLM-RoBERTa                  | 98.89              |

For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).

## Licensing Information

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Citation Information  
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi  and
      Carrino, Casimiro Pio  and
      Rodriguez-Penagos, Carlos  and
      de Gibert Bonet, Ona  and
      Armentano-Oller, Carme  and
      Gonzalez-Agirre, Aitor  and
      Melero, Maite  and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```

## Funding

This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).



## Contributions

[N/A]