nijatzeynalov committed · ef82c31 · Parent: dcfe5dd · Update README.md
# Azerbaijani question answering

This model is a fine-tuned version of XLM-RoBERTa (XLM-R) on a Question Answering (QA) dataset in Azerbaijani. XLM-RoBERTa is a powerful multilingual model that supports 100+ languages. Our fine-tuned model leverages XLM-R's language-agnostic capabilities to specifically improve performance on Azerbaijani QA tasks, aiming to provide accurate answers from Azerbaijani text inputs.

# How to Use

This model can be loaded and used for prediction with the Hugging Face Transformers library. Below is an example code snippet in Python:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

# ... (the middle of this snippet is elided in the diff; it loads the
# tokenizer and model, builds a question-answering pipeline, and runs it
# on an Azerbaijani context/question pair)

# Tail of the example output:
#  'start': 52,
#  'end': 82,
#  'answer': 'abstinent sindrom və psixoloji'}
```
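
In the output above, `start` and `end` are character offsets into the context string, so the answer text can always be recovered by slicing the context. A minimal sketch of that relationship, using an illustrative English context and a made-up result dict (not actual output from this model):

```python
def extract_answer(context: str, result: dict) -> str:
    # 'start' and 'end' are character offsets into the original context,
    # so slicing the context recovers exactly the 'answer' string.
    return context[result["start"]:result["end"]]

# Illustrative values only -- not real output from this model.
context = "Paris is the capital and most populous city of France."
result = {"score": 0.98, "start": 0, "end": 5, "answer": "Paris"}

assert extract_answer(context, result) == result["answer"]  # "Paris"
```

This invariant is useful for highlighting the answer span in the original text, e.g. in a UI, without trusting the `answer` field alone.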

# Limitations and Bias

As the model was fine-tuned for only 1 epoch, it may not capture all nuances of the Azerbaijani language or the full complexity of the QA task. Users should be aware of potential biases in the training data, which might affect the model's performance on certain types of questions or texts.

# Ethical Considerations

Users are encouraged to use this model responsibly and to consider the ethical implications of automated question answering systems, especially in sensitive or high-stakes contexts.

# Citation

Please cite this model as follows: