rigonsallauka committed
Commit 11199ce
1 Parent(s): 5e79352

Update README.md

Files changed (1): README.md (+46 -1)
README.md CHANGED
@@ -15,4 +15,49 @@ tags:
- medical
- symptom
- extraction
---
# Slovenian Medical NER

## Use
- **Primary Use Case**: This model is designed to extract medical entities such as symptoms, diagnostic tests, and treatments from clinical text in Slovenian.
- **Applications**: Suitable for healthcare professionals, clinical data analysis, and research into medical text processing.
- **Supported Entity Types**:
  - `PROBLEM`: Diseases, symptoms, and medical conditions.
  - `TEST`: Diagnostic procedures and laboratory tests.
  - `TREATMENT`: Medications, therapies, and other medical interventions.
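Token-classification models on the Hub typically encode entity types like these with a BIO tagging scheme. Assuming that convention here (the authoritative label order is model-specific and should be read from `model.config.id2label`), the full label set would look like:

```python
# Hypothetical BIO label set for the three entity types above.
# The authoritative mapping lives in model.config.id2label.
ENTITY_TYPES = ["PROBLEM", "TEST", "TREATMENT"]

labels = ["O"] + [f"{prefix}-{ent}" for ent in ENTITY_TYPES for prefix in ("B", "I")]
print(labels)
# ['O', 'B-PROBLEM', 'I-PROBLEM', 'B-TEST', 'I-TEST', 'B-TREATMENT', 'I-TREATMENT']
```

Under BIO, `B-` marks the first token of an entity span and `I-` marks its continuation, so multi-token entities can be reassembled from per-token predictions.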

## Training Data
- **Data Sources**: Annotated datasets, including clinical data and translations of English medical text into Slovenian.
- **Data Augmentation**: The training dataset underwent data augmentation techniques to improve the model's ability to generalize to different text structures.
- **Dataset Split**:
  - **Training Set**: 80%
  - **Validation Set**: 10%
  - **Test Set**: 10%
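An 80/10/10 split like the one above can be sketched in plain Python (illustrative only; the authors' actual splitting script and random seed are not documented here):

```python
import random

def split_dataset(examples, seed=42):
    """Shuffle and split examples into 80% train / 10% validation / 10% test."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_dataset(list(range(100)))
print(len(train), len(val), len(test))  # 80 10 10
```

Shuffling before slicing matters: annotated corpora are often grouped by document, and a contiguous split would leak document-level style differences between the sets.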

## Model Training
- **Training Configuration**:
  - **Optimizer**: AdamW
  - **Learning Rate**: 3e-5
  - **Batch Size**: 64
  - **Epochs**: 200
  - **Loss Function**: Focal Loss to handle class imbalance
  - **Frameworks**: PyTorch, Hugging Face Transformers, SimpleTransformers
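Focal loss down-weights easy, well-classified tokens (in NER, overwhelmingly the `O` class) so that rare entity tokens contribute more to the gradient. A minimal PyTorch sketch of the standard formulation follows; the `gamma` value is an assumption, as the value used in training is not stated:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss for token classification: mean of (1 - p_t)^gamma * CE.

    logits: (num_tokens, num_labels), targets: (num_tokens,) of label ids.
    gamma=2.0 is a common default, not necessarily the authors' choice.
    """
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-token CE
    pt = torch.exp(-ce)                                      # prob. of the true class
    return ((1.0 - pt) ** gamma * ce).mean()

logits = torch.randn(8, 7)           # e.g. 8 tokens, 7 BIO labels
targets = torch.randint(0, 7, (8,))
loss = focal_loss(logits, targets)
```

Because `(1 - p_t)^gamma <= 1`, focal loss is always bounded above by plain cross-entropy, and the two coincide at `gamma = 0`.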

## How to Use
You can use this model with the Hugging Face `transformers` library. Here's an example of how to load the model and run inference:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

model_name = "rigonsallauka/slovenian_medical_ner"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Sample text for inference
# ("The patient complained of severe headaches and nausea that lasted two days.")
text = "Pacient se je pritoževal zaradi hudih glavobolov in slabosti, ki sta trajala dva dni."

# Tokenize the input text
inputs = tokenizer(text, return_tensors="pt")

# Run inference and map predicted label ids back to entity tags
with torch.no_grad():
    logits = model(**inputs).logits

predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```