---
license: apache-2.0
datasets:
- rigonsallauka/german_ner_dataset
language:
- de
metrics:
- f1
- precision
- recall
- confusion_matrix
base_model:
- google-bert/bert-base-cased
pipeline_tag: token-classification
tags:
- NER
- medical
- symptom
- extraction
- german
---
# German Medical NER

## Use
- **Primary Use Case**: This model extracts medical entities such as symptoms, diagnostic tests, and treatments from German-language clinical text.
- **Applications**: Suitable for use by healthcare professionals, for clinical data analysis, and for research on medical text processing.
- **Supported Entity Types**:
  - `PROBLEM`: Diseases, symptoms, and medical conditions.
  - `TEST`: Diagnostic procedures and laboratory tests.
  - `TREATMENT`: Medications, therapies, and other medical interventions.
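
The exact label strings the model emits (for example, whether `PROBLEM`, `TEST`, and `TREATMENT` carry `B-`/`I-` prefixes) are not spelled out above; they can be read from the model configuration. A minimal check, assuming the Hub model id used later in this card:

```python
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained("rigonsallauka/german_medical_ner")
print(model.config.id2label)  # maps label ids to the label strings the model actually uses
```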

## Training Data
- **Data Sources**: Annotated datasets, including clinical data and translations of English medical text into German.
- **Data Augmentation**: Data augmentation was applied to the training set to improve the model's ability to generalize to different text structures.
- **Dataset Split**:
  - **Training Set**: 80%
  - **Validation Set**: 10%
  - **Test Set**: 10%
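
A minimal sketch of how such an 80/10/10 split could be produced with the `datasets` library; the seed and the assumption that everything starts from a single `train` split are illustrative, not taken from this card:

```python
from datasets import load_dataset

dataset = load_dataset("rigonsallauka/german_ner_dataset")

# 80% train, then split the remaining 20% evenly into validation and test
split = dataset["train"].train_test_split(test_size=0.2, seed=42)
holdout = split["test"].train_test_split(test_size=0.5, seed=42)
train_ds, val_ds, test_ds = split["train"], holdout["train"], holdout["test"]
```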

## Model Training
- **Training Configuration**:
  - **Optimizer**: AdamW
  - **Learning Rate**: 3e-5
  - **Batch Size**: 64
  - **Epochs**: 200
  - **Loss Function**: Focal Loss to handle class imbalance
- **Frameworks**: PyTorch, Hugging Face Transformers, SimpleTransformers
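
The card does not include the loss implementation itself; below is a minimal sketch of a focal-loss term for token classification, assuming PyTorch and BERT-style logits. The `gamma` and `alpha` values are illustrative defaults, not values reported for this model:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, labels, gamma=2.0, alpha=0.25, ignore_index=-100):
    """Focal loss over token-level logits of shape (batch, seq_len, num_labels)."""
    ce = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        reduction="none",
        ignore_index=ignore_index,
    )
    pt = torch.exp(-ce)                      # probability assigned to the true label
    loss = alpha * (1.0 - pt) ** gamma * ce  # down-weight easy, well-classified tokens
    mask = labels.view(-1) != ignore_index   # drop padding / special-token positions
    return loss[mask].mean()
```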

## Evaluation Metrics
- eval_loss = 0.2966328261132536
- f1_score = 0.7869508628049208
- precision = 0.7893554696639308
- recall = 0.7845608617193459
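
The card does not state how these scores were computed; entity-level precision, recall, and F1 for token-classification models are commonly calculated with `seqeval`, as in the sketch below. The BIO-style label sequences are illustrative placeholders, not model output:

```python
from seqeval.metrics import classification_report, f1_score, precision_score, recall_score

# One list of label strings per sentence; replace with real predictions from the model.
true_labels = [["O", "B-PROBLEM", "I-PROBLEM", "O", "B-TREATMENT"]]
pred_labels = [["O", "B-PROBLEM", "O", "O", "B-TREATMENT"]]

print("precision:", precision_score(true_labels, pred_labels))
print("recall:   ", recall_score(true_labels, pred_labels))
print("f1:       ", f1_score(true_labels, pred_labels))
print(classification_report(true_labels, pred_labels))
```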


## How to Use
You can easily use this model with the Hugging Face `transformers` library. Here's an example of how to load and use the model for inference:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

model_name = "rigonsallauka/german_medical_ner"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Sample text for inference
text = "Der Patient klagte über starke Kopfschmerzen und Übelkeit, die seit zwei Tagen anhielten. Zur Linderung der Symptome wurde ihm Paracetamol verschrieben, und er wurde angewiesen, sich auszuruhen und viel Flüssigkeit zu trinken."

# Tokenize the input text
inputs = tokenizer(text, return_tensors="pt")

# Run the model and convert the logits into predicted label ids
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=2)

# Map the predicted ids back to label names, token by token
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = [model.config.id2label[p.item()] for p in predictions[0]]
for token, label in zip(tokens, labels):
    print(f"{token}\t{label}")
```
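
The per-token output above is often easier to consume when sub-word pieces are grouped into entity spans. A minimal sketch using the `transformers` pipeline API; the aggregation strategy shown is an illustrative choice, not something specified by this card:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="rigonsallauka/german_medical_ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole-entity spans
)

print(ner("Der Patient klagte über starke Kopfschmerzen und Übelkeit."))
```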