---
license: apache-2.0
datasets:
  - rigonsallauka/greek_ner_dataset
language:
  - el
metrics:
  - f1
  - precision
  - recall
  - confusion_matrix
base_model:
  - google-bert/bert-base-cased
pipeline_tag: token-classification
tags:
  - NER
  - medical
  - symptom
  - extraction
  - greek
---

# Greek Medical NER

## Use

- **Primary Use Case**: This model extracts medical entities such as symptoms, diagnostic tests, and treatments from clinical text written in Greek.
- **Applications**: Suitable for healthcare professionals, clinical data analysis, and research in medical text processing.
- **Supported Entity Types**:
  - `PROBLEM`: Diseases, symptoms, and medical conditions.
  - `TEST`: Diagnostic procedures and laboratory tests.
  - `TREATMENT`: Medications, therapies, and other medical interventions.
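Token-classification models like this one usually emit per-token BIO tags (e.g. `B-PROBLEM`, `I-PROBLEM`, `O`) rather than ready-made entity spans; the exact label scheme here is an assumption, not confirmed by the card. A minimal sketch of grouping such tags back into entities:

```python
def bio_to_spans(tokens, labels):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    spans, current = [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if current:
                spans.append(current)
            current = (lab[2:], [tok])  # start a new entity
        elif lab.startswith("I-") and current and current[0] == lab[2:]:
            current[1].append(tok)      # continue the current entity
        else:
            if current:
                spans.append(current)
            current = None              # "O" or a mismatched tag ends the entity
    if current:
        spans.append(current)
    return [(etype, " ".join(toks)) for etype, toks in spans]

tokens = ["The", "patient", "reported", "severe", "headaches", "and", "took", "paracetamol"]
labels = ["O", "O", "O", "B-PROBLEM", "I-PROBLEM", "O", "O", "B-TREATMENT"]
print(bio_to_spans(tokens, labels))
# → [('PROBLEM', 'severe headaches'), ('TREATMENT', 'paracetamol')]
```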

## Training Data

- **Data Sources**: Annotated datasets, including clinical data and English medical text translated into Greek.
- **Data Augmentation**: The training dataset was augmented to improve the model's ability to generalize to different text structures.
- **Dataset Split**:
  - Training set: 80%
  - Validation set: 10%
  - Test set: 10%
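The 80/10/10 split can be sketched as a simple shuffle-and-slice; this is an illustrative reconstruction, not the authors' actual splitting code:

```python
import random

def split_dataset(examples, seed=42):
    """Shuffle and split a list of examples into 80/10/10 train/validation/test."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    examples = examples[:]     # copy so the caller's list is untouched
    rng.shuffle(examples)
    n_train = int(len(examples) * 0.8)
    n_val = int(len(examples) * 0.1)
    return (examples[:n_train],
            examples[n_train:n_train + n_val],
            examples[n_train + n_val:])

train, val, test = split_dataset(list(range(1000)))
print(len(train), len(val), len(test))  # → 800 100 100
```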

## Model Training

- **Training Configuration**:
  - Optimizer: AdamW
  - Learning rate: 3e-5
  - Batch size: 64
  - Epochs: 200
  - Loss function: focal loss, to handle class imbalance
- **Frameworks**: PyTorch, Hugging Face Transformers, SimpleTransformers
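Focal loss down-weights well-classified examples so training focuses on hard ones, which helps when most tokens are the majority `O` class. A minimal per-example sketch (the `gamma` and `alpha` values are illustrative defaults, not necessarily those used in training):

```python
import math

def focal_loss(p_t, gamma=2.0, alpha=1.0):
    """Focal loss for one example: -alpha * (1 - p_t)**gamma * log(p_t),
    where p_t is the predicted probability of the true class."""
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

def cross_entropy(p_t):
    """Plain cross-entropy for the same prediction, for comparison."""
    return -math.log(p_t)

# A confident correct prediction (p_t = 0.9) is scaled by (1 - 0.9)**2 = 0.01,
# so it contributes only 1% of its cross-entropy loss; a hard example
# (p_t = 0.1) keeps (1 - 0.1)**2 = 81% of its loss.
print(focal_loss(0.9), cross_entropy(0.9))
print(focal_loss(0.1), cross_entropy(0.1))
```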

## Evaluation Metrics

- eval_loss = 0.4112480320792267
- f1_score = 0.6910085729376871
- precision = 0.7068717096148518
- recall = 0.675841788751424
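As a quick sanity check, the reported F1 score is (up to floating-point rounding) the harmonic mean of the precision and recall above:

```python
precision = 0.7068717096148518
recall = 0.675841788751424

# F1 = harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ≈ 0.6910, matching the reported f1_score
```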

## How to Use

You can use this model with the Hugging Face `transformers` library. Here's an example of how to load the model and run inference:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

model_name = "rigonsallauka/greek_medical_ner"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Sample text for inference (Greek: "The patient complained of severe
# headaches and nausea that lasted two days. To relieve the symptoms, he was
# given paracetamol and advised to rest and drink plenty of fluids.")
text = "Ο ασθενής παραπονέθηκε για έντονους πονοκεφάλους και ναυτία που διαρκούσαν δύο ημέρες. Για την ανακούφιση των συμπτωμάτων, του χορηγήθηκε παρακεταμόλη και του συστήθηκε να ξεκουραστεί και να πίνει πολλά υγρά."

# Tokenize the input text
inputs = tokenizer(text, return_tensors="pt")

# Run inference and take the most likely label for each token
with torch.no_grad():
    outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)

# Map label ids back to entity tags and print them per token
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = [model.config.id2label[p.item()] for p in predictions[0]]
for token, label in zip(tokens, labels):
    print(token, label)
```