---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
  results: []
---

# bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2301
- Precision: 0.5948
- Recall: 0.6779
- F1: 0.6336
- Accuracy: 0.9265
- Adr Precision: 0.5579
- Adr Recall: 0.6812
- Adr F1: 0.6134
- Disease Precision: 0.2273
- Disease Recall: 0.1562
- Disease F1: 0.1852
- Drug Precision: 0.8136
- Drug Recall: 0.8775
- Drug F1: 0.8443
- Finding Precision: 0.2667
- Finding Recall: 0.2759
- Finding F1: 0.2712
- Symptom Precision: 0.5
- Symptom Recall: 0.0435
- Symptom F1: 0.08
- B-adr Precision: 0.7749
- B-adr Recall: 0.8513
- B-adr F1: 0.8113
- B-disease Precision: 1.0
- B-disease Recall: 0.1562
- B-disease F1: 0.2703
- B-drug Precision: 0.9327
- B-drug Recall: 0.9557
- B-drug F1: 0.9440
- B-finding Precision: 0.5909
- B-finding Recall: 0.4483
- B-finding F1: 0.5098
- B-symptom Precision: 0.5
- B-symptom Recall: 0.0435
- B-symptom F1: 0.08
- I-adr Precision: 0.5725
- I-adr Recall: 0.6782
- I-adr F1: 0.6209
- I-disease Precision: 0.4091
- I-disease Recall: 0.3103
- I-disease F1: 0.3529
- I-drug Precision: 0.8458
- I-drug Recall: 0.8873
- I-drug F1: 0.8660
- I-finding Precision: 0.3529
- I-finding Recall: 0.2222
- I-finding F1: 0.2727
- I-symptom Precision: 0.0
- I-symptom Recall: 0.0
- I-symptom F1: 0.0
- Macro Avg F1: 0.4728
- Weighted Avg F1: 0.7278

## Model description

This is a token-classification (NER) model fine-tuned from `bert-base-cased`. Judging by the per-entity metrics above, it tags five entity types in a BIO scheme: `adr` (adverse drug reaction), `disease`, `drug`, `finding`, and `symptom`. A minimal usage sketch follows.
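
The card records no repo id, so the checkpoint path below is an assumption (the Trainer's default local output directory), and the example sentence is purely illustrative:

```python
# Minimal inference sketch. The checkpoint path and the entity types are
# assumptions inferred from the per-entity metrics in this card.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

checkpoint = "./bert-finetuned-ner"  # assumed local Trainer output directory
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

# aggregation_strategy="simple" merges B-/I- tokens into whole entity spans.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")

print(ner("The ibuprofen gave me a terrible headache and some nausea."))
```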

## Intended uses & limitations

The metrics suggest the model is intended for extracting drug and adverse-event mentions from text. Performance is uneven across entity types: `drug` mentions are recognized reliably (F1 0.8443) and `adr` moderately (F1 0.6134), while `disease`, `finding`, and `symptom` are weak (F1 0.1852, 0.2712, and 0.08 respectively), so predictions for those three types should be treated with caution.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
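
The training script itself is not included in the card. Assuming the standard `Trainer` API, a configuration matching the listed values might look like the sketch below; the Adam betas/epsilon and the linear scheduler are the library defaults, and `output_dir` and per-epoch evaluation are assumptions (the results table logs one evaluation per epoch):

```python
# Hedged sketch of TrainingArguments matching the hyperparameters above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-finetuned-ner",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",      # assumption: one eval per epoch, as in the table
)
```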

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy | Adr Precision | Adr Recall | Adr F1 | Disease Precision | Disease Recall | Disease F1 | Drug Precision | Drug Recall | Drug F1 | Finding Precision | Finding Recall | Finding F1 | Symptom Precision | Symptom Recall | Symptom F1 | B-adr Precision | B-adr Recall | B-adr F1 | B-disease Precision | B-disease Recall | B-disease F1 | B-drug Precision | B-drug Recall | B-drug F1 | B-finding Precision | B-finding Recall | B-finding F1 | B-symptom Precision | B-symptom Recall | B-symptom F1 | I-adr Precision | I-adr Recall | I-adr F1 | I-disease Precision | I-disease Recall | I-disease F1 | I-drug Precision | I-drug Recall | I-drug F1 | I-finding Precision | I-finding Recall | I-finding F1 | I-symptom Precision | I-symptom Recall | I-symptom F1 | Macro Avg F1 | Weighted Avg F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|:------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------:|:-------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------:|:---------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:---------------:|:------------:|:--------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------:|:---------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:------------:|:---------------:|
| No log        | 1.0   | 127  | 0.2653          | 0.5472    | 0.6201 | 0.5814 | 0.9128   | 0.4942        | 0.6376     | 0.5568 | 0.0               | 0.0            | 0.0        | 0.7952         | 0.8186      | 0.8068  | 0.0               | 0.0            | 0.0        | 0.0               | 0.0            | 0.0        | 0.7530          | 0.7731       | 0.7629   | 0.0                 | 0.0              | 0.0          | 0.9179           | 0.8818        | 0.8995    | 0.0                 | 0.0              | 0.0          | 0.0                 | 0.0              | 0.0          | 0.4915          | 0.6325       | 0.5532   | 0.1429              | 0.0345           | 0.0556       | 0.855            | 0.8382        | 0.8465    | 0.3333              | 0.0370           | 0.0667       | 0.0                 | 0.0              | 0.0          | 0.3184       | 0.6587          |
| No log        | 2.0   | 254  | 0.2307          | 0.5896    | 0.6632 | 0.6242 | 0.9254   | 0.5546        | 0.6722     | 0.6077 | 0.2222            | 0.1875         | 0.2034     | 0.8093         | 0.8529      | 0.8305  | 0.2083            | 0.1724         | 0.1887     | 0.0               | 0.0            | 0.0        | 0.7663          | 0.8263       | 0.7952   | 1.0                 | 0.1562           | 0.2703       | 0.9366           | 0.9458        | 0.9412    | 0.625               | 0.3448           | 0.4444       | 0.0                 | 0.0              | 0.0          | 0.5649          | 0.6600       | 0.6088   | 0.2963              | 0.2759           | 0.2857       | 0.8495           | 0.8578        | 0.8537    | 0.3846              | 0.1852           | 0.25         | 0.0                 | 0.0              | 0.0          | 0.4449       | 0.7127          |
| No log        | 3.0   | 381  | 0.2301          | 0.5948    | 0.6779 | 0.6336 | 0.9265   | 0.5579        | 0.6812     | 0.6134 | 0.2273            | 0.1562         | 0.1852     | 0.8136         | 0.8775      | 0.8443  | 0.2667            | 0.2759         | 0.2712     | 0.5               | 0.0435         | 0.08       | 0.7749          | 0.8513       | 0.8113   | 1.0                 | 0.1562           | 0.2703       | 0.9327           | 0.9557        | 0.9440    | 0.5909              | 0.4483           | 0.5098       | 0.5                 | 0.0435           | 0.08         | 0.5725          | 0.6782       | 0.6209   | 0.4091              | 0.3103           | 0.3529       | 0.8458           | 0.8873        | 0.8660    | 0.3529              | 0.2222           | 0.2727       | 0.0                 | 0.0              | 0.0          | 0.4728       | 0.7278          |
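
The card does not say how the entity-level metrics were computed. A common pattern with the `Trainer` is a `compute_metrics` callback built on seqeval, which would also explain the per-tag `B-*`/`I-*` rows. A minimal sketch, with the label list inferred from the metric names in this card:

```python
# Hedged sketch of a seqeval-based compute_metrics callback.
# The label list is an assumption inferred from the metrics above.
import evaluate
import numpy as np

label_list = ["O",
              "B-adr", "I-adr", "B-disease", "I-disease", "B-drug", "I-drug",
              "B-finding", "I-finding", "B-symptom", "I-symptom"]
seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Drop special tokens (label -100) and map ids back to tag strings.
    true_preds = [[label_list[p] for p, l in zip(pred, lab) if l != -100]
                  for pred, lab in zip(predictions, labels)]
    true_labels = [[label_list[l] for p, l in zip(pred, lab) if l != -100]
                   for pred, lab in zip(predictions, labels)]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {"precision": results["overall_precision"],
            "recall": results["overall_recall"],
            "f1": results["overall_f1"],
            "accuracy": results["overall_accuracy"]}
```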


### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0