justin13barrett committed
Commit 07ed740
1 Parent(s): 8ba0385

Upload TFBertForSequenceClassification

Files changed (3)
  1. README.md +68 -0
  2. config.json +0 -0
  3. tf_model.h5 +3 -0
README.md ADDED
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: bert-base-multilingual-cased-finetuned-openalex-topic-classification-title-abstract
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# bert-base-multilingual-cased-finetuned-openalex-topic-classification-title-abstract

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results after the final training epoch:
- Train Loss: 2.4942
- Train Categorical Accuracy: 0.0002
- Train Top 2 Categorical Accuracy: 0.0005
- Train Top 10 Categorical Accuracy: 0.0024
- Validation Loss: 3.0737
- Validation Categorical Accuracy: 0.0003
- Validation Top 2 Categorical Accuracy: 0.0006
- Validation Top 10 Categorical Accuracy: 0.0028
- Train Accuracy: 0.4846
- Epoch: 7

## Model description

More information needed

## Intended uses & limitations

More information needed

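The card leaves usage undocumented. The model name suggests topic classification of scholarly works from their title and abstract, presumably against OpenAlex topics, though the card does not confirm this. Below is a minimal loading-and-inference sketch; the repo id under the committer's namespace is an assumption, and the tokenizer is loaded from the base model because this commit ships no tokenizer files:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Hypothetical repo id -- substitute the actual namespace for this model.
model_id = "justin13barrett/bert-base-multilingual-cased-finetuned-openalex-topic-classification-title-abstract"

# This commit contains no tokenizer files, so fall back to the base model's tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Concatenating title and abstract into one sequence is an assumption
# based on the model name ("title-abstract").
text = "Deep learning for protein structure prediction. We present a neural approach to ..."
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="tf")

logits = model(**inputs).logits
probs = tf.nn.softmax(logits, axis=-1)
top10 = tf.math.top_k(probs, k=10)  # the card reports top-10 categorical accuracy
for idx, p in zip(top10.indices[0].numpy(), top10.values[0].numpy()):
    print(model.config.id2label.get(int(idx), int(idx)), float(p))
```
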
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: Adam (`beta_1=0.9`, `beta_2=0.999`, `epsilon=1e-08`, `amsgrad=False`, `jit_compile=True`; no weight decay, no gradient clipping, EMA disabled) with a `WarmUp` learning-rate schedule from `transformers.optimization_tf`: 500 warmup steps up to an initial learning rate of 6e-05, then linear `PolynomialDecay` (`power=1.0`, `cycle=False`) down to 0.0 over 335420 decay steps
- training_precision: float32

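The recorded schedule matches what `transformers.create_optimizer` produces (a `WarmUp` wrapper around `PolynomialDecay`); whether the original script actually used that helper is an assumption. A sketch that rebuilds the same optimizer from the values above:

```python
from transformers import create_optimizer

# Rebuild the optimizer from the recorded config. create_optimizer sets the
# schedule's decay_steps to num_train_steps - num_warmup_steps internally,
# so 335420 recorded decay steps + 500 warmup steps = 335920 train steps.
optimizer, lr_schedule = create_optimizer(
    init_lr=6e-5,            # initial_learning_rate
    num_train_steps=335_920, # decay_steps (335420) + warmup_steps (500)
    num_warmup_steps=500,    # warmup_steps of the WarmUp wrapper
    weight_decay_rate=0.0,   # weight_decay is None in the recorded config,
                             # so this returns a plain tf.keras Adam optimizer
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```
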
### Training results

| Train Loss | Train Categorical Accuracy | Train Top 2 Categorical Accuracy | Train Top 10 Categorical Accuracy | Validation Loss | Validation Categorical Accuracy | Validation Top 2 Categorical Accuracy | Validation Top 10 Categorical Accuracy | Train Accuracy | Epoch |
|:----------:|:--------------------------:|:--------------------------------:|:---------------------------------:|:---------------:|:-------------------------------:|:-------------------------------------:|:--------------------------------------:|:--------------:|:-----:|
| 4.8075     | 0.0001                     | 0.0002                           | 0.0020                            | 3.6686          | 0.0000                          | 0.0001                                | 0.0017                                 | 0.3839         | 0     |
| 3.4867     | 0.0002                     | 0.0004                           | 0.0028                            | 3.3360          | 0.0001                          | 0.0002                                | 0.0014                                 | 0.4337         | 1     |
| 3.1865     | 0.0002                     | 0.0004                           | 0.0027                            | 3.2005          | 0.0002                          | 0.0005                                | 0.0033                                 | 0.4556         | 2     |
| 2.9969     | 0.0002                     | 0.0005                           | 0.0027                            | 3.1379          | 0.0001                          | 0.0002                                | 0.0014                                 | 0.4675         | 3     |
| 2.8489     | 0.0002                     | 0.0004                           | 0.0025                            | 3.0900          | 0.0002                          | 0.0005                                | 0.0031                                 | 0.4746         | 4     |
| 2.7212     | 0.0002                     | 0.0005                           | 0.0025                            | 3.0744          | 0.0002                          | 0.0003                                | 0.0021                                 | 0.4799         | 5     |
| 2.6035     | 0.0002                     | 0.0004                           | 0.0025                            | 3.0660          | 0.0002                          | 0.0004                                | 0.0023                                 | 0.4831         | 6     |
| 2.4942     | 0.0002                     | 0.0005                           | 0.0024                            | 3.0737          | 0.0003                          | 0.0006                                | 0.0028                                 | 0.4846         | 7     |

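The metric columns correspond to standard Keras accuracy metrics. A sketch of how they could be attached when compiling the model (`model` and `optimizer` come from the sketches above; the sparse variants are an assumption, since the card does not document whether labels were integer-encoded or one-hot):

```python
import tensorflow as tf

# Metrics mirroring the card's columns. Top-k accuracy computed on raw
# logits is equivalent to computing it on softmax probabilities.
metrics = [
    tf.keras.metrics.SparseCategoricalAccuracy(name="categorical_accuracy"),
    tf.keras.metrics.SparseTopKCategoricalAccuracy(k=2, name="top_2_categorical_accuracy"),
    tf.keras.metrics.SparseTopKCategoricalAccuracy(k=10, name="top_10_categorical_accuracy"),
]

# transformers TF models compute their own loss when none is passed to compile().
model.compile(optimizer=optimizer, metrics=metrics)
```
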
### Framework versions

- Transformers 4.35.2
- TensorFlow 2.13.0
- Datasets 2.15.0
- Tokenizers 0.15.0
config.json ADDED
The diff for this file is too large to render.
 
tf_model.h5 ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:fc897e33f5489a3c79b4aa5b3a42df3fd765270b08acaedc659bd2347d132002
size 725608348