nguyennghia0902 committed on
Commit
1146e1e
1 Parent(s): 5a8ade1

Update README.md

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -6,6 +6,8 @@ tags:
 model-index:
 - name: textming_proj01_electra
   results: []
+language:
+- vi
 ---
 
@@ -13,7 +15,7 @@ probably proofread and complete it, then remove this comment. -->
 
 # textming_proj01_electra
 
-This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on an unknown dataset.
+This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the [Vietnamese dataset - Kaggle](https://www.kaggle.com/datasets/duyminhnguyentran/csc15105).
 It achieves the following results on the evaluation set:
 - Train Loss: 0.4494
 - Train Accuracy: 0.7976
@@ -22,8 +24,7 @@ It achieves the following results on the evaluation set:
 - Epoch: 4
 
 ## Model description
-
-More information needed
+This model was fine-tuned by Bùi Nguyên Nghĩa (19120600@student.hcmus.edu.vn) on [Kaggle](https://www.kaggle.com/code/nguynnghabi/training-electra).
 
 ## Intended uses & limitations
 
@@ -57,4 +58,4 @@ The following hyperparameters were used during training:
 - Transformers 4.39.3
 - TensorFlow 2.15.0
 - Datasets 2.18.0
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2