End of training
README.md
CHANGED
@@ -7,23 +7,8 @@ metrics:
 - accuracy
 - f1
 model-index:
-- name:
+- name: financial-twhin-bert-large-7labels
   results: []
-datasets:
-- FinGPT/fingpt-sentiment-train
-language:
-- en
-widget:
-- text: "$KTOS: Kratos Defense and Security awarded a $39 million sole-source contract for Geolocation Global Support Service"
-  example_title: "Example 1"
-- text: "$Google parent Alphabet Inc. reported revenue and earnings that fell short of analysts' expectations, showing the company's search advertising juggernaut was not immune to a slowdown in the digital ad market. The shares fell more than 6%."
-  example_title: "Example 2"
-- text: "$LJPC - La Jolla Pharma to reassess development of LJPC-401"
-  example_title: "Example 3"
-- text: "Watch $MARK over 43c in after-hours for continuation targeting the 50c area initially"
-  example_title: "Example 4"
-- text: "$RCII: Rent-A-Center provides update - March revenues were off by about 5% versus last year"
-  example_title: "Example 5"
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,11 +16,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # financial-twhin-bert-large-7labels
 
-This model is a fine-tuned version of [Twitter/twhin-bert-large](https://huggingface.co/Twitter/twhin-bert-large) on
+This model is a fine-tuned version of [Twitter/twhin-bert-large](https://huggingface.co/Twitter/twhin-bert-large) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Accuracy: 0.
-- F1: 0.
+- Loss: 0.3040
+- Accuracy: 0.8968
+- F1: 0.8916
 
 ## Model description
 
@@ -54,21 +39,30 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate:
-- train_batch_size:
+- learning_rate: 2.1732582582331977e-05
+- train_batch_size: 16
 - eval_batch_size: 8
-- seed:
+- seed: 1203
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-
+- lr_scheduler_warmup_ratio: 0.1
+- num_epochs: 2
 
 ### Training results
 
+| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     |
+|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
+| 0.9592        | 0.3272 | 1500 | 0.6466          | 0.7665   | 0.7503 |
+| 0.4705        | 0.6545 | 3000 | 0.3785          | 0.8674   | 0.8528 |
+| 0.4196        | 0.9817 | 4500 | 0.5830          | 0.7892   | 0.7775 |
+| 0.3403        | 1.3089 | 6000 | 0.3683          | 0.8767   | 0.8728 |
+| 0.2962        | 1.6361 | 7500 | 0.3288          | 0.8889   | 0.8904 |
+| 0.272         | 1.9634 | 9000 | 0.3040          | 0.8968   | 0.8916 |
 
 ### Framework versions
 
-- Transformers 4.
-- Pytorch 2.1
-- Datasets 2.
-- Tokenizers 0.
+- Transformers 4.40.1
+- Pytorch 2.2.1+cu121
+- Datasets 2.19.0
+- Tokenizers 0.19.1
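The learning-rate schedule implied by `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1` can be sketched in plain Python. The steps-per-epoch figure below is inferred from the results table (step 9000 logged at epoch 1.9634), so treat it and everything derived from it as an approximation rather than the exact trainer state:

```python
# Sketch of the linear warmup + linear decay schedule from the card's
# hyperparameters. steps_per_epoch is inferred from the results table
# (an approximation, not the recorded trainer state).

num_epochs = 2
warmup_ratio = 0.1
peak_lr = 2.1732582582331977e-05

steps_per_epoch = round(9000 / 1.9634)          # ~4584, from the table
total_steps = steps_per_epoch * num_epochs      # ~9168
warmup_steps = int(total_steps * warmup_ratio)  # ~916

def lr_at(step: int) -> float:
    """Linear warmup to peak_lr, then linear decay to 0 (transformers' "linear")."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(warmup_steps, f"{lr_at(warmup_steps):.2e}")
```

With these numbers the learning rate climbs for roughly the first 10% of the ~9.2k total steps, then decays linearly to zero.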
runs/May01_20-51-07_6c6ca94f901c/events.out.tfevents.1714596756.6c6ca94f901c.200.0
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a320892b5a44335900eff2e464532415cfc8a4a7e15482795ba0d0e939aad637
+size 9536
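The fine-tuned classifier can be tried on the widget examples from the previous card with transformers' `text-classification` pipeline. A minimal sketch follows; the commit does not name the owning hub namespace, so `your-namespace` below is a placeholder that must be replaced with the actual repo owner before running:

```python
# Placeholder repo id: this commit does not name the owning namespace.
MODEL_ID = "your-namespace/financial-twhin-bert-large-7labels"

# Two of the widget examples from the previous model card.
texts = [
    "$KTOS: Kratos Defense and Security awarded a $39 million sole-source contract for Geolocation Global Support Service",
    "$LJPC - La Jolla Pharma to reassess development of LJPC-401",
]

def classify(batch, model_id=MODEL_ID):
    """Run the 7-label sentiment classifier; downloads the model on first use."""
    from transformers import pipeline  # deferred so this file imports without transformers
    clf = pipeline("text-classification", model=model_id)
    return clf(batch)

# Usage (requires `pip install transformers torch` and the real repo id):
#   for text, pred in zip(texts, classify(texts)):
#       print(pred["label"], round(pred["score"], 3), text[:60])
```

Each prediction is a dict with a `label` key (one of the model's 7 labels, as defined in its `config.json`) and a softmax `score`.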