jonathanagustin committed
Commit 43dc2e5
1 Parent(s): 1cf8426

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +261 -58

README.md CHANGED
@@ -1,78 +1,281 @@
  ---
- tags:
- - generated_from_trainer
- datasets:
- - squad_v2
  model-index:
- - name: bert-finetuned-uncased-squad_v2
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # bert-finetuned-uncased-squad_v2

- This model was trained from scratch on the squad_v2 dataset.
- It achieves the following results on the evaluation set:
- - Loss: 1.1459

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 64
- - eval_batch_size: 64
- - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 256
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 4

- ### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 3.2307        | 0.2   | 100  | 1.8959          |
- | 1.9581        | 0.39  | 200  | 1.4856          |
- | 1.6358        | 0.59  | 300  | 1.3948          |
- | 1.4964        | 0.78  | 400  | 1.2934          |
- | 1.4169        | 0.98  | 500  | 1.2605          |
- | 1.327         | 1.18  | 600  | 1.2218          |
- | 1.2763        | 1.37  | 700  | 1.2539          |
- | 1.2755        | 1.57  | 800  | 1.2090          |
- | 1.251         | 1.76  | 900  | 1.2041          |
- | 1.229         | 1.96  | 1000 | 1.2159          |
- | 1.1921        | 2.16  | 1100 | 1.1828          |
- | 1.1926        | 2.35  | 1200 | 1.2120          |
- | 1.1606        | 2.55  | 1300 | 1.1737          |
- | 1.1486        | 2.75  | 1400 | 1.1469          |
- | 1.1195        | 2.94  | 1500 | 1.1459          |
- | 1.0883        | 3.14  | 1600 | 1.1570          |
- | 1.0526        | 3.33  | 1700 | 1.1771          |
- | 1.0611        | 3.53  | 1800 | 1.1740          |
- | 1.0521        | 3.73  | 1900 | 1.1596          |
- | 1.0476        | 3.92  | 2000 | 1.1538          |

- ### Framework versions

- - Transformers 4.34.1
- - Pytorch 2.1.0+cu118
- - Datasets 2.14.5
- - Tokenizers 0.14.1

  ---
+ language: en
+ license: mit
+ model_details: "\n ## Abstract\n This model, 'bert-finetuned-uncased',\
+ \ is a question-answering chatbot trained on the SQuAD dataset, demonstrating competency\
+ \ in building conversational AI using recent advances in natural language processing.\
+ \ It utilizes a BERT model fine-tuned for extractive question answering.\n\n \
+ \ ## Data Collection and Preprocessing\n The model was trained on the\
+ \ Stanford Question Answering Dataset (SQuAD), which contains over 100,000 question-answer\
+ \ pairs based on Wikipedia articles. The data preprocessing involved tokenizing\
+ \ context paragraphs and questions, truncating sequences to fit BERT's max length,\
+ \ and adding special tokens to mark question and paragraph segments.\n\n \
+ \ ## Model Architecture and Training\n The architecture is based on the BERT\
+ \ transformer model, which was pretrained on large unlabeled text corpora. For this\
+ \ project, the BERT base model was fine-tuned on SQuAD for extractive question answering,\
+ \ with additional output layers for predicting the start and end indices of the\
+ \ answer span.\n\n ## SQuAD 2.0 Dataset\n SQuAD 2.0 combines the existing\
+ \ SQuAD data with over 50,000 unanswerable questions written adversarially by crowdworkers\
+ \ to look similar to answerable ones. This version of the dataset challenges models\
+ \ to not only produce answers when possible but also determine when no answer is\
+ \ supported by the paragraph and abstain from answering.\n "
+ intended_use: "\n - Answering questions from the squad_v2 dataset.\n \
+ \ - Developing question-answering systems within the scope of the aai520-project.\n\
+ \ - Research and experimentation in the NLP question-answering domain.\n\
+ \ "
+ limitations_and_bias: "\n The model inherits limitations and biases from the\
+ \ 'bert-base-uncased' model, as it was trained on the same foundational data. \n\
+ \ It may underperform on questions that are ambiguous or too far outside\
+ \ the scope of the topics covered in the squad_v2 dataset. \n Additionally,\
+ \ the model may reflect societal biases present in its training data.\n "
+ ethical_considerations: "\n This model should not be used for making critical\
+ \ decisions without human oversight, \n as it can generate incorrect or biased\
+ \ answers, especially for topics not covered in the training data. \n Users\
+ \ should also consider the ethical implications of using AI in decision-making processes\
+ \ and the potential for perpetuating biases.\n "
+ evaluation: "\n The model was evaluated on the squad_v2 dataset using various\
+ \ metrics. These metrics, along with their corresponding scores, \n are detailed\
+ \ in the 'eval_results' section. The evaluation process ensured a comprehensive\
+ \ assessment of the model's performance \n in question-answering scenarios.\n\
+ \ "
+ training: "\n The model was trained over 4 epochs with a learning rate of 2e-05,\
+ \ using a batch size of 64. \n The training utilized a cross-entropy loss\
+ \ function and the AdamW optimizer, with gradient accumulation over 4 steps.\n \
+ \ "
+ tips_and_tricks: "\n For optimal performance, questions should be clear, concise,\
+ \ and grammatically correct. \n The model performs best on questions related\
+ \ to topics covered in the squad_v2 dataset. \n It is advisable to pre-process\
+ \ text for consistency in encoding and punctuation, and to manage expectations for\
+ \ questions on topics outside the training data.\n "
  model-index:
+ - name: bert-finetuned-uncased
+   results:
+   - task:
+       type: question-answering
+     dataset:
+       name: SQuAD v2
+       type: squad_v2
+     metrics:
+     - type: Exact
+       value: 28.594289564558242
+     - type: F1
+       value: 32.86661007615217
+     - type: Total
+       value: 11873
+     - type: Hasans Exact
+       value: 52.260458839406205
+     - type: Hasans F1
+       value: 60.817351793885734
+     - type: Hasans Total
+       value: 5928
+     - type: Noans Exact
+       value: 4.995794785534062
+     - type: Noans F1
+       value: 4.995794785534062
+     - type: Noans Total
+       value: 5945
+     - type: Best Exact
+       value: 50.11370336056599
+     - type: Best Exact Thresh
+       value: 0.0
+     - type: Best F1
+       value: 50.11370336056599
+     - type: Best F1 Thresh
+       value: 0.0
  ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ - **Developed by:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** en
+ - **License:** mit
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
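+
+ A minimal sketch with the `transformers` question-answering pipeline; the repo id below is an assumed placeholder, not a confirmed location for this checkpoint:
+
+ ```python
+ from transformers import pipeline
+
+ # Placeholder repo id -- substitute the actual Hub id of this checkpoint.
+ qa = pipeline("question-answering", model="jonathanagustin/bert-finetuned-uncased-squad_v2")
+
+ result = qa(
+     question="What does SQuAD 2.0 add to the original SQuAD data?",
+     context=(
+         "SQuAD 2.0 combines the existing SQuAD data with over 50,000 "
+         "unanswerable questions written adversarially by crowdworkers."
+     ),
+     handle_impossible_answer=True,  # allow the model to abstain, as SQuAD 2.0 requires
+ )
+ print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
+ ```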
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
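+
+ A minimal sketch for loading the dataset named in the metadata (`squad_v2` on the Hugging Face Hub); the split sizes in the comment are properties of the public dataset:
+
+ ```python
+ from datasets import load_dataset
+
+ # SQuAD 2.0: ~130k training examples and 11,873 validation examples
+ # (matching the "Total" metric above); roughly half of the validation
+ # questions are unanswerable.
+ squad = load_dataset("squad_v2")
+ print(squad["train"][0]["question"])
+ print(squad["train"][0]["answers"])  # {'text': [...], 'answer_start': [...]}
+ ```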
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
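+
+ The frontmatter describes preprocessing as tokenizing question/context pairs, truncating to BERT's maximum length, and adding special tokens to mark the segments. A sketch of that step under assumed values (`max_length=384` and `stride=128` are not documented in this card):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+
+ def preprocess(example):
+     # [CLS] question [SEP] context [SEP]; token_type_ids mark the
+     # question and paragraph segments.
+     return tokenizer(
+         example["question"],
+         example["context"],
+         truncation="only_second",  # truncate only the context, never the question
+         max_length=384,            # assumed; must fit BERT's 512-token limit
+         stride=128,                # assumed overlap when a long context is split
+         return_overflowing_tokens=True,
+         padding="max_length",
+     )
+ ```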
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
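+
+ The `training` field above and the previous card agree on the key settings: 4 epochs, learning rate 2e-05, batch size 64 with gradient accumulation over 4 steps (effective batch size 256), AdamW, and a linear schedule. A sketch reconstructing that configuration (the output directory is a placeholder):
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="bert-finetuned-uncased-squad_v2",  # placeholder path
+     learning_rate=2e-5,
+     per_device_train_batch_size=64,
+     per_device_eval_batch_size=64,
+     gradient_accumulation_steps=4,  # effective train batch size of 256
+     num_train_epochs=4,
+     lr_scheduler_type="linear",     # linear decay, per the previous card
+     seed=42,
+ )
+ ```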
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Data Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
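+
+ The scores in the frontmatter (Exact, F1, and the HasAns/NoAns and best-threshold breakdowns) follow the official SQuAD v2 metric. A sketch of computing it with the `evaluate` library; the example id and texts are illustrative only:
+
+ ```python
+ import evaluate
+
+ squad_v2_metric = evaluate.load("squad_v2")
+
+ # Each prediction carries a no-answer probability so the metric can
+ # score abstention on unanswerable questions.
+ predictions = [{
+     "id": "example-0",  # illustrative id
+     "prediction_text": "Normandy",
+     "no_answer_probability": 0.1,
+ }]
+ references = [{
+     "id": "example-0",
+     "answers": {"text": ["Normandy"], "answer_start": [159]},
+ }]
+ # Returns exact/f1 plus per-split and best-threshold scores like those above.
+ print(squad_v2_metric.compute(predictions=predictions, references=references))
+ ```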
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
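+
+ Per the frontmatter, the objective is extractive question answering: a BERT encoder with a span head that predicts the start and end indices of the answer. A sketch of that architecture as instantiated in `transformers` (shown from the `bert-base-uncased` starting point rather than the fine-tuned weights):
+
+ ```python
+ import torch
+ from transformers import AutoModelForQuestionAnswering, AutoTokenizer
+
+ model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+
+ inputs = tokenizer(
+     "What does the span head predict?",
+     "The span head predicts start and end indices of the answer.",
+     return_tensors="pt",
+ )
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ # One start logit and one end logit per token; the answer is the
+ # highest-scoring valid (start, end) pair.
+ print(outputs.start_logits.shape, outputs.end_logits.shape)
+ ```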
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]