Bachstelze committed on
Commit 13f21d0
1 Parent(s): 0706518

Update README.md

Files changed (1)
  1. README.md +12 -1
README.md CHANGED
@@ -42,6 +42,8 @@ alt="instruction BERT drawing" width="600"/>

A minimalistic instruction model with an already well-analysed and pretrained encoder like BERT.
So we can research the [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) with instruction-tuned models, [look at the attention](https://colab.research.google.com/drive/1mNP7c0RzABnoUgE6isq8FTp-NuYNtrcH?usp=sharing) and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).
+
+ The training code is released in the [instructionBERT repository](https://gitlab.com/Bachstelze/instructionbert).
We used the Huggingface API for [warm-starting](https://huggingface.co/blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co/docs/transformers/model_doc/bert-generation) with [Encoder-Decoder-Models](https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/encoder-decoder) for this purpose.

## Run the model with a longer output
@@ -58,4 +60,13 @@ input = "Write a poem about love, peace and pancake."
input_ids = tokenizer(input, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0]))
- ```
+ ```
+
+ ## Training parameters
+
+ - base model: "bert-base-cased"
+ - test subset of the Muennighoff/flan dataset
+ - trained for 0.97 epochs
+ - batch size of 14
+ - 10000 warm-up steps
+ - learning rate of 0.00005
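The warm-starting step described in the README can be sketched with the Hugging Face classes it links to. The following is a minimal sketch only, assuming the "bert-base-cased" checkpoint listed under the training parameters; the exact setup used for instructionBERT is in the linked GitLab repository and may differ.

```python
from transformers import (
    BertGenerationDecoder,
    BertGenerationEncoder,
    BertTokenizer,
    EncoderDecoderModel,
)

# Warm-start both sides of the encoder-decoder model from the same pretrained
# BERT checkpoint (sketch; instructionBERT's exact settings may differ).
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

encoder = BertGenerationEncoder.from_pretrained(
    "bert-base-cased",
    bos_token_id=tokenizer.cls_token_id,
    eos_token_id=tokenizer.sep_token_id,
)
# The decoder gets causal masking and cross-attention over the encoder outputs.
decoder = BertGenerationDecoder.from_pretrained(
    "bert-base-cased",
    add_cross_attention=True,
    is_decoder=True,
    bos_token_id=tokenizer.cls_token_id,
    eos_token_id=tokenizer.sep_token_id,
)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)

# Generation needs the start and pad token ids on the top-level config.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```

The warm-started model is then fine-tuned on instruction data and saved like any other `transformers` model with `model.save_pretrained(...)`.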
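The generation snippet in the diff above assumes that `model` and `tokenizer` are already loaded. A self-contained version could look like the following sketch; the repository id used here is a placeholder assumption and should be replaced with the actual model id of this page.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Placeholder model id (assumption) -- replace with the id of this model page.
model_id = "Bachstelze/instructionBERT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

input = "Write a poem about love, peace and pancake."
input_ids = tokenizer(input, return_tensors="pt").input_ids

# max_new_tokens bounds the length of the generated continuation.
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0]))
```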
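The listed training parameters map onto the standard Hugging Face `Seq2SeqTrainingArguments`. The sketch below only illustrates that mapping under the stated values; the actual training script, including the loading of the Muennighoff/flan test subset, is in the instructionBERT repository.

```python
from transformers import Seq2SeqTrainingArguments

# Rough mapping of the listed hyperparameters onto Trainer arguments (sketch only).
training_args = Seq2SeqTrainingArguments(
    output_dir="instructionBERT",
    num_train_epochs=0.97,           # trained for 0.97 epochs
    per_device_train_batch_size=14,  # batch size of 14
    warmup_steps=10000,              # 10000 warm-up steps
    learning_rate=5e-5,              # learning rate of 0.00005
    predict_with_generate=True,
)
```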