bryant1410 committed
Commit a5a6b70
1 Parent(s): 000bd8e

Fix `model_max_length` in `tokenizer_config.json`


The current value of `model_max_length` in `tokenizer_config.json` (basically infinity) is inconsistent with `max_position_embeddings` in `config.json`. It is also inconsistent with the value used by `bge-base-en`.

This also happens with `bge-small-en`, but I thought it best to have any discussion here first before sending a PR for that one as well.
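For context, the odd-looking old value appears to be the sentinel that `transformers` stores when no real limit is recorded: the integer form of the float `1e30`. The check below is a minimal sketch of the intended consistency; the inline dicts are illustrative excerpts of the two config files, not the full files.

```python
import json

# Excerpts mirroring the relevant keys of the two config files after the fix
# (illustrative, not the complete files).
tokenizer_config = json.loads('{"model_max_length": 512}')
model_config = json.loads('{"max_position_embeddings": 512}')

# The previous value is what Python produces for int(1e30), i.e. the
# float-rounded sentinel used when no real limit is set:
assert int(1e30) == 1000000000000000019884624838656

# After this change the tokenizer and model limits agree:
assert tokenizer_config["model_max_length"] == model_config["max_position_embeddings"]
```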

Files changed (1)
  tokenizer_config.json +1 -1
tokenizer_config.json CHANGED
@@ -4,7 +4,7 @@
   "do_basic_tokenize": true,
   "do_lower_case": true,
   "mask_token": "[MASK]",
- "model_max_length": 1000000000000000019884624838656,
+ "model_max_length": 512,
   "never_split": null,
   "pad_token": "[PAD]",
   "sep_token": "[SEP]",