repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
287
closed
Gpt2
Adding GPT-2...
02-18-2019 10:12:43
02-18-2019 10:12:43
transformers
286
closed
Update activation function docstring
02-16-2019 20:18:08
02-16-2019 20:18:08
Thanks!
transformers
285
closed
Anyone tried this model to write a next sentence?
02-16-2019 13:45:00
02-16-2019 13:45:00
Closing for now.<|||||>Why ? @thomwolf <|||||>I'm trying to clean up the issues to get a better view of what needs to be fixed. But you are right opening/closing issue is too binary. Let's add labels instead.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
284
closed
Error in Apex's FusedLayerNorm
After installing `apex` with the cuda extensions and running BERT, I get the following error in `FusedLayerNormAffineFunction`, [apex/normalization/fused_layer_norm.py](https://github.com/NVIDIA/apex/blob/master/apex/normalization/fused_layer_norm.py#L16) (line 21). ``` RuntimeError: a Tensor with 2482176 elements cannot be converted to Scalar (item at /pytorch/aten/src/ATen/native/Scalar.cpp:9) ``` Here are the shapes of my tensors: ``` input_ - [32, 101, 768] (this is the embeddings tensor in BertEmbeddings) bias_ - [768] weight_ - [768] self.normalized_shape - [768] ``` I'm not sure if it's a problem with `pytorch-pretrained-BERT` or `apex`. Any idea? Full stacktrace below. ``` File "/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 710, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 261, in forward embeddings = self.LayerNorm(embeddings) File "/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/normalization/fused_layer_norm.py", line 149, in forward input, self.weight, self.bias) File "/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/normalization/fused_layer_norm.py", line 21, in forward input_, self.normalized_shape, weight_, bias_, self.eps) RuntimeError: a Tensor with 2482176 elements cannot be converted to Scalar (item at /pytorch/aten/src/ATen/native/Scalar.cpp:9) frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f1aa5da3021 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f1aa5da28ea in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/lib/libc10.so) frame #2: at::native::item(at::Tensor const&) + 0x12c3 (0x7f1aa690d5b3 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/lib/libcaffe2.so) frame #3: at::TypeDefault::item(at::Tensor const&) const + 0x55 (0x7f1aa6b1c905 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/lib/libcaffe2.so) frame #4: torch::autograd::VariableType::eye_out(at::Tensor&, long, long) const + 0x184 (0x7f1aa4faeec4 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/lib/libtorch.so.1) frame #5: <unknown function> + 0x89ca (0x7f1a82e739ca in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) frame #6: layer_norm_affine(at::Tensor, c10::ArrayRef<long>, at::Tensor, at::Tensor, double) + 0x185 (0x7f1a82e762a5 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) frame #7: <unknown function> + 0x18d44 (0x7f1a82e83d44 in 
/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) frame #8: <unknown function> + 0x16495 (0x7f1a82e81495 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) frame #9: _PyCFunction_FastCallDict + 0x154 (0x55a8f9925744 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #10: <unknown function> + 0x198610 (0x55a8f99ac610 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #11: _PyEval_EvalFrameDefault + 0x30a (0x55a8f99d138a in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #12: <unknown function> + 0x71e1 (0x7f1af51ee1e1 in /home/hyper/.PyCharm2018.1/system/cythonExtensions/_pydevd_frame_eval_ext/pydevd_frame_evaluator.cpython-36m-x86_64-linux-gnu.so) frame #13: _PyFunction_FastCallDict + 0x11b (0x55a8f99a6bab in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #14: _PyObject_FastCallDict + 0x26f (0x55a8f9925b0f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #15: _PyObject_Call_Prepend + 0x63 (0x55a8f992a6a3 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #16: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #17: THPFunction_do_forward(THPFunction*, _object*) + 0x15c (0x7f1ae02e21ec in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #18: PyCFunction_Call + 0x5f (0x55a8f992863f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #19: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #20: <unknown function> + 0x16ba91 (0x55a8f997fa91 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #21: _PyObject_FastCallDict + 0x8b (0x55a8f992592b in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #22: <unknown function> + 0x19857e (0x55a8f99ac57e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #23: _PyEval_EvalFrameDefault + 0x30a (0x55a8f99d138a in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #24: <unknown function> + 0x71e1 (0x7f1af51ee1e1 in /home/hyper/.PyCharm2018.1/system/cythonExtensions/_pydevd_frame_eval_ext/pydevd_frame_evaluator.cpython-36m-x86_64-linux-gnu.so) frame #25: _PyFunction_FastCallDict + 0x11b (0x55a8f99a6bab in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #26: _PyObject_FastCallDict + 0x26f (0x55a8f9925b0f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #27: _PyObject_Call_Prepend + 0x63 (0x55a8f992a6a3 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #28: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #29: _PyEval_EvalFrameDefault + 0x19ec (0x55a8f99d2a6c in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #30: <unknown function> + 0x71e1 (0x7f1af51ee1e1 in /home/hyper/.PyCharm2018.1/system/cythonExtensions/_pydevd_frame_eval_ext/pydevd_frame_evaluator.cpython-36m-x86_64-linux-gnu.so) frame #31: <unknown function> + 0x1918e4 (0x55a8f99a58e4 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #32: _PyFunction_FastCallDict + 0x1bc (0x55a8f99a6c4c in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #33: _PyObject_FastCallDict + 0x26f (0x55a8f9925b0f in 
/home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #34: _PyObject_Call_Prepend + 0x63 (0x55a8f992a6a3 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #35: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #36: <unknown function> + 0x16ba91 (0x55a8f997fa91 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #37: _PyObject_FastCallDict + 0x8b (0x55a8f992592b in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #38: <unknown function> + 0x19857e (0x55a8f99ac57e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #39: _PyEval_EvalFrameDefault + 0x30a (0x55a8f99d138a in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #40: <unknown function> + 0x71e1 (0x7f1af51ee1e1 in /home/hyper/.PyCharm2018.1/system/cythonExtensions/_pydevd_frame_eval_ext/pydevd_frame_evaluator.cpython-36m-x86_64-linux-gnu.so) frame #41: <unknown function> + 0x1918e4 (0x55a8f99a58e4 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #42: _PyFunction_FastCallDict + 0x3da (0x55a8f99a6e6a in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #43: _PyObject_FastCallDict + 0x26f (0x55a8f9925b0f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #44: _PyObject_Call_Prepend + 0x63 (0x55a8f992a6a3 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #45: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #46: _PyEval_EvalFrameDefault + 0x19ec (0x55a8f99d2a6c in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #47: <unknown function> + 0x71e1 (0x7f1af51ee1e1 in /home/hyper/.PyCharm2018.1/system/cythonExtensions/_pydevd_frame_eval_ext/pydevd_frame_evaluator.cpython-36m-x86_64-linux-gnu.so) frame #48: <unknown function> + 0x1918e4 (0x55a8f99a58e4 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #49: _PyFunction_FastCallDict + 0x1bc (0x55a8f99a6c4c in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #50: _PyObject_FastCallDict + 0x26f (0x55a8f9925b0f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #51: _PyObject_Call_Prepend + 0x63 (0x55a8f992a6a3 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #52: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #53: <unknown function> + 0x16ba91 (0x55a8f997fa91 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #54: _PyObject_FastCallDict + 0x8b (0x55a8f992592b in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #55: <unknown function> + 0x19857e (0x55a8f99ac57e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #56: _PyEval_EvalFrameDefault + 0x30a (0x55a8f99d138a in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #57: <unknown function> + 0x71e1 (0x7f1af51ee1e1 in /home/hyper/.PyCharm2018.1/system/cythonExtensions/_pydevd_frame_eval_ext/pydevd_frame_evaluator.cpython-36m-x86_64-linux-gnu.so) frame #58: <unknown function> + 0x1918e4 (0x55a8f99a58e4 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #59: _PyFunction_FastCallDict + 0x3da (0x55a8f99a6e6a in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #60: _PyObject_FastCallDict + 0x26f (0x55a8f9925b0f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #61: _PyObject_Call_Prepend + 0x63 (0x55a8f992a6a3 in 
/home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #62: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) frame #63: _PyEval_EvalFrameDefault + 0x19ec (0x55a8f99d2a6c in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python) ```
02-15-2019 14:01:48
02-15-2019 14:01:48
This was an error in `apex`, due to mismatched compiled libraries. The fix can be found [here](https://github.com/NVIDIA/apex/issues/156#issuecomment-465301976). > Try a full `pip uninstall apex`, then `cd apex_repo_dir; rm -rf build; python setup.py install --cuda_ext --cpp_ext` and see if the segfault persists.
transformers
283
closed
unicode
The general run_squad.py doesn't appear to work properly for python 2.7 because of the json dumping string vs unicode issues during the eval.
python2.7 run_squad.py \
  --bert_model bert-base-uncased \
  --do_train \
  --do_predict \
  --do_lower_case \
  --train_file $SQUAD_DIR/train-v1.1.json \
  --predict_file $SQUAD_DIR/dev-v1.1.json \
  --train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2.0 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /pytorch-pretrained-BERT/tmp/debug_squad/
02/15/2019 04:55:17 - INFO - __main__ - Writing predictions to: /pytorch-pretrained-BERT/tmp/debug_squad/predictions.json
02/15/2019 04:55:17 - INFO - __main__ - Writing nbest to: /pytorch-pretrained-BERT/tmp/debug_squad/nbest_predictions.json
Traceback (most recent call last):
  File "run_squad.py", line 1077, in <module>
    main()
  File "run_squad.py", line 1073, in main
    args.version_2_with_negative, args.null_score_diff_threshold)
  File "run_squad.py", line 619, in write_predictions
    writer.write(json.dumps(all_predictions, indent=4) + "\n")
TypeError: write() argument 1 must be unicode, not str
02-15-2019 11:21:08
02-15-2019 11:21:08
Yes, the examples are not adapted for Python 2, only the library. I don't plan to adapt or maintain them but feel free to submit a PR!<|||||>env: python2.7 line 662: writer.write(json.dumps(all_predictions, indent=4) + "\n") change as :writer.write(json.dumps(all_predictions, indent=4).decode('utf-8') + "\n")
transformers
282
closed
Fix some bug about SQuAD code
Fixes the issue in #207 ![image](https://user-images.githubusercontent.com/16603773/52842280-90939a00-3139-11e9-87e8-92bcfb8af76d.png) This error occurs when 'nbest' contains only one item and it has no 'text', so the code that adds an empty prediction will not work. I added another condition to solve it. https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L570-L590
02-15-2019 08:01:56
02-15-2019 08:01:56
Ok, thanks @wlhgtc!
transformers
281
closed
Conversion of gpt-2 small model
Hey! This seems like something a lot of folks will want. I'd like to be able to load GPT-2 117M and fine-tune it. What's necessary to convert it? I looked at the tensorflow code a little and it looks vaguely related to transformer xl, but I haven't looked at the paper yet or etc.
02-15-2019 07:52:25
02-15-2019 07:52:25
I'd like to help out on this. I will have a look and try to understand the earlier bridges in this repo. Let me know if you see anywhere a newcomer can be helpful with.<|||||>Sure, would be happy to welcome a PR. You can start from `modeling_openai.py` and `tokenization_openai.py`'s codes. It's pretty much the same architecture (read OpenAI's paper first!) You should mostly reimplement the BPE tokenization to work byte-level and move the layer norms modules to the input rather than the output of the layers.<|||||>Ok, GPT-2 should be in the coming 0.6.0 release (see #287)<|||||>Ok it's on pip: https://github.com/huggingface/pytorch-pretrained-BERT/releases/tag/v0.6.0 Please read the updated README for details. All should be there (model and examples). Have a nice week y'all.<|||||>@thomwolf do we have pytorch implementation of GPT-2 small? <|||||>Yes, just read the README
transformers
280
closed
Have you evaluated the inference speed of Transformer-XL?
Thank you very much!
02-15-2019 03:39:14
02-15-2019 03:39:14
transformers
279
closed
DataParallel imbalanced memory usage
Similar to this issue: https://discuss.pytorch.org/t/dataparallel-imbalanced-memory-usage/22551/12, when I run run_lm_finetuning.py using 4 GPUs on Microsoft Azure, the first GPU has 4000MB of memory usage while the other 3 are at 700MB. The volatile GPU utilization for the first GPU is also at 100% while the rest are at 0%. It seems that the solution might have something to do with incorporating the loss calculation in the forward pass, but I do not know how to solve it.
02-14-2019 06:06:38
02-14-2019 06:06:38
Managed to get the volatile GPU utilization to work properly, but memory allocation is still imbalanced:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.78       Driver Version: 410.78       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P40           Off  | 00000275:00:00.0 Off |                    0 |
| N/A   43C    P0    61W / 250W |  11151MiB / 22919MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P40           Off  | 00003984:00:00.0 Off |                    0 |
| N/A   41C    P0    60W / 250W |   5979MiB / 22919MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla P40           Off  | 00005B97:00:00.0 Off |                    0 |
| N/A   38C    P0    63W / 250W |   5979MiB / 22919MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla P40           Off  | 0000EA90:00:00.0 Off |                    0 |
| N/A   42C    P0    61W / 250W |   5979MiB / 22919MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      5927      C   python                                    11141MiB  |
|    1      5927      C   python                                     5969MiB  |
|    2      5927      C   python                                     5969MiB  |
|    3      5927      C   python                                     5969MiB  |
+-----------------------------------------------------------------------------+
<|||||>Yes, there is no mechanism to balance memory in the examples. In NVIDIA's tests, it didn't help.
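A minimal sketch of the "loss inside forward" idea mentioned in this thread, with a toy `nn.Linear` standing in for the real model (it is not the example script's code): each `nn.DataParallel` replica returns a one-element loss tensor, so only scalars are gathered on GPU 0 instead of full logit tensors, which is the usual way to reduce this kind of imbalance.

```python
import torch
from torch import nn

class LossWrapper(nn.Module):
    """Compute the loss inside forward() so each replica returns a scalar."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, inputs, labels):
        logits = self.model(inputs)
        # 1-element tensor -> DataParallel gathers one number per GPU
        return nn.functional.cross_entropy(logits, labels).unsqueeze(0)

base = nn.Linear(10, 2)                          # stand-in for the real model
wrapped = nn.DataParallel(LossWrapper(base)).cuda()
x = torch.randn(8, 10).cuda()
y = torch.randint(0, 2, (8,)).cuda()
loss = wrapped(x, y).mean()                      # average the per-GPU losses
loss.backward()
```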
transformers
278
closed
PAD symbols change the output
Adding `[PAD]` symbols to an input sentence changes the output of the model. I put together a small example here: https://gist.github.com/juditacs/8be068d5f9063ad68e3098a473b497bd I also noticed that the seed state affects the output as well. Resetting it in every run ensures that the output is always the same. Is this because of layernorm?
02-14-2019 00:57:02
02-14-2019 00:57:02
Hi Judit: - Regarding the padding: you should send an `attention_mask` with the input if the input is smaller than the tensor you are sending in (see the description on `BertModel` in the README). - Regarding the seed: don't forget to put your model in eval mode (`model.eval()`) to disable the dropout layers.<|||||>@thomwolf Despite the `attention_mask` the values are a slightly different. It is normal that `[PAD]` vectors have different values? ``` from pytorch_transformers import BertModel from rest.run_glue import * tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=False) model = BertModel.from_pretrained('bert-base-uncased') model.eval() torch.manual_seed(0) sent = "this is a complicated sentence [SEP]" tokens = ['[CLS]'] + tokenizer.tokenize(sent) ids = tokenizer.convert_tokens_to_ids(tokens) t = torch.LongTensor([ids]) with torch.no_grad(): out = model(t)[0] torch.manual_seed(0) sent = "this is a complicated sentence [SEP]" tokens = ['[CLS]'] + tokenizer.tokenize(sent) tokens.extend(['[PAD]'] * 3) ids = torch.tensor(tokenizer.convert_tokens_to_ids(tokens)).unsqueeze(0) mask = torch.zeros((1, ids.shape[1], ids.shape[1]), dtype=torch.float) mask[:, :, 0:-3] = 1.0 with torch.no_grad(): out2 = model(ids, attention_mask = mask[:, 0])[0] print('------------') for i in range(out.shape[1]): print(i, out[0][0, i].item()) print('------------') for i in range(out2.shape[1]): torch.manual_seed(0) print(i, out2[0][0, i].item()) ``` here is the output ``` ------------ 0 -0.10266201943159103 1 0.11214534193277359 2 -0.1575649380683899 3 -0.3163739740848541 4 -0.4168904423713684 5 -0.4069269001483917 6 0.28849801421165466 ------------ 0 -0.10266169905662537 1 0.1121453121304512 2 -0.15756472945213318 3 -0.3163738548755646 4 -0.41689014434814453 5 -0.40692687034606934 6 0.288497656583786 7 0.28312715888023376 8 0.08457585424184799 9 -0.3077544569969177 ``` `[PAD]`'s are different, is that normal? **7 0.28312715888023376 8 0.08457585424184799 9 -0.3077544569969177**<|||||>I am having same problem and couldn't find a reason or fix yet.<|||||>Due to Position Embeddings every token results in different vectors. You might want to google "How the Embedding Layers in BERT Were Implemented"<|||||>> Due to Position Embeddings every token results in different vectors. Could you be more specific what is the source of this numerical instability? Perhaps refer to exact code? I am still not exactly sure why output changes slightly when using attention mask, when I use differently padded inputs. There should be no self-attention over padded inputs. Self-attention scores are set to large negative number before softmax: `attention_scores = attention_scores + attention_mask` Could it be that sometimes -10_000 might not be enough to get 0 from softmax? I have recorded differences at most in the order of 2e-6. Or is it because of arithmetic errors? According to https://en.wikipedia.org/wiki/Machine_epsilon, upped bound for the relative error in 32bit format is somewhere at 1.19e-07, which is still an order away. Could that be because of the error propagation through many FP32 operations?
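A small numerical sketch of the masking discussed above (not the library's internal code, and the shapes are made up): adding -10000 to the attention scores of [PAD] positions drives their softmax weight to effectively zero, which suggests the tiny differences reported in this thread come from float32 accumulation elsewhere in the network rather than from attention leaking onto the padding.

```python
import torch

torch.manual_seed(0)
scores = torch.randn(1, 4, 7)             # hypothetical attention scores, 7 positions
mask = torch.zeros(1, 1, 7)
mask[..., -3:] = -10000.0                 # last 3 positions are [PAD]
probs = torch.softmax(scores + mask, dim=-1)

print(probs[..., -3:].max())              # effectively 0: exp(-10000) underflows in fp32
print(probs[..., :-3].sum(dim=-1))        # the non-pad weights still sum to ~1
```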
transformers
277
closed
80min training time to fine-tune BERT-base on the SQuAD dataset instead of 24min?
I just fine-tuned BERT-base on the SQuAD dataset with an AWS EC2 `p3.2xlarge` Deep Learning AMI with a single Tesla V100 16GB: I used the config in your README: ``` export SQUAD_DIR=/path/to/SQUAD python run_squad.py \ --bert_model bert-base-uncased \ --do_train \ --do_predict \ --do_lower_case \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ ``` It took 80min. According to your README: > This example code fine-tunes BERT on the SQuAD dataset. It runs in 24 min (with BERT-base) or 68 min (with BERT-large) on a single tesla V100 16GB. How to explain this difference? Is there any way to accelerate the training to 24min as well? Thanks
02-13-2019 15:48:43
02-13-2019 15:48:43
You should use 16bit training (`--fp16` argument). You can use the dynamic loss scaling or tune the loss scale yourself if the results are not the best.<|||||>@thomwolf Thanks! I enabled 16bit training and it took about 20min/epoch. Is that what you experienced?<|||||>Sounds good.<|||||>@thomwolf May I know what is the expected EM & F1 score if users train for 2-3 epochs? I got 43 and 48 respectively.<|||||>You can have a look at the readme examples but it should be a lot higher, around 88-90. Maybe your batch size is too small, look at the readme for more information.
transformers
276
closed
Argument do_lower_case is repeated in run_lm_finetuning.py
Hi, I am trying to finetune LM and am facing the following issue. **argparse.ArgumentError: argument --do_lower_case: conflicting option string: --do_lower_case**
02-13-2019 15:29:35
02-13-2019 15:29:35
Hi @dileep1996, this has just been fixed in master (#275)!
transformers
275
closed
--do_lower_case is duplicated in parser args
I'm therefore deleting one repetition (please review!)
02-13-2019 14:30:58
02-13-2019 14:30:58
Thanks @davidefiocco!
transformers
274
closed
Help: how to get index/symbol from last_hidden, on text8?
I am trying this on the text8 dataset and I want to print the next token. The model's forward() in the source code outputs the loss, but I want to get the logits and the softmax result, and finally the next token in the vocab. How do I get the index/symbol from last_hidden on text8? Thanks
02-13-2019 09:50:51
02-13-2019 09:50:51
Hi, There is no pretrained character-level model for text8 right now. Only a word-level model trained on wikitext 103.
transformers
273
closed
Update to fifth release
Mostly a bug fix update for loading the `TransfoXLModel` from s3: - this fixes a bug in the loading of the pretrained `TransfoXLModel` from the s3 dump (which is a converted `TransfoXLLMHeadModel`) and the weights were not loaded. - I also added a fallback of `OpenAIGPTTokenizer` on BERT's `BasicTokenizer` when SpaCy and ftfy are not installed. Using BERT's `BasicTokenizer` instead of SpaCy should be fine in most cases as long as you have a relatively clean input (SpaCy+ftfy were included to exactly reproduce the paper's pre-processing steps on the Toronto Book Corpus) and this also let us use the `never_split` option to avoid splitting special tokens like `[CLS], [SEP]...` which is easier than adding the tokens after tokenization. - I also updated the README on the tokenizers options and methods which was lagging behind a bit.
02-13-2019 09:16:53
02-13-2019 09:16:53
transformers
272
closed
Facing issue in Run Fine tune LM
My LM sample.txt is such that each doc has only one line, so in BERTDataSet __len__ is giving a negative value. I tried changing it to use self.num_docs - 1:
def __len__(self):
    print(self.corpus_lines, self.num_docs)
    return self.corpus_lines - self.num_docs - 1
I am also getting errors at multiple steps. Is the code written with the assumption that each document will have multiple lines in it?
02-13-2019 02:45:55
02-13-2019 02:45:55
Yes, you need documents with multiple lines because only sentences from the same doc are used as positive examples for the nextSentence prediction. <|||||>Seems like the expected behavior. Feel free to open a PR to extend the example if you want @tuhinjubcse.
transformers
271
closed
Transformer-XL: wrong encoding in the vocab
Seems that something odd happened during vocab serialization as many symbols with non-latin symbols are broken. E.g.: ``` In [1]: import pytorch_pretrained_bert In [2]: tokenizer = pytorch_pretrained_bert.TransfoXLTokenizer.from_pretrained('transfo-xl-wt103') In [4]: print(tokenizer.idx2sym[224178]) 'Enquêtes ``` The correct token should be "'Enquêtes". And there are around 10k tokens like this. Could it be 'encoding="latin1"' here? https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_transfo_xl_checkpoint_to_pytorch.py#L54
02-13-2019 01:52:38
02-13-2019 01:52:38
Yeah, the re-encoding seems to fix the bug: ``` In [6]: "'Enquêtes".encode('latin1').decode('utf8') Out[6]: "'Enquêtes" ```<|||||>which version of python are you using?<|||||>It's python3.6. Does the snippet above gives different result on other version? JFYI: I'm using the script below to create a fixed vocab and save in the current folder. ```(python) import collections import urllib.request from pytorch_pretrained_bert import tokenization_transfo_xl import torch def fix(x): return x.encode('latin1').decode('utf8') def main(): basedir = '.' good_vocab_path = basedir + '/' + tokenization_transfo_xl.VOCAB_NAME vocab_url = tokenization_transfo_xl.PRETRAINED_VOCAB_ARCHIVE_MAP['transfo-xl-wt103'] urllib.request.urlretrieve(vocab_url, basedir + '/vocab.buggy.bin') vocab = torch.load(basedir + '/vocab.buggy.bin') vocab['counter'] = collections.Counter({fix(k): v for k, v in vocab['counter'].items()}) vocab['sym2idx'] = {fix(k): v for k, v in vocab['sym2idx'].items()} vocab['idx2sym'] = [fix(k) for k in vocab['idx2sym']] torch.save(vocab, good_vocab_path) if __name__ == "__main__": main() ```<|||||>Maybe indeed. Do you want to submit a PR to fix this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
270
closed
Transformer-XL: hidden states are nan
Hi, I followed the code in the Transformer-XL section: ```python import torch from pytorch_pretrained_bert import TransfoXLTokenizer, TransfoXLModel, TransfoXLLMHeadModel # Load pre-trained model tokenizer (vocabulary from wikitext 103) tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103') # Tokenized input text_1 = "Who was Jim Henson ?" text_2 = "Jim Henson was a puppeteer" tokenized_text_1 = tokenizer.tokenize(text_1) tokenized_text_2 = tokenizer.tokenize(text_2) # Convert token to vocabulary indices indexed_tokens_1 = tokenizer.convert_tokens_to_ids(tokenized_text_1) indexed_tokens_2 = tokenizer.convert_tokens_to_ids(tokenized_text_2) # Convert inputs to PyTorch tensors tokens_tensor_1 = torch.tensor([indexed_tokens_1]) tokens_tensor_2 = torch.tensor([indexed_tokens_2]) model = TransfoXLModel.from_pretrained('transfo-xl-wt103') model.eval() # If you have a GPU, put everything on cuda tokens_tensor_1 = tokens_tensor_1 tokens_tensor_2 = tokens_tensor_2 with torch.no_grad(): # Predict hidden states features for each layer hidden_states_1, mems_1 = model(tokens_tensor_1) # We can re-use the memory cells in a subsequent call to attend a longer context hidden_states_2, mems_2 = model(tokens_tensor_2, mems=mems_1) print(hidden_states_1) print(hidden_states_2) ``` (One modification: I'm running this on CPU). The hidden states of both sentences are: ```bash tensor([[[nan, nan, nan, ..., nan, nan, nan], [nan, nan, nan, ..., nan, nan, nan], [nan, nan, nan, ..., nan, nan, nan], [nan, nan, nan, ..., nan, nan, nan], [nan, nan, nan, ..., nan, nan, nan]]]) ``` Is this expected? I wanted to get the embeddings of the two sentences 🤔 Tested with PyTorch 1.0.1 and Python 3.7.
02-12-2019 21:50:32
02-12-2019 21:50:32
This is only Transformer-XL related. For GPT the output is: ```python tensor([[[ 0.1963, 0.0367, -0.2051, ..., 0.7062, -0.2786, 0.1352], [-0.4705, 0.1581, 0.0452, ..., 0.7809, -0.2519, 0.4257], [-0.2602, -0.7126, -0.7966, ..., 0.6364, -0.1560, -0.6084], ..., [-0.3665, 1.2743, -2.4027, ..., -1.7271, -1.7892, 0.7689], [-1.3017, 2.7999, -2.8868, ..., -1.3412, 0.2787, -0.0605], [ 0.2648, 0.3508, 0.2894, ..., -0.7471, 0.1855, -0.0492]]]) ```<|||||>Same situation, on GPU. <|||||>Indeed, there was a bug in the loading of the `TransfoXLModel` from the S3 dump (which is a converted `TransfoXLLMHeadModel`) so the weights were not loaded. You can see that the weights are not loaded if you activate the logger before loading the model: ```python import logging logging.basicConfig(level=logging.INFO) ``` I've fixed it in release 0.5.1. I've also fixed another issue you (@stefan-it) mentions in https://github.com/zalandoresearch/flair/issues/68 which is the dependency of `OpenAIGPTTokenizer` on SpaCy and ftfy by adding a fallback on BERT's `BasicTokenizer` (should be fine for normal usage, SpaCy+ftfy were included to exactly reproduce the paper's pre-processing steps).<|||||>Publishing 0.5.1 as soon as all the tests are checked.<|||||>Ok 0.5.1 is published: https://github.com/huggingface/pytorch-pretrained-BERT/releases/tag/v0.5.1
transformers
269
closed
Get hidden states from all layers of Transformer-XL?
Hi, Thank you for supporting the pretrained Transformer-XL model! I was wondering if it makes sense to get hidden states from all layers of Transformer-XL as the output, just as what can be done for BERT. It seems this is not supported currently. Practically I found this strategy worked well for BERT and gave better results. Not sure if it is a good idea for Transformer-XL. Thank you!
02-12-2019 16:25:18
02-12-2019 16:25:18
Hi @hugochan, actually that what's in the `mems` of the Transformer-XL are (maybe you can read again the paper). One thing to be careful about is that the `mems` have transposed first dimensions and are longer (see the readme). Here is how to extract the hidden states from the model output: ```python hidden_states, mems = model(tokens_tensor) seq_length = hidden_states.size(1) lower_hidden_states = list(t[-seq_length:, ...].transpose(0, 1) for t in mems) all_hidden_states = lower_hidden_states + [hidden_states] ```<|||||>Hi @thomwolf , thank you for your answer! Just one quick question. It seems that `mems` already contains a list of num_layer hidden states, what is the difference between `lower_hidden_states[-1]` and `hidden_states` in your code? Thank you!<|||||>Actually `mems` contains all the hidden states PLUS the output of the embeddings (`lower_hidden_states[0]`) so `lower_hidden_states[-1]` is the output of the hidden state of the layer below the last layer and `hidden_states` is the output of the last layer (before the softmax). I will add a note on that in the readme.
transformers
268
closed
fixed a minor bug in README.md
Assertion failed if one followed the instructions in README.md->Usage->BERT. https://github.com/huggingface/pytorch-pretrained-BERT/issues/266#issuecomment-462730151
02-12-2019 12:05:01
02-12-2019 12:05:01
Thanks @wangxiaodiu
transformers
267
closed
Missing files for Transformer-XL examples
Hi, thanks so much for the new *0.5.0* release. I wanted to train a `TransfoXLModel` model, as described in the `README` [here](https://github.com/huggingface/pytorch-pretrained-BERT/blame/master/README.md#L132). Unfortunately, the files `transfo_xl_train.py` and `transfo_xl_eval.py` are not located in the `examples` directory. Could you please add them to repository? Thanks :heart:
02-12-2019 09:15:46
02-12-2019 09:15:46
Oh yes, that was a typo, there is only one example for Transformer-XL and it's the `run_transfo_xl.py` file detailed [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/README.md#openai-gpt-and-transformer-xl-running-the-examples). Read the API details in the readme for more information on the input/outputs of the two Transformer-XL models. I've updated the readme, thanks.
transformers
266
closed
Tokenization Incorrect
The tokenizer is not working correctly for me for e.g. [CLS] is gettig broken '[' , 'cl ', '##s', ']' In [1]: import torch ...: from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMas ...: kedLM ...: ...: # Load pre-trained model tokenizer (vocabulary) ...: tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex. In [2]: text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP ...: ]" ...: tokenized_text = tokenizer.tokenize(text) In [3]: tokenized_text Out[3]: ['[', 'cl', '##s', ']', 'who', 'was', 'jim', 'henson', '?', '[', 'sep', ']', 'jim', 'henson', 'was', 'a', 'puppet', '##eer', '[', 'sep', ']']
02-12-2019 06:30:57
02-12-2019 06:30:57
Though we are not facing the same issue…… I followed the instruction from the readme, the `tokenized_text` is expected by assertion to be: `['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]']`. However, the actual `tokenized_text` is: `['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[MASK]', 'jim', 'henson', 'was', 'a', 'puppet', '##eer', '[SEP]']`. I believe this is because of `masked_index = 6` in the example code from the readme. If you let it to `masked_index = 8`, everything is perfect. I opened a PR to fix this minor bug.<|||||>I think the tokenizer issue has been resolved in the latest version (0.5.0). <|||||>Yes!<|||||>I just encountered the same issue as @dhirajmadan1 with `transformers==2.2.1`. Is this expected somehow? I am following the quickstart guide: https://huggingface.co/transformers/quickstart.html ``` tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') # Run an example text through this: text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" tokenized_text = tokenizer.tokenize(text) masked_index = 8 tokenized_text[masked_index] = '[MASK]' predicted_tokenized_sentence = ['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]'] ```<|||||>What is your issue @yenicelik? Do you mind opening a new issue with your problem?<|||||>ah, my apologies: https://github.com/huggingface/transformers/issues/2047 apparently a PR is on the way!
transformers
265
closed
Variance Sources
Hi, when I change the `--seed` argument, I get a high variance between different runs on my dataset. So I was wondering where the sources of variance might come from. I see that the seed is set (e.g. in `run_squad.py`) via: `random.seed(args.seed) np.random.seed(args.seed) torch.manual_seed(args.seed)` But how can I find out where randomization is actually used? I found the `RandomSampler` and replaced it with a `SequentialSampler`, but the variance remains high. I know that `modeling.py` randomly initializes the weights, but these are overwritten by the fixed weights when loading a pre-trained BERT model, e.g. `bert-base-uncased`, correct? Can anyone point me in any other direction where my source of variance might come from? Thanks!
02-11-2019 13:01:38
02-11-2019 13:01:38
Hi Carolin, Depending on the model you are using, not all the weights are initialized from the pre-trained models. Check the details in the [overview](https://github.com/huggingface/pytorch-pretrained-BERT#overview) section of the readme to see if it's the case for you. Apart from weights initialization and dataset shuffling other typical source of variances are the dropout layers. Bert fine-tuning has been reported to be a high-variance process indeed, in particular on small datasets.<|||||>Hi Thomas, thanks for the quick reply! I'm using `BertForMaskedLM`, so the weights should be set. But yes, I didn't think of dropout, thanks for pointing that out!
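A hedged sketch of the two points above: with `model.eval()` the dropout layers are disabled and repeated forward passes are identical, while in train mode they re-sample and introduce variance even with identical weights. Loading the tokenizer and model requires downloading the pretrained weights; the input sentence is arbitrary.

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

torch.manual_seed(42)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()  # switch to model.train() to see dropout-induced variance

tokens = ['[CLS]'] + tokenizer.tokenize("a short test sentence") + ['[SEP]']
ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    out1, _ = model(ids, output_all_encoded_layers=False)
    out2, _ = model(ids, output_all_encoded_layers=False)
print(torch.equal(out1, out2))  # True in eval mode, generally False in train mode
```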
transformers
264
closed
RuntimeError: cuda runtime error while running run_classifier.py with 'bert-large-uncased' bert model
02-11-2019 11:05:22
02-11-2019 11:05:22
reduce batch size?<|||||>It is 32 as of now . What do you think I should reduce it to ?<|||||>Start very low and increase while looking at `nvidia-smi` or a similar GPU memory visualization tool.<|||||>Closing this for now, feel free to re-open if you have other issues.
transformers
263
closed
potential bug in extract_features.py
Hi, `token_type_ids` is not set for this line: `all_encoder_layers, _ = model(input_ids, token_type_ids=None, attention_mask=input_mask)` https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/extract_features.py#L267, this does not affect single sequence feature extraction, but for a pair of sequence, the model will process the pair as a single sequence and add `A` embedding to the two sequences, which should add `A`, `B` respectively. Seems like a bug. Best, Jie
02-08-2019 22:11:49
02-08-2019 22:11:49
Hi Jie, `extract_feature.py` is an example script. If you want to adapt it for sentences-pair, we would be happy to welcome a PR :)
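For reference, a hedged sketch of what the sentence-pair case could look like (an illustration, not the example script itself): build segment ids with 0 for sequence A and 1 for sequence B, and pass them as `token_type_ids` instead of `None`.

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

tokens_a = tokenizer.tokenize("How old are you ?")
tokens_b = tokenizer.tokenize("I am six .")
tokens = ['[CLS]'] + tokens_a + ['[SEP]'] + tokens_b + ['[SEP]']
segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)

input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
segment_ids = torch.tensor([segment_ids])
with torch.no_grad():
    all_encoder_layers, _ = model(input_ids, token_type_ids=segment_ids)
```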
transformers
262
closed
speed becomes slow
Hi, I'm trying to fine-tune Bert-base-uncased model for Squad v1.1 on microsoft azure. And I'm experiencing slow speed as the training continues. [logs from first epoch] Iteration: 3%|▎ | 452/14774 [01:46<57:06, 4.18it/s] Iteration: 3%|▎ | 453/14774 [01:46<57:07, 4.18it/s] Iteration: 3%|▎ | 454/14774 [01:47<57:02, 4.18it/s] Iteration: 3%|▎ | 455/14774 [01:47<57:14, 4.17it/s] Iteration: 3%|▎ | 456/14774 [01:47<57:12, 4.17it/s] Iteration: 3%|▎ | 457/14774 [01:47<57:23, 4.16it/s] Iteration: 3%|▎ | 458/14774 [01:48<57:26, 4.15it/s] [logs from 2nd epoch] Iteration: 29%|██▉ | 4313/14774 [3:51:45<10:33:14, 3.63s/it] Iteration: 29%|██▉ | 4314/14774 [3:51:49<10:31:50, 3.62s/it] Iteration: 29%|██▉ | 4315/14774 [3:51:52<10:31:40, 3.62s/it] Iteration: 29%|██▉ | 4316/14774 [3:51:56<10:28:11, 3.60s/it] Iteration: 29%|██▉ | 4317/14774 [3:51:59<10:29:19, 3.61s/it] Iteration: 29%|██▉ | 4318/14774 [3:52:03<10:27:00, 3.60s/it] I have seen you were also using Microsoft Azure, and I wonder if you could help me to figure out what was wrong with my setting. [Azure Cluster configuration] VM size : STANDARD_NC6S_V3 (single Tesla V100) Operating system : Canonical UbuntuServer 16.04-LTS (latest) Auto scale : true Target number of nodes : 1 (Min: 0, Max: 50) [json file used to submit the job] "containerSettings": { "imageSourceRegistry": { "image": "pytorch/pytorch:latest" } }, "jobPreparation": { "commandLine": "conda install python==3.7 && pip install requests boto3 tqdm" }, And I have used the same setting in the repo except tran_batch_size=6. Thanks!
02-07-2019 19:02:12
02-07-2019 19:02:12
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I know this is closed, but I'm running into a similar issue. For the first ~100 batches, the script runs with an OK speed (1s/it, batch size 64, 512 tokens, 8x GTX 1080Ti) (specific for `DistilBertForSequenceClassification` in my case). After that, the speed drops significantly, to about 10s/it, with the GPUs mostly sitting idle. (0% on `gpustat`. Any idea on what could be causing this?<|||||>Update: Looks like I was accumulating the gradient for too long.<|||||>> Update: Looks like I was accumulating the gradient for too long. @ArthurCamara So, how did you fix the problem in your case? Did you change the `gradient_accumulation_steps` parameter of `Trainer`? What were the initial value and the value which helped to resolve the problem? And how did you understand that the problem was really in this?<|||||>I have the same problem. I noticed that the training speed slows down as GPU temperature goes up... When the temperature goes down (if I wait after terminating the process), the speed becomes okay again. This issue happens only when I use `Trainer`. When I don't use it (i.e. use PyTorch utilities directly), the training speed is stable and the temperature doesn't go up. @taesikna Did you fix your issue?<|||||>> > Update: Looks like I was accumulating the gradient for too long. > > @ArthurCamara So, how did you fix the problem in your case? Did you change the `gradient_accumulation_steps` parameter of `Trainer`? What were the initial value and the value which helped to resolve the problem? > And how did you understand that the problem was really in this? I think the initial setting was 5 or something. I dropped to 1 and it was fine then. <|||||>> > > Update: Looks like I was accumulating the gradient for too long. > > > > @ArthurCamara So, how did you fix the problem in your case? Did you change the `gradient_accumulation_steps` parameter of `Trainer`? What were the initial value and the value which helped to resolve the problem? > > And how did you understand that the problem was really in this? > > I think the initial setting was 5 or something. I dropped to 1 and it was fine then. As I see now the default value is 1, but I still observe the slow down. @ArthurCamara Do you have any idea on that?<|||||>> > > > Update: Looks like I was accumulating the gradient for too long. > > > > > > > > > @ArthurCamara So, how did you fix the problem in your case? Did you change the `gradient_accumulation_steps` parameter of `Trainer`? What were the initial value and the value which helped to resolve the problem? > > > And how did you understand that the problem was really in this? > > > > > > I think the initial setting was 5 or something. I dropped to 1 and it was fine then. > > As I see now the default value is 1, but I still observe the slow down. @ArthurCamara Do you have any idea on that? I was not using the Trainer, but my own training loop. dropping the accumulation steps to 1 helped because it was overwhelming the GPUs memory and that makes the GPUs sit idly. If the GPUs on `nvidia-smi` are idle, but their memory is full, it's probably something related to memory usage. Otherwise, no idea. <|||||> > facing this same issue too, on 2080. will not use Trainer. ``` en ignored: tokens, ner_tags, id. 
[INFO|trainer.py:1156] 2021-06-05 16:46:31,375 >> ***** Running training ***** [INFO|trainer.py:1157] 2021-06-05 16:46:31,386 >> Num examples = 2021 [INFO|trainer.py:1158] 2021-06-05 16:46:31,397 >> Num Epochs = 1 [INFO|trainer.py:1159] 2021-06-05 16:46:31,407 >> Instantaneous batch size per device = 10 [INFO|trainer.py:1160] 2021-06-05 16:46:31,418 >> Total train batch size (w. parallel, distributed & accumulation) = 10 [INFO|trainer.py:1161] 2021-06-05 16:46:31,428 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1162] 2021-06-05 16:46:31,439 >> Total optimization steps = 203 1%|█▍ | 2/203 [00:12<16:46, 5.01s/it 1%|██ | 3/203 [00:20<19:53, 5.97s/it 2%|██▊ | 4/203 [00:37<30:55, 9.33s/i 2%|███▍ | 5/203 [00:50<34:44, 10.53s/ 3%|████▏ | 6/203 [01:01<34:07, 10.39s 3%|████▊ | 7/203 [01:02<25:16, 7.74s 4%|█████▌ | 8/203 [01:16<31:19, 9.64 4%|██████▎ | 9/203 [01:30<34:47, 10.7 5%|██████▉ | 10/203 [01:43<37:30, 11.6 5%|███████▌ | 11/203 [01:56<38:22, 11. 6%|████████▎ | 12/203 [02:09<38:37, 12 6%|████████▉ | 13/203 [02:23<40:23, 12 7%|█████████▋ | 14/203 [02:33<37:51, 1 7%|██████████▎ | 15/203 [02:44<36:31, 8%|███████████ | 16/203 [02:55<35:39, 8%|███████████▋ | 17/203 [03:07<36:25, 9%|████████████▍ | 18/203 [03:20<37:19 9%|█████████████ | 19/203 [03:34<38:25 10%|█████████████▊ | 20/203 [03:47<38:4 10%|██████████████▍ | 21/203 [04:00<38: 11%|███████████████▏ | 22/203 [04:13<39 11%|███████████████▊ | 23/203 [04:27<39 :42, 13.24s/it] ```
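A self-contained sketch of gradient accumulation with a fixed horizon, where a toy `nn.Linear` stands in for the real model and `accum_steps` is a hypothetical value: gradients are stepped and zeroed every `accum_steps` micro-batches, which avoids the "accumulating for too long" mistake discussed above.

```python
import torch
from torch import nn

model = nn.Linear(10, 2)                       # stand-in for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(8)]

accum_steps = 2                                # hypothetical value
optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    loss = nn.functional.cross_entropy(model(x), y)
    (loss / accum_steps).backward()            # scale so the accumulated gradient averages
    if (step + 1) % accum_steps == 0:          # step + zero every accum_steps micro-batches
        optimizer.step()
        optimizer.zero_grad()
```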
transformers
261
closed
removing unused argument eval_batch_size from LM finetuning #256
Removing unused eval_batch_size argument for simplification. As requested in #256.
02-07-2019 09:08:39
02-07-2019 09:08:39
Nice!
transformers
260
closed
pretrained model(s) in onnx format
Hi, would you assist/help in order to export/convert at least one model into the ONNX format? https://onnx.ai Kind regards
02-07-2019 00:55:12
02-07-2019 00:55:12
Hi @WilliamTambellini, have you tried to follow the standard ONNX procedure for converting a PyTorch model? The model in this repo are just regular PyTorch models.<|||||>Hello Thomas, I ve not yet tried, just seen : https://github.com/onnx/models/issues/130 https://stackoverflow.com/questions/54220042/how-do-you-generate-an-onnx-representation-of-a-pytorch-bert-pretrained-neural-n Will try, tks. <|||||>Hi, when I try to export a TokenClassification model to a ONNX model, I encounter `RuntimeError: ONNX export failed: Couldn't export operator aten::erf`, does that mean some part of BERT model layers not supported by ONNX? I think that problem comes from the definition of GELU function, which is `x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))`. Should I try to use other way to calculate this function or wait for ONNX to support this opertator? <|||||>@geekboood update your pytorch version to latest and the problem will most likely go away.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>for anyone who is looking for the answer: torch=1.1.0 python=3.6 `torch.onnx.export(model, (input_ids, segment_ids, input_mask), "bert.onnx", verbose=False)` works well for me<|||||>> for anyone who is looking for the answer: > torch=1.1.0 > python=3.6 > > `torch.onnx.export(model, (input_ids, segment_ids, input_mask), "bert.onnx", verbose=False)` > > works well for me Hi, thanks for the answer. Do you get good results when using the exported model for inference in another framework? I exported a BertForQuestionAnswering model to ONNX without errors, but I'm getting wrong predictions when using onnxruntime or a second export to TF Serving and I can't figure out why!<|||||>Not sure if this is still an issue for you but in the BertForSequenceClassification model the parameters are in a different order `torch.onnx.export(model, (input_ids, input_mask, segment_ids), "bert.onnx", verbose=False)` works as intended<|||||>@chessgecko wow you're right, thanks! working now<|||||>cc @mfuntowicz :)
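A hedged sketch of the export suggested at the end of this thread, using dummy inputs: the tuple passed to `torch.onnx.export` must follow the positional order of the model class's `forward()` (for `transformers`' `BertForSequenceClassification` that is `input_ids, attention_mask, token_type_ids`). The shapes and output file name are illustrative.

```python
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

batch, seq_len = 1, 32
input_ids = torch.zeros(batch, seq_len, dtype=torch.long)
attention_mask = torch.ones(batch, seq_len, dtype=torch.long)
token_type_ids = torch.zeros(batch, seq_len, dtype=torch.long)

# dummy inputs in the same positional order as forward(input_ids, attention_mask, token_type_ids)
torch.onnx.export(model,
                  (input_ids, attention_mask, token_type_ids),
                  "bert.onnx",
                  verbose=False)
```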
transformers
259
closed
please add option to load fine-tuned file to CPU if trained on GPU
I fine-tuned the pytorch_model.bin on a GPU machine (google cloud) but need to use it on my home computer (no GPU). When I tried to open it using `model = BertForMaskedLM.from_pretrained(bert_version)` I got the following error: RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU. Perhaps you can add an option into `from_pretrained()` such as `cpu=True` which will then call `torch.load(weights_path, map_location=lambda storage, location: 'cpu')`
02-06-2019 23:16:04
02-06-2019 23:16:04
Indeed, this will be in the next release, thanks!<|||||>is there is a change according the CPU issue ?<|||||>Should be fixed now. Do you still have an error?<|||||>i am loading the model and i dont know how to load on CPU , gives me "model.to" to is not defined. Can you tell me how to send the model to run on CPU if trained on GPU.<|||||>Would love some explanation on how to do this as well!<|||||>I have the same question. how to load a model trained on GPU to CPU?<|||||>I am watching this as well.<|||||>I have the same question. Is there any option for using CPU-only?<|||||>watching this as well<|||||>same
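A minimal sketch of what this enables, assuming the fine-tuned checkpoint was saved as `pytorch_model.bin` and the model was fine-tuned from `bert-base-uncased`: map the storages to CPU when loading, then hand the state dict to `from_pretrained`.

```python
import torch
from pytorch_pretrained_bert import BertForMaskedLM

state_dict = torch.load("pytorch_model.bin", map_location="cpu")
model = BertForMaskedLM.from_pretrained("bert-base-uncased", state_dict=state_dict)
model.eval()
```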
transformers
258
closed
Fix the undefined variable in squad example
`train_dataset` is undefined
02-06-2019 18:36:53
02-06-2019 18:36:53
Thanks @BoeingX !
transformers
257
closed
Minor redundancy in model defintion?
this is a _major_ nitpick but it was a bit confusing at first: https://github.com/huggingface/pytorch-pretrained-BERT/blob/822915142b2f201c0b01acd7cffe1b05994d2d82/pytorch_pretrained_bert/modeling.py#L206-L212 L212 can simply be replaced by `self.all_head_size = config.hidden_size` as you already error out if the result of division isn't a whole number.
02-06-2019 12:44:46
02-06-2019 12:44:46
Yes, feel free to submit a PR. Otherwise, I'll fix it in the next release.
transformers
256
closed
does run_lm_finetuning.py actually use --eval_batch_size?
I'm looking through this code (thanks so much for writing it, btw) and I'm not seeing whether it actually uses eval_batch_size at all. If it doesn't, is it still performing an evaluation step to assess goodness of fit?
02-06-2019 01:57:28
02-06-2019 01:57:28
No, there's no evaluation step in the example script yet. What I can recommend is using your downstream task for evaluation of the pretrained BERT. Alternatively, you could of course also add some evaluation of the LM / nextSentence loss on a validation set.<|||||>Perhaps for clarity then, that parameter should be taken out of the script? <|||||>Sure, makes sense. I created a PR. Thanks for pointing this out @bsugerman .<|||||>Fixed in master now.
transformers
255
closed
Error while using Apex
Hi, I am trying to do mixed precision training, but I have countered a problem that seems to be related to the LayerNorm implementation of Apex. I have the following error msg while running the example (same error for my other code). ` Traceback (most recent call last): File "run_lm_finetuning.py", line 648, in <module> main() File "run_lm_finetuning.py", line 529, in main model = BertForPreTraining.from_pretrained(args.bert_model) File "/home/chenyang/anaconda3/envs/pytorch10/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 506, in from_pretrained model = cls(config, *inputs, **kwargs) File "/home/chenyang/anaconda3/envs/pytorch10/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 689, in __init__ self.bert = BertModel(config) File "/home/chenyang/anaconda3/envs/pytorch10/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 600, in __init__ self.embeddings = BertEmbeddings(config) File "/home/chenyang/anaconda3/envs/pytorch10/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 183, in __init__ self.LayerNorm = BertLayerNorm(config.hidden_size, eps=1e-12) File "/home/chenyang/anaconda3/envs/pytorch10/lib/python3.6/site-packages/apex-0.1-py3.6.egg/apex/normalization/fused_layer_norm.py", line 126, in __init__ File "/home/chenyang/anaconda3/envs/pytorch10/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked ModuleNotFoundError: No module named 'fused_layer_norm_cuda' ` I am wondering if it is related to the version of Apex, so may I know which Apex checkpoint you used.
02-05-2019 23:49:01
02-05-2019 23:49:01
Hi @chenyangh, You need to install `apex` with the C++ and CUDA extensions: ```bash git clone https://github.com/NVIDIA/apex.git cd apex python setup.py install --cuda_ext --cpp_ext ```<|||||>@thomwolf Thanks!<|||||>@thomwolf After doing what you wrote, I got this error. torch.__version__ = 1.0.1.post2 Traceback (most recent call last): File "setup.py", line 60, in <module> raise RuntimeError("--cuda_ext was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.") RuntimeError: --cuda_ext was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc. What else should I do for this?<|||||>You should refer to apex installation instructions. Apex has slightly changed since my comment so best is to go read NVIDIA's README and installation instructions here: https://github.com/NVIDIA/apex<|||||>@kbulutozler You can change pytorch docker image version to pytorch/pytorch:1.3-cuda10.1-cudnn7-devel
transformers
254
closed
Python 2
Make the package compatible with python 2.7+
02-05-2019 23:08:41
02-05-2019 23:08:41
transformers
253
closed
Merge pull request #1 from huggingface/master
updating the repo
02-05-2019 21:22:07
02-05-2019 21:22:07
transformers
252
closed
BERT tuning all parameters?
Just a clarification question: when tuning BERT parameters (say, for SQuAD), does it tune only the parameters of the final layer or the whole BERT model?
02-05-2019 18:39:37
02-05-2019 18:39:37
In the examples scripts, it tunes the whole model. But BERT models classes are just regular PyTorch `nn.Modules` so you can also freeze layer like you would do in any PyTorch module.
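A hedged sketch of the freezing option mentioned above (the example scripts themselves tune everything): since the BERT classes are plain `nn.Module`s, you can turn off gradients for the encoder and leave only the task head trainable.

```python
from pytorch_pretrained_bert import BertForQuestionAnswering

model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
for param in model.bert.parameters():          # freeze the BERT encoder
    param.requires_grad = False

# only the final span-prediction layer (model.qa_outputs) stays trainable
trainable = [p for p in model.parameters() if p.requires_grad]
```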
transformers
251
closed
Only keep the active part mof the loss for token classification
If the attention mask is not None, then we want to restrict the loss to the items that are not padding (here assumed to be those with attention_mask = 1). This is important when doing e.g. NER.
02-04-2019 16:48:04
02-04-2019 16:48:04
Thanks Thibault!
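A standalone sketch of the "active loss" idea in this PR, with made-up shapes and label count: only positions where `attention_mask == 1` contribute to the token-classification loss.

```python
import torch
from torch.nn import CrossEntropyLoss

num_labels = 5                                    # hypothetical label count
logits = torch.randn(2, 8, num_labels)            # (batch, seq_len, num_labels)
labels = torch.randint(0, num_labels, (2, 8))
attention_mask = torch.tensor([[1] * 6 + [0] * 2,
                               [1] * 4 + [0] * 4])

active = attention_mask.view(-1) == 1             # keep only non-padding positions
active_logits = logits.view(-1, num_labels)[active]
active_labels = labels.view(-1)[active]
loss = CrossEntropyLoss()(active_logits, active_labels)
```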
transformers
250
closed
Fix squad answer start and end position
The previous version might miss some overlong answer start and end indices (which should be 0), and sometimes the start/end positions would be outside the model inputs. doc_start and doc_end are based on the tokenized subword sequences, while example.start_position and example.end_position are word-level indices in the original text, which are usually shorter than the subword sequences. Why not use tok_start_position and tok_end_position for the comparison?
02-03-2019 13:06:05
02-03-2019 13:06:05
Thanks @cooelf, there was a PR merging `run_squad` and `run_squad2` that also fixed this issue.
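A hedged sketch of the comparison this PR argues for: check the subword-level answer span (`tok_start_position`, `tok_end_position`) against the current doc span and fall back to (0, 0) when the answer is not inside it. The helper and its arguments are illustrative, not the merged code.

```python
def answer_positions(tok_start_position, tok_end_position, doc_start, doc_end, doc_offset):
    """Map subword-level answer indices into the current doc span, or (0, 0) if outside."""
    out_of_span = not (doc_start <= tok_start_position and tok_end_position <= doc_end)
    if out_of_span:
        return 0, 0                                # this span does not contain the answer
    return (tok_start_position - doc_start + doc_offset,
            tok_end_position - doc_start + doc_offset)

# doc_offset is typically len(query_tokens) + 2, accounting for [CLS] and [SEP]
print(answer_positions(50, 53, doc_start=10, doc_end=40, doc_offset=12))  # (0, 0)
print(answer_positions(20, 22, doc_start=10, doc_end=40, doc_offset=12))  # (22, 24)
```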
transformers
249
closed
Fine tuning Bert for Question answering
I just wanted to ask whether the weights of the BERT base model are updated while fine-tuning BERT for question answering. I see that BERT for QA is a model with a linear layer on top of the pre-trained BERT model. I am trying to reproduce the same model in Keras. Could anyone tell me if I should freeze the layers in the BERT base model or not?
02-03-2019 06:08:52
02-03-2019 06:08:52
The weights of the BERT base model are getting updated while finetuning the network.<|||||>Indeed
transformers
248
closed
fix prediction on run-squad.py example
run_squad.py exits with an error when running do_predict without training. The error is due to the model_state_dict not existing when --do_predict is selected.
Traceback (most recent call last):
  File "run_squad.py", line 980, in <module>
    main()
  File "run_squad.py", line 923, in main
    model_state_dict = torch.load(output_model_file)
  File "/home/joe/pytorchnlp/lib/python3.5/site-packages/torch/serialization.py", line 365, in load
    f = open(f, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '../../BERT_work/Squad/pytorch_model.bin'
02-01-2019 18:25:24
02-01-2019 18:25:24
Thanks @JoeDumoulin
transformers
247
closed
Multilabel classification and diverging loss
Hi, I'm not sure I'm posting this at the right spot, but I am trying to use your excellent implementation to do some multi-label classification on some text. I basically adapted the run_classifier.py code to a Jupyter Notebook and changed the BERT sequence classification model a little so it can handle multi-label classification. However, my loss tends to diverge and my outputs are either all ones or all zeros. The label distribution in my train dataset is: `array([ 65, 564, 108, 17, 40, 26, 306, 195, 25, 345, 54, 80, 214])`, i.e. label 1 is used 65 times, label 2 is used 564 times, etc. Each sample has between 1 and 4 labels. I am using the Adam optimizer with BCEWithLogitsLoss, and I am unable to figure out where the problem comes from. Should I add some weights to my loss function? Am I using it in the right way? Is my model wrong somewhere? I attach a notebook of my test to this post. Maybe someone encountered the same problem before and could help me? [NOTEBOOK](https://nbviewer.jupyter.org/github/nicolas-mingione/nlp/blob/master/test_bert.ipynb) Thanks!
02-01-2019 15:48:37
02-01-2019 15:48:37
Hey, I am working on something similar. I feel like the original code might be incorrect. They seem to directly take the output of the model as the 'loss' without applying any criterion. But I might be totally wrong. <|||||>Hey! :) Really? What do you mean by criterion? I tried to artificially change my dataset so that the target outputs are [1, 0, 0, ..., 0] for every sample. I wanted to see whether the model was able to learn this dummy case. However, it fails and predicts the exact opposite, i.e. [0, 1, 1, ..., 1]. That's why I think I must be doing something wrong somewhere. <|||||>Hi @nicolas-mng @zhipeng-fan Did you guys manage to get a multilabel problem to work? Could you please share a gist?<|||||>Hi, you should have a look here: https://medium.com/huggingface/multi-label-text-classification-using-bert-the-mighty-transformer-69714fa3fb3d ;)
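For anyone landing here, a minimal sketch of the kind of multi-label head being discussed (the class and its names are illustrative, not part of the library):
```python
import torch
from torch import nn
from pytorch_pretrained_bert import BertModel

class BertForMultiLabelClassification(nn.Module):
    """Sketch: BERT pooled output followed by a linear layer with one logit per label."""
    def __init__(self, num_labels, bert_model='bert-base-uncased'):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_model)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None):
        _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask,
                                     output_all_encoded_layers=False)
        logits = self.classifier(self.dropout(pooled_output))
        if labels is not None:
            # labels: float tensor of shape (batch, num_labels) with 0/1 entries;
            # BCEWithLogitsLoss(pos_weight=...) can help with imbalanced labels.
            return nn.BCEWithLogitsLoss()(logits, labels.float())
        return torch.sigmoid(logits)  # per-label probabilities, threshold e.g. at 0.5
```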
transformers
246
closed
Accurate SQuAD answer start and end position
The previous version might miss some overlong answer start and end indices (which should be 0), and sometimes the start/end positions would fall outside the model inputs. doc_start and doc_end are based on the tokenized subword sequence, while example.start_position and example.end_position are word-level indices in the original text, which is usually shorter than the subword sequence.
02-01-2019 07:19:39
02-01-2019 07:19:39
transformers
245
closed
can you do a new release + pypi
we've been getting some requests to incorporate newer features into allennlp that are only on master (e.g. `never_split`). thanks!
01-31-2019 19:15:10
01-31-2019 19:15:10
Hi Joel, yes the new release (0.5.0) is pretty much ready (remaining work on branches `fifth-release` and `transfo-xl` to finish testing the newly added pre-trained OpenAI GPT and Transformer-XL). Likely next week.<|||||>Awesome, thanks!<|||||>Ok @joelgrus, the new release is out: https://github.com/huggingface/pytorch-pretrained-BERT/releases/tag/v0.5.0
transformers
244
closed
Avoid confusion of inplace LM masking
Fix confusion of LM masking that happens inplace. Discussed in https://github.com/huggingface/pytorch-pretrained-BERT/issues/243 and https://github.com/huggingface/pytorch-pretrained-BERT/issues/226
01-31-2019 10:45:58
01-31-2019 10:45:58
Great, thanks @tholor!
transformers
243
closed
seems there is a bug in fine tuning language model
For the masked language model, the input should be the masked tokens. But in examples/run_lm_finetuning.py the input does not appear to be masked. In the method convert_example_to_features, is it supposed to use the masked output as the token input?
```python
tokens = []
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in tokens_a:  # is tokens_a supposed to be t1_random?
    tokens.append(token)
    segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)

assert len(tokens_b) > 0
for token in tokens_b:
    tokens.append(token)
    segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)

input_ids = tokenizer.convert_tokens_to_ids(tokens)
```
01-31-2019 09:37:11
01-31-2019 09:37:11
I believe what you are referring to is not a bug, but indeed a confusing part of the code that we should probably change to avoid future confusion. `tokens_a` gets masked **in place** by the method `random_word`; `tokens_a` and `t1_random` indeed refer to the same object. You can also see that the input got masked by checking the logs of the first examples:
```
Iteration:   0%|          | 0/196 [00:00<?, ?it/s]01/31/2019 11:31:35 - INFO - __main__ - *** Example ***
01/31/2019 11:31:35 - INFO - __main__ - guid: 0
01/31/2019 11:31:35 - INFO - __main__ - tokens: [CLS] [MASK] to 95 % of the [MASK] ##y [MASK] ' s [MASK] [SEP] le ##que ##ux ( 2005 : [MASK] ) . [SEP]
01/31/2019 11:31:35 - INFO - __main__ - input_ids: 101 103 1106 4573 110 1104 1103 103 1183 103 112 188 103 102 5837 3530 5025 113 1478 131 103 114 119 102
01/31/2019 11:31:35 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
01/31/2019 11:31:35 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
01/31/2019 11:31:35 - INFO - __main__ - LM label: [-1, 1146, -1, -1, -1, -1, -1, 6831, -1, 1236, -1, -1, 14296, -1, -1, -1, -1, -1, -1, -1, 125, -1, -1, -1]
01/31/2019 11:31:35 - INFO - __main__ - Is next sentence label: 0
01/31/2019 11:31:37 - INFO - __main__ - *** Example ***
01/31/2019 11:31:37 - INFO - __main__ - guid: 1
01/31/2019 11:31:37 - INFO - __main__ - tokens: [CLS] a car [MASK] [MASK] 200 mill ##ig ##rams . [SEP] the rain had only ceased with the gray [MASK] of morning [MASK] [SEP]
01/31/2019 11:31:37 - INFO - __main__ - input_ids: 101 170 1610 103 103 2363 6159 6512 24818 119 102 1103 4458 1125 1178 6445 1114 1103 5021 103 1104 2106 103 102
01/31/2019 11:31:37 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
01/31/2019 11:31:37 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1
01/31/2019 11:31:37 - INFO - __main__ - LM label: [-1, -1, -1, 2980, 1110, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 24177, -1, -1, 1120, -1]
01/31/2019 11:31:37 - INFO - __main__ - Is next sentence label: 1
```
Please see the discussion in https://github.com/huggingface/pytorch-pretrained-BERT/issues/226<|||||>Ah, ok, I see, thanks~<|||||>Added a PR to simplify this.<|||||>Btw, would the in-place mask pollute the training data in the 'on_memory' mode?<|||||>You mean if the original training sentence stored in `train_dataset.all_docs` gets somehow modified (= masked)?! => No, this object is not touched by the LM masking / padding / cutting
transformers
242
closed
Fix argparse type error
Resolved the following error on executing `run_squad2.py --help` ```TypeError: %o format: an integer is required, not dict```
01-30-2019 20:11:10
01-30-2019 20:11:10
Thanks Surya!
transformers
241
closed
Tokenization doesn't seem to match BERT paper
In the [BERT paper](https://arxiv.org/abs/1810.04805) section 4.3 ("Named Entity Recognition") there is an example of some tokenized text:
```python
['Jim', 'Hen', '##son', 'was', 'a', 'puppet', '##eer']
```
However, when I take that sentence and try to tokenize it myself with `BertTokenizer` from this repo
```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
text = "Jim Henson was a puppeteer"
tokenizer.tokenize(text) == ['Jim', 'He', '##nson', 'was', 'a', 'puppet', '##eer']
```
same thing happens if I pre-tokenize and just use `BertTokenizer.wordpiece_tokenizer.tokenize()`
```python
from itertools import chain
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
text = "Jim Henson was a puppeteer".split()
list(chain.from_iterable([tokenizer.wordpiece_tokenizer.tokenize(token) for token in text])) == ['Jim', 'He', '##nson', 'was', 'a', 'puppet', '##eer']
```
Is there something I am misunderstanding / doing wrong or is this an actual bug? The BERT paper and this repo tokenize `"Henson"` as `['Hen', '##son']` and `['He', '##nson']` respectively.
01-30-2019 18:33:15
01-30-2019 18:33:15
WordPiece tokenization depends on the particular BERT model: in general, one model, say bert-base-cased, will produce a different tokenization than another, say bert-large-uncased. If you try all models, one or more might produce the tokenization shown in the example in the paper. It might also happen that none of them does, in which case the example was probably produced with an unpublished model. A bug would be if the same model led to different tokenizations in the PyTorch and TensorFlow versions.<|||||>Hmm, I see. I didn't know that, thanks for pointing it out! For what it's worth, `bert-base-multilingual-cased` is the only model (from those currently listed in the readme of this repo) that produces the tokenization shown in the example in the paper.
transformers
240
closed
Minor update in README
Updated links to classes in `modeling.py`
01-30-2019 18:20:29
01-30-2019 18:20:29
Thanks Girishkumar!
transformers
239
closed
cannot load BERTAdam when restoring from BioBert
I am trying to convert the recently released BioBert checkpoint: https://github.com/naver/biobert-pretrained The conversion script loads the checkpoint, but appears to balk at BERTAdam when building the PyTorch model.
```
...
Building PyTorch model from configuration: {
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "type_vocab_size": 2,
  "vocab_size": 28996
}
Initialize PyTorch weight ['bert', 'embeddings', 'LayerNorm', 'beta']
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/venvs/dev3.6/lib/python3.6/site-packages/pytorch_pretrained_bert/__main__.py", line 19, in <module>
    convert_tf_checkpoint_to_pytorch(TF_CHECKPOINT, TF_CONFIG, PYTORCH_DUMP_OUTPUT)
  File "/venvs/dev3.6/lib/python3.6/site-packages/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 69, in convert_tf_checkpoint_to_pytorch
    pointer = getattr(pointer, l[0])
AttributeError: 'Parameter' object has no attribute 'BERTAdam'
```
01-30-2019 16:21:09
01-30-2019 16:21:09
I see. This is because they didn't use the same names for the Adam optimizer variables as the Google team. I'll see if I can find a simple way around this for future cases. In the meantime, you can install `pytorch-pretrained-bert` from master (`git clone ...` and `pip install -e .`) and add the names of these variables (`BERTAdam`) to the blacklist on line 53 of the conversion script: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py#L53<|||||>Hmm, loading BioBERT's parameters works for me. Maybe as a future feature we could have the option to load BioBERT parameters in the package? I ran it like this:
```
convert_tf_checkpoint_to_pytorch("AI/data/biobert/biobert_model.ckpt.index", "AI/data/biobert/bert_config.json", "AI/data/biobert/pytorch_model.bin")
```
It also loads afterwards.<|||||>This can help. https://github.com/MeRajat/SolvingAlmostAnythingWithBert/blob/ner_medical/convert_to_pytorch_wt.ipynb <|||||>After I convert the TensorFlow checkpoint to a PyTorch model by excluding some variables as mentioned by @MeRajat, I get the following warnings when I try to load the model.
> 02/21/2019 17:33:06 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.bias', 'qa_outputs.weight']
> 02/21/2019 17:33:06 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']
<|||||>This is normal. Closing the issue now.
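If another checkpoint trips on differently named optimizer variables, one way to see what needs to be added to the skip list is to inspect the checkpoint directly; a small sketch (the path is illustrative):
```python
import tensorflow as tf

# List the variables stored in the checkpoint so that optimizer state
# (e.g. anything containing "adam" / "BERTAdam") can be identified and skipped.
for name, shape in tf.train.list_variables("biobert_model.ckpt"):
    if "adam" in name.lower():
        print("skip:", name, shape)
    else:
        print("keep:", name, shape)
```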
transformers
238
closed
padded positions are ignored when embedding position ids
When embedding position ids, all positions are considered.
```
seq_length = input_ids.size(1)
position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
```
This is different from most transformer implementations. Should it be
```
position_ids = np.array([
    [pos_i + 1 if w_i != PAD else 0 for pos_i, w_i in enumerate(seq)]
    for seq in batch_seq])
```
01-30-2019 06:43:13
01-30-2019 06:43:13
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
237
closed
How can I change vocab size for pretrained model?
Is there a way to change (expand) the vocab size of a pretrained model? When I feed a new token id to the model, it returns:
```
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   1108         with torch.no_grad():
   1109             torch.embedding_renorm_(weight, input, max_norm, norm_type)
-> 1110     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
   1111
   1112
RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorMath.cpp:352
```
01-30-2019 05:10:37
01-30-2019 05:10:37
Hi, If you want to modify the vocabulary, you should refer to this part of the original repo `README` https://github.com/google-research/bert#learning-a-new-wordpiece-vocabulary<|||||>If you don't want a complete new vocabulary (which would require training from scratch), but extend the pretrained one with a couple of domain specific tokens, this comment from Jacob Devlin might help: > [...] if you want to add more vocab you can either: (a) Just replace the "[unusedX]" tokens with your vocabulary. Since these were not used they are effectively randomly initialized. (b) Append it to the end of the vocab, and write a script which generates a new checkpoint that is identical to the pre-trained checkpoint, but but with a bigger vocab where the new embeddings are randomly initialized (for initialized we used tf.truncated_normal_initializer(stddev=0.02)). This will likely require mucking around with some tf.concat() and tf.assign() calls. (https://github.com/google-research/bert/issues/9) I am currently experimenting with approach a). Since there are 993 unused tokens this might already help for the most important tokens in your domain.<|||||>@tholor and @rodgzilla answers are the way to go. Closing this issue since there no activity. Feel free to re-open if needed.<|||||>> If you don't want a complete new vocabulary (which would require training from scratch), but extend the pretrained one with a couple of domain specific tokens, this comment from Jacob Devlin might help: > > > [...] if you want to add more vocab you can either: > > (a) Just replace the "[unusedX]" tokens with your vocabulary. Since these were not used they are effectively randomly initialized. > > (b) Append it to the end of the vocab, and write a script which generates a new checkpoint that is identical to the pre-trained checkpoint, but but with a bigger vocab where the new embeddings are randomly initialized (for initialized we used tf.truncated_normal_initializer(stddev=0.02)). This will likely require mucking around with some tf.concat() and tf.assign() calls. > > ([google-research/bert#9](https://github.com/google-research/bert/issues/9)) > > I am currently experimenting with approach a). Since there are 993 unused tokens this might already help for the most important tokens in your domain. @tholor I have exactly the same situation as you had. I'm wondering If you can tell me how your experiment with approach (a) went. Did it improve the accuracy. I really appreciate if you can share your conclusion.<|||||>> @tholor and @rodgzilla answers are the way to go. > Closing this issue since there no activity. > Feel free to re-open if needed. Hi @thomwolf , for implementing models like VideoBERT we need to append thousands of entries to the word embedding lookup table. How could we do so in Pytorch/any such examples using the library?<|||||>@tholor Can you guide me on how you are counting 993 unused tokens? I see only first 100 places of unused tokens?<|||||>For those finding this on the web, I found the following answer helpful: https://github.com/huggingface/transformers/issues/1413#issuecomment-538083512
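A hand-rolled sketch of approach (b) for the old `pytorch-pretrained-bert` API (the number of new tokens is illustrative; if you also fine-tune an LM head with tied decoder weights, that matrix needs resizing too, and the new tokens must be appended to vocab.txt so the tokenizer maps them to these ids):
```python
import torch
from pytorch_pretrained_bert import BertModel

model = BertModel.from_pretrained('bert-base-uncased')
old_emb = model.embeddings.word_embeddings            # nn.Embedding(vocab_size, hidden_size)
num_new_tokens = 10                                    # illustrative

new_rows = torch.empty(num_new_tokens, old_emb.weight.size(1))
new_rows.normal_(mean=0.0, std=0.02)                   # same init scale mentioned in the quote above

new_emb = torch.nn.Embedding(old_emb.num_embeddings + num_new_tokens, old_emb.weight.size(1))
new_emb.weight.data[:old_emb.num_embeddings] = old_emb.weight.data  # keep pretrained rows
new_emb.weight.data[old_emb.num_embeddings:] = new_rows             # random init for new tokens
model.embeddings.word_embeddings = new_emb
```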
transformers
236
closed
Preprocessing necessary for lengthier text
Hi, I tried to train a SQuAD model on a different dataset where I have lengthier questions/contexts. It gave a memory error: `CUDA out of memory. Tried to allocate 4.50 MiB (GPU 5; 11.78 GiB total capacity;` This error seems to happen in PyTorch when there are lengthier data points (PyTorch reports how much it tried to allocate, as opposed to the normal CUDA out-of-memory error). The TensorFlow code for BERT doesn't give this error as it trims the question/text lengths. I think you should include that in this package as well.
01-29-2019 15:56:30
01-29-2019 15:56:30
Ok. I see you included this (max_sent_length, max_query_length). I will debug my error. You can probably close this issue.
transformers
235
closed
Training BERT behind a proxy server
When I try to run BERT training, I get the following error during the vocabulary download: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /models.huggingface.co/bert/bert-large-uncased-vocab.txt (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fbb55377828>: Failed to establish a new connection: [Errno 110] Connection timed out')) I am running the script behind a proxy server which I suspect is the cause of this error. Is there any way to remedy this?
01-29-2019 13:47:29
01-29-2019 13:47:29
You can download the Tensorflow weights from Google's BERT repo and convert them as detailed in the readme of the present repo.<|||||>Hi tomwolf, I am new to XLNet and have the same issue as above. Could you direct me to the readme of the issue? I am not able to find it. I modified my code to resolve the issue to **model_file_address = '/myfolder/XLNetWork/xlnet-base-cased-config.json'** But I get the below error on the line : **model = XLNetForSequenceClassification.from_pretrained(model_file_address,num_labels=len(tag2idx))** Error: **UnpicklingError: invalid load key, '{'.** I am pretty much stucked. Your help will be appreciated. Thanks, Saul <|||||>@SaulML Did you solve it? I got the same error, i.e. `UnpicklingError: invalid load key, '{'.`, when I tried to load pretrained bert using `model = BertForSequenceClassification.from_pretrained("/PathToBert/uncased_L-12_H-768_A-12/bert_config.json", num_labels=2)`, thanks in advance!<|||||>You can now supply a `proxies` argument to `from_pretrained` when you are using proxies. Check the doc and docstrings.<|||||>Got the same error as @SaulML and @iamxpy. Has anyone solved it?<|||||>Got the same unpickling error. Was it solved?<|||||>I think you need to just have the path as the directory rather than the config file.
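Putting the two fixes mentioned in this thread into a sketch, assuming a recent version of the library where `from_pretrained` accepts a `proxies` argument (proxy addresses and paths are placeholders; the local directory needs to contain the config and the PyTorch weights file):
```python
from transformers import BertForSequenceClassification, BertTokenizer

# Behind a proxy: pass a requests-style proxies dict to from_pretrained.
proxies = {"http": "http://10.10.1.10:3128", "https": "http://10.10.1.10:1080"}  # placeholders
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", proxies=proxies)

# Fully offline: point from_pretrained at the *directory* holding config + weights,
# not at the .json config file itself (passing the .json triggers the UnpicklingError above).
model = BertForSequenceClassification.from_pretrained("/path/to/local_bert_model/")
tokenizer = BertTokenizer.from_pretrained("/path/to/local_bert_model/")
```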
transformers
234
closed
Fine tuning for evaluation
Hi! 1) Please help me figure out: what would be the optimal batch size, performance-wise, for evaluating a nextSentencePrediction model? Is it the same as used during pre-training (128)? 2) If I am building a high-performance evaluation backend on CUDA, would it be a good idea to use several threads with a BERT model in each, or is it better to use one thread with proper batching?
01-29-2019 13:36:53
01-29-2019 13:36:53
1. For evaluation I would advise the maximum batch size that your GPU allows. You will be able to use it more efficiently this way. 2. I think you will be better off using a single thread.<|||||>Thanks! How can I figure out the optimal batch size? I want to try a Tesla K80.<|||||>You increase it gradually and when the program crashes, it is too big ^^.<|||||>Thanks!<|||||>Guys, sorry to reopen this issue, but it might be helpful and on the topic of evaluation. I want to load a batch of data into the model for evaluation. The batch has 16 sentences of different lengths. Code:
```
tokens_tensor = torch.tensor(indexed_tokens)
segments_tensors = torch.tensor(segments_ids)
predictions = model(tokens_tensor, segments_tensors)
```
indexed_tokens is an array of size 16 of arrays of inputs. I got the error ValueError: expected sequence of length 121 at dim 1 (got 23). When I create a tensor from a single element, tokens_tensor = torch.tensor([indexed_tokens[0]]), it works. What am I doing wrong? Thanks!<|||||>Could you create a minimal program that reproduces your problem (with the code you are using to generate `indexed_tokens`)?<|||||>1. The tensor input array should have the same length for all rows. My sentences had various lengths, which is why PyTorch raised the exception. 2. If you add zeros to the end of the input arrays to make all rows equal, evaluation is slower than one sentence at a time. Batching did not improve speed for me.<|||||>Hi @Alexadar, you have to batch your examples and pad them indeed. No other way I'm afraid.<|||||>Sorry, I missed your request for an example. Yes, padding is the only way to batch. It was slower than processing sentences one by one when I tested on GPU.
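For anyone batching variable-length inputs, a minimal padding sketch (the sentence pairs are made up; the attention mask tells the model which positions are padding):
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')
model.eval()

pairs = [("How old are you?", "I am six years old."),
         ("The sky is blue.", "Bananas are yellow.")]

examples = []
for sent_a, sent_b in pairs:
    tok_a, tok_b = tokenizer.tokenize(sent_a), tokenizer.tokenize(sent_b)
    tokens = ['[CLS]'] + tok_a + ['[SEP]'] + tok_b + ['[SEP]']
    segs = [0] * (len(tok_a) + 2) + [1] * (len(tok_b) + 1)
    examples.append((tokenizer.convert_tokens_to_ids(tokens), segs))

max_len = max(len(ids) for ids, _ in examples)
input_ids, segment_ids, input_mask = [], [], []
for ids, segs in examples:
    pad = max_len - len(ids)
    input_ids.append(ids + [0] * pad)
    segment_ids.append(segs + [0] * pad)
    input_mask.append([1] * len(ids) + [0] * pad)

with torch.no_grad():
    logits = model(torch.tensor(input_ids), torch.tensor(segment_ids), torch.tensor(input_mask))
print(logits)  # one (is-next / not-next) score pair per example
```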
transformers
233
closed
What does get_lr() mean in optimizer.py?
I use a model based on BertModel, and when I use BertAdam the learning rate isn't changing. When I call `get_lr()`, the returned result is `[0]`. I also see that the length of the state isn't 0, so why do I get that?
01-28-2019 13:19:06
01-28-2019 13:19:06
You can use it to get the current learning rate of the `BertAdam` optimizer (which varies according to the schedules discussed in #195).
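A small sketch of how `get_lr()` is typically used (a toy linear layer stands in for a BERT-based model; the warmup schedule is only applied when `warmup` and `t_total` are set):
```python
import torch
from torch import nn
from pytorch_pretrained_bert import BertAdam

model = nn.Linear(10, 2)                      # stand-in for a BERT-based model
optimizer = BertAdam(model.parameters(), lr=5e-5,
                     warmup=0.1,              # fraction of t_total used for linear warmup
                     t_total=1000)            # total number of optimization steps

for step in range(5):
    loss = model(torch.randn(4, 10)).sum()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(step, optimizer.get_lr()[0])        # scheduled learning rate, ramping up during warmup
```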
transformers
231
closed
Why is the output bias computed separately?
Hi ! Sorry if this is a dumb question, but I don't understand why is the bias [added separately](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L379) to the decoder weights instead of using `self.decoder = nn.Linear(num_features, num_tokens, bias=True)`? Isn't it equivalent?
01-27-2019 17:22:32
01-27-2019 17:22:32
The code section you linked follows the original TensorFlow code: https://github.com/google-research/bert/blob/f39e881b169b9d53bea03d2d341b31707a6c052b/run_pretraining.py#L257<|||||>Exactly.
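To make the equivalence explicit, a toy sketch of a decoder whose weight is the (tied) input embedding matrix plus a separately declared output bias:
```python
import torch
from torch import nn
import torch.nn.functional as F

hidden_size, vocab_size = 768, 28996
embedding = nn.Embedding(vocab_size, hidden_size)     # input embedding matrix (tied weights)
output_bias = nn.Parameter(torch.zeros(vocab_size))   # the separately declared bias

hidden_states = torch.randn(2, 5, hidden_size)        # toy decoder input
# Equivalent to nn.Linear(hidden_size, vocab_size, bias=True) whose weight is the embedding matrix:
logits = F.linear(hidden_states, embedding.weight, output_bias)
print(logits.shape)  # torch.Size([2, 5, 28996])
```
Functionally this matches a `Linear` layer with `bias=True`; declaring the bias separately simply keeps the tied embedding weight distinct from an output bias that is not shared with the input embeddings.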
transformers
230
closed
Cleaning `~/.pytorch_pretrained_bert`
What is inside `~/.pytorch_pretrained_bert`? Is it just the downloaded pre-trained model weights? Is it safe to remove this directory?
01-26-2019 23:18:42
01-26-2019 23:18:42
This folder contains the pretrained model weights as they have been trained by google and the vocabulary files for the tokenizer. I would not remove it unless you are really tight on disk space, in this case I guess you could only keep the `.json` files with the vocabulary and load your finetuned model.<|||||>Yes it contains the weights, configuration and vocabulary files. You can remove it if you want. In that case the weights will be downloaded again the next time you initiate a BertModel.
transformers
229
closed
Is BERT suitable for seq2seq tasks, such as machine translation?
If true, is there an example?
01-26-2019 11:01:42
01-26-2019 11:01:42
It is, check the nice recent work of Guillaume Lample and Alexis Conneau: https://arxiv.org/abs/1901.07291
transformers
228
closed
Freezing base transformer weights
As I understand, say if I'm doing a classification task, then the transformer weights, along with the top classification layer weights, are both trainable (i.e. `requires_grad=True`), correct? If so, is there a way to freeze the transformer weights, but only train the top layer? Is that a good idea in general when I have a small dataset?
01-26-2019 09:09:36
01-26-2019 09:09:36
Hi! You can modify the trainable attributes as described in #95.<|||||>Thanks!
transformers
227
closed
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'
Here is the complete error message: ``` Traceback (most recent call last): File "app/set_expantion_eval.py", line 118, in <module> map_n=flags.map_n) File "app/set_expantion_eval.py", line 62, in Eval expansionWithScores = BE.set_expansion_tensorized(seeds, ["1"]) File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/app/bert_expansion.py", line 109, in set_expansion_tensorized gold_repr_list.append(self.extract_representation(" ".join(seed), x, dim)) File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/app/bert_expansion.py", line 317, in extract_representation output_all_encoded_layers=output_all_encoded_layers) File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 626, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 193, in forward words_embeddings = self.word_embeddings(input_ids) File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 118, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/torch/nn/functional.py", line 1454, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index' ``` Here is a summary of what I do in my code: ```python self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") self.model = BertModel.from_pretrained(bert_model) # loading the model self.model.to(self.device) # without this there is no error, but it runs in CPU (instead of GPU). self.model.eval() # declaring to the system that we're only doing 'forward' calculations # creating the input tensors here ... # move the tensors to the target device input_ids_tensor.to(self.device) segment_ids_tensor.to(self.device) input_mask_tensor.to(self.device) output_all_encoded_layers.to(self.device) encoded_layers, _ = self.model(input_ids_tensor, segment_ids_tensor, input_mask_tensor, output_all_encoded_layers=output_all_encoded_layers) ``` When I don't have `model.to(device) ` the code works fine, but I think it only uses CPU only. When I add it, it fails with the above error. 
I did a little investigation and printed the inputs to `.model(.)` to see if they are properly copied to `device`: ``` print("\n * input_ids_tensor \n ") print(input_ids_tensor) print(input_ids_tensor.device) print("\n * segment_ids_tensor \n ") print(segment_ids_tensor) print(segment_ids_tensor.device) print("\n * input_mask_tensor \n ") print(input_mask_tensor) print(input_mask_tensor.device) print("\n * self.device \n ") print(self.device) ``` which outputs: ``` * input_ids_tensor tensor([[ 101, 5334, 2148, 1035, 3792, 3146, 102, 5334, 102, 0, 0], [ 101, 5334, 2148, 1035, 3792, 3146, 102, 2148, 1035, 3792, 102], [ 101, 5334, 2148, 1035, 3792, 3146, 102, 3146, 102, 0, 0]]) cpu * segment_ids_tensor tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0]]) cpu * input_mask_tensor tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]) cpu * self.device cuda:0 ``` As it can be seen, the tensors are still `cpu`, even after running `.to(device)`. Any thoughts where things are going wrong?
01-25-2019 14:45:34
01-25-2019 14:45:34
You failed to move the tensors to GPU. Replace your code with this: ``` input_ids_tensor = input_ids_tensor.to(self.device) segment_ids_tensor = segment_ids_tensor.to(self.device) input_mask_tensor = input_mask_tensor.to(self.device) ```<|||||>Ah I didn't realize they don't work in-place (unlike the syntax for model files `model.to(device)`). <|||||>thank you!<|||||>Great thanks!<|||||>thanks!<|||||>use model.to(device) as well
transformers
226
closed
Logical error in the run_lm_finetuning?
Hi, @thomwolf @nhatchan @tholor @deepset-ai Many thanks for amazing work with this repository =) I maybe grossly wrong or just missed some line of the code somewhere, but it seems to me that there is a glaring issue in the overall logic of `examples/run_lm_finetuning.py` - I guess you never pre-trained the model till convergence from scratch, right? _________________________________________ **Context** I have already been able to fit the model to the Russian version of the SQUAD dataset from scratch (so-called **SberSQUAD** from sdsj 2017), and I was able to obtain **~40% EM w/o any pre-training**. Afaik, ~60% EM is about the top result on this dataset, achieved using BiDAF, so the model worksm which is good =). Anyway this was a sanity check for me to see that the model is sound, obviously to **achieve good results you need to pre-train first** (afaik the authors of the BERT paper did not even post any results w/o pre-training, right?). So now I am planning to pre-train BERT for the Russian language with various pre-processing ideas: - BPE (like in the original); - Embedding bag (works well for "difficult" languages) + ; _________________________________________ **The Problem** First of all let's quote the paper ``` In order to train a deep bidirectional representation, we take a straightforward approach of masking some percentage of the input tokens at random, and then predicting only those masked tokens. We refer to this procedure as a “masked LM” (MLM), although it is often referred to as a Cloze task in the literature (Taylor, 1953). In this case, the fi- nal hidden vectors corresponding to the mask tokens are fed into an output softmax over the vo- cabulary, as in a standard LM. In all of our exper- iments, we mask 15% of all WordPiece tokens in each sequence at random. In contrast to denoising auto-encoders (Vincent et al., 2008), we only pre- dict the masked words rather than reconstructing the entire input. ``` So as far as I can see: - We mask / alter some of the input (afaik the masking scheme [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L276) is correct) and make the model correct our "mistakes". It only makes sense - we break the input, and the model corrects it; - But if you look [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L142), [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L331-L334) and [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L371) - it seems to me that in the code: - Just padded / processed tokens are passed as input; - The lm targets are the "messed up" tokens; So, the training is kind of reversed. The correct sequence is passed, but the incorrect sequence is the target. Anyway - I may just have missed some line of code, that changes everything. I am just trying to understand the model properly, because I need to do a total rewrite of the pre-processing, because in my domain usage of embedding bags proved to be more beneficial than BPE. Many thanks!
01-25-2019 11:51:02
01-25-2019 11:51:02
Hi, @snakers4, I think this part is correct. The input comes from `tokens` and `input_ids` in line 371, some of which are _already_ masked/altered, and the LM targets are `lm_label_ids`, which contain the original tokens. Note that `random_word`, called in line 331 and 332, masks the words in `tokens_a` and `tokens_b` _in-place_; `t1_random` and `tokens_a` refer the same object actually. If you are trying to pre-train a model from scratch and having slow convergence issue, see discussions in #202. <|||||>> The input comes from tokens and input_ids in line 371, some of which are already masked/altered, and the LM targets are lm_label_ids, which contain the original tokens. Ah, you are right, I see it [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L354-L371), sorry. I totally missed the in-place part. This [bit](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L301) explains why `lm_label_ids` are the original tokens. This was a bit counter-intuitive. Anyway, thanks for the explanations, now everything is clear.
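For completeness, a small sketch of the masking scheme described in the quoted paragraph, which is what `random_word` applies in place (15% of positions; of those, 80% become `[MASK]`, 10% a random token, 10% stay unchanged); the function name and signature are illustrative:
```python
import random

def mask_tokens(tokens, tokenizer, mask_prob=0.15):
    """Return (masked_tokens, lm_labels); labels are -1 for positions that are not predicted."""
    output = list(tokens)
    labels = [-1] * len(tokens)
    for i, token in enumerate(tokens):
        if random.random() < mask_prob:
            labels[i] = tokenizer.vocab[token]                    # predict the original token id
            dice = random.random()
            if dice < 0.8:
                output[i] = '[MASK]'                              # 80%: replace with [MASK]
            elif dice < 0.9:
                output[i] = random.choice(list(tokenizer.vocab))  # 10%: random token (inefficient but clear)
            # else 10%: keep the original token
    return output, labels
```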
transformers
225
closed
max sentence length
Is there a max sentence length for this BERT code?
01-24-2019 03:16:17
01-24-2019 03:16:17
Hi, 512 tokens if you use the pre-trained models. Any length you want if you train your models from scratch.<|||||>Could we set it smaller? Because if I set it to 512, the result is out of memory.<|||||>You can just send a smaller input to the model, no need to go to the max.<|||||>thank you @thomwolf
transformers
224
closed
how to add new vocabulary?
For a specific task, I need to add new vocabulary to the tokenizer. Re-training those vocabulary entries is fine with me :) Is it possible to add new vocabulary to the tokenizer?
01-24-2019 02:42:38
01-24-2019 02:42:38
Hi @hahmyg, please refer to the relevant section in the original implementation repository: https://github.com/google-research/bert#learning-a-new-wordpiece-vocabulary.
transformers
223
closed
Feat/9
01-24-2019 02:22:07
01-24-2019 02:22:07
transformers
222
closed
ConnectionError returned if Internet network is not stable
Hi, although I have downloaded the BERT pretrained model, a "ConnectionError" is returned if my Internet connection is not very stable. The function `file_utils.cached_path` needs a stable connection. Is there any way to avoid checking amazonaws before loading the BERT embeddings?
01-24-2019 02:01:16
01-24-2019 02:01:16
I'm guessing you're using some of the classes defined in modeling.py, such as one of the Bert "pretrained models" (e.g. any of the models that inherit from `PreTrainedBertModel`)? On construction, each of these classes takes a `config` argument, where `config` is a BertConfig object (also defined in modeling.py). The BertConfig can either be created from a model at one of the links in `PRETRAINED_MODEL_ARCHIVE_MAP` or from a config file stored in a local directory. You just have to set the `pretrained_model_name` to a local directory containing a bert_config.json file and a pytorch_model.bin file rather than one of 'bert-base-uncased', 'bert-large-uncased' etc. Setting `pretrained_model_name` to one of the latter options will try to pull from the Amazon AWS repositories. So if you're running the run_classification.py script, you would set the 'bert-model' flag to the directory with your downloaded bert model if you don't want it to pull from AWS. One thing is if you've downloaded one of the original Google Bert models, you'll need to convert tf checkpoints to pytorch bin files. There's a script for this in the repository. You shouldn't need to worry about this if you've downloaded one of the models at the links in `PRETRAINED_MODEL_ARCHIVE_MAP` (defined at the top of modeling.py) TLDR: Set `--bert-model` to the directory with your downloaded Bert model Does that make any sense?<|||||>Yes! Set local directory in modeling.py and tokenization.py can solve my problem. Thank you so much!<|||||>Thanks @cmeister747 !
transformers
221
closed
Using BERT with custom QA dataset
Hi, I want to use BERT to train a QA model on a custom SQuAD-like dataset. Ideally, I would like to leverage the learning from the SQuAD dataset, and add fine-tuning on my custom dataset, which has specific vocabulary. What is the best way to do this?
01-23-2019 10:26:54
01-23-2019 10:26:54
I think that you should start by pretraining a BERT model on SQuAD to give it a sense on how to perform question answering and then try finetuning it to your task. This may already give you good results, if it doesn't you might have to dig a bit deeper in the model. I don't really know how adding your domain specific tokens to the vocabulary would interact with the tokenizer.<|||||>One nice recent example is "A BERT Baseline for the Natural Questions" by Chris Alberti, Kenton Lee and Michael Collins from Google Research: http://arxiv.org/abs/1901.08634<|||||>This might help other dev who want to use BERT for custom QA: https://github.com/cdqa-suite/cdQA
transformers
220
closed
Questions Answering Example
Hello folks! Can you provide a simple example of how to use PyTorch BERT with a pretrained model for question answering?
01-23-2019 08:19:56
01-23-2019 08:19:56
Hi! You can check this file that implements question answering on the SQuAD dataset: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py<|||||>Thank you very much!<|||||>How can we use pre trained BertForQuestionAnswering model? I have looked into BertForNextSentencePrediction and output of model makes sense given the input vector, but unable to find any good example on BertForQuestionAnswering.<|||||>Have you tried looking at the official [documentation](https://huggingface.co/pytorch-transformers/model_doc/bert.html#bertforquestionanswering) that provides a simple example for each model?<|||||>Hi @LysandreJik, the official example was not clear to me. I understood the part of encoding. But I am looking for something like, I will give a question and a paragraph which would contain the answer, and I need the model to predict the answer span. But in the example they have done it with a single sentence, which is quite confusing!<|||||>Hey @LysandreJik, Sorry my bad, didn't look at run_squad.py, it has been changed a lot since I saw it first during which BERT was only released! It is so good to see everything being integrated at a single place! Thanks for the great work you guys! ❤️ <|||||>@Arjunsankarlal Glad you could get what you were looking for!<|||||>@Arjunsankarlal @LysandreJik can you guys help me with the example. I got an error when I ran the example given in the [documentation](https://huggingface.co/pytorch-transformers/model_doc/bert.html#bertforquestionanswering) when encoding the sequence, that tokernizer doesn't have attribute "encode". So I updated the code as follows: `from pytorch_pretrained_bert import BertTokenizer, BertForQuestionAnswering import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForQuestionAnswering.from_pretrained('bert-base-uncased') tokenized_text = tokenizer.tokenize("Hello, my dog is cute") indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) input_ids = torch.tensor([indexed_tokens]) # Batch size 1 start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions) print(outputs)` this is the output tensor(1.7739, grad_fn=<DivBackward0>) I believe it's a loss but I don't understand the example as in how does it answer the question. Also there isn't any start and end span. Can you please explain the example. Much appreciated.<|||||>Hi @adilmukhtar82 , could you give a look at the [`run_squad.py` example](https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_squad.py), it shows how to use several models to do question answering. You should probably update your repository version to `pytorch-transformers` too, most of the examples on our documentation won't work with `pytorch_pretrained_bert`.<|||||>@LysandreJik Thanks I have updated the repository and example is working fine. I am confused about the example mentioned in [documentation](https://huggingface.co/pytorch-transformers/model_doc/bert.html#bertforquestionanswering) ("hello, my dog is cute") as to how does it do with single sentence and not paragraph along with it. 
<|||||>A bit late but here you go - ``` from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering import torch tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased',return_token_type_ids = True) model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad') context = "The US has passed the peak on new coronavirus cases, President Donald Trump said and predicted that some states would reopen this month.The US has over 637,000 confirmed Covid-19 cases and over 30,826 deaths, the highest for any country in the world." question = "What was President Donald Trump's prediction?" encoding = tokenizer.encode_plus(question, context) input_ids, attention_mask = encoding["input_ids"], encoding["attention_mask"] start_scores, end_scores = model(torch.tensor([input_ids]), attention_mask=torch.tensor([attention_mask])) ans_tokens = input_ids[torch.argmax(start_scores) : torch.argmax(end_scores)+1] answer_tokens = tokenizer.convert_ids_to_tokens(ans_tokens , skip_special_tokens=True) all_tokens = tokenizer.convert_ids_to_tokens(input_ids) print ("\nAnswer Tokens: ") print (answer_tokens) answer_tokens_to_string = tokenizer.convert_tokens_to_string(answer_tokens) print ("\nFinal Answer : ") print (answer_tokens_to_string) ``` Output is : Answer Tokens: ['some', 'states', 'would', 're', '##open', 'this', 'month'] Final Answer : some states would reopen this month <|||||>@ramsrigouthamg Hey, could you maybe also provide a tensorflow example?<|||||>Thanks @ramsrigouthamg ! @mariusjohan there are PyTorch and TensorFlow examples in the [usage](https://huggingface.co/transformers/usage.html#extractive-question-answering) section of the documentation.<|||||>@LysandreJik The link is now updated to https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py<|||||>@ramsrigouthamg @LysandreJik Examples mentioned by you is giving following error: ``` Traceback (most recent call last): File "run_predict.py", line 30, in <module> answer_start_scores TypeError: argmax(): argument 'input' (position 1) must be Tensor, not str ``` After doing investigation, I found that the following code is returning strings instead of integer indices: @ramsrigouthamg start_scores, end_scores = model(torch.tensor([input_ids]), attention_mask=torch.tensor([attention_mask])) @LysandreJik answer_start_scores, answer_end_scores = model(**inputs) Values returned: answer_start_scores = 'answer_start_scores' answer_end_scores = 'answer_end_scores ' start_scores = 'start_scores' end_scores = 'end_scores' I have fine tuned bert-en-base model on squad v1.1 and want to write prediction code. Can you please help?<|||||>@saurabhhssaurabh I found the solution. You just need to change `answer_start_scores, answer_end_scores = model(**inputs)` to either `answer_start_scores, answer_end_scores = model(**inputs).values()` or `answer_start_scores, answer_end_scores = model(**inputs, return_dicts=True)` I got it from here: https://stackoverflow.com/questions/64901831/huggingface-transformer-model-returns-string-instead-of-logits<|||||>Thank you, @JacobLoe
transformers
219
closed
How can I get the confidence score for the classification task
In the evaluation step, it seems it only shows the predicted label for each data instance. How can I get the confidence score for each class?
01-23-2019 07:21:51
01-23-2019 07:21:51
You can use `torch.nn.functional.softmax` on the `logits` that the model outputs here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/examples/run_classifier.py#L589-L591 It will give you the confidence score for each class.
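Concretely, a tiny sketch with a stand-in logits tensor (in the script, `logits` comes from the model output linked above):
```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)            # stand-in for the model output: (batch_size, num_labels)
probs = F.softmax(logits, dim=-1)     # rows sum to 1: per-class confidence scores
confidence, predicted = probs.max(dim=-1)
print(predicted, confidence)
```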
transformers
218
closed
Fix learning rate problems in run_classifier.py
- Don't do warmup twice (in BertAdam and manually)
- Compute num_train_steps correctly for the case where gradient_accumulation_steps > 1. The current version might lead to the LR never leaving the warmup phase, depending on the value of gradient_accumulation_steps.

With these changes I get > 84% accuracy on MRPC, without them it's around 77%.
01-22-2019 22:43:30
01-22-2019 22:43:30
Thanks @matej-svejda. So this problem was actually introduced by adding NVIDIA's fp16 optimizer (`FusedAdam`) to the examples. This optimizer is a simple Adam which doesn't incorporate a learning rate schedule so we had to add a manual learning rate schedule in the examples. So a better solution is to keep the `warmup_linear` function but to only modify the learning rates when the fp16 optimiser is used (i.e. updating the weights only if `args.fp16==True`). Also it would be great to update the other examples similarly. Do you want to do that in your PR? I can also do that if you don't have the time. <|||||>Sure, I can do that. Wanted to try out fp16 anyways :+1: <|||||>@thomwolf Something like this?<|||||>Thanks @matej-svejda, I was a bit late on this PR. I've made a small commit to make the notation more explicit (removed `t_total` which was mainly a duplicate of `num_train_steps` and renamed `num_train_steps` in a more explicit `num_train_optimization_steps`). Merging this now
transformers
217
closed
Loading fine_tuned BertModel fails due to prefix error
I am loading a pretrained BERT model with `BertModel.from_pretrained` as I feed the `pooled_output` representation directly to a loss without a head. After fine-tuning the model, I save it as in [`run_classifier.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L553). Afterwards, I want to load the fine-tuned model, again without a head, so I'm using `BertModel.from_pretrained` model again to initialize it, this time from the directory where the config and model files are stored. When trying to load the pretrained model, none of the weights are found and I get: ``` Weights of BertModel not initialized from pretrained model: ['bert.embeddings.word_embeddings.weight' , 'bert.embeddings.position_embeddings.weight', 'bert.embeddings.token_type_embeddings.weight', 'bert .embeddings.LayerNorm.weight', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.0.attention.self .query.weight', 'bert.encoder.layer.0.attention.self.query.bias', 'bert.encoder.layer.0.attention.self.key.weight', ...] ``` This seems to be due to [this line](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L543) in `modeling.py`. As `BertModel.from_pretrained` does not create a `bert` attribute (in contrast to the BertModels with a head), the `bert.` prefix is used erroneously instead of the `''` prefix, which causes the weights of the fine-tuned model not to be found. If I change this line to check additionally if we load a fine-tuned model, then this works: ``` load(model, prefix='' if hasattr(model, 'bert') or pretrained_model_name not in PRETRAINED_MODEL_ARCHIVE_MAP else 'bert.') ``` Does this make sense? Let me know if I'm using `BertModel.from_pretrained` in the wrong way or if I should be using a different model for fine-tuning if I just care about the `pooled_output` representation.
01-22-2019 21:55:45
01-22-2019 21:55:45
I think that you have find the problem but I'm not sure if your fix is the most appropriate way to deal with it. As this problem will only happen when we are loading a `BertModel` pretrained instance, maybe ``` load(model, prefix='' if hasattr(model, 'bert') or cls == BertModel else 'bert.') ``` would be more logical. Could you check if this change also fixes your problem?<|||||>The problem is that this only happens when we load a `BertModel` that was previously fine-tuned. If we load a pretrained `BertModel`, then the pretrained parameters don't have the `bert.` prefix, so we have to add it and it works. However, if we load the fine-tuned `BertModel`, then the parameters already have the `bert.` prefix, so we don't need to add it anymore. But this is not recognized at the moment. So the above change causes the loading of a pretrained `BertModel` to fail.<|||||>I tried to reproduce your problem to better understand but I'm getting some pretty strange results. I don't get any error but the weights do not load properly. Am I missing something obvious or is it more or less what you are doing? ```python import torch from pytorch_pretrained_bert import BertModel from pytorch_pretrained_bert.file_utils import PYTORCH_PRETRAINED_BERT_CACHE bert_model = 'bert-base-uncased' save_file = 'test_ruder/model.bin' model_base = BertModel.from_pretrained( bert_model, cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(-1) ) # Saving model_to_save = model_base.module if hasattr(model_base, 'module') else model_base torch.save(model_to_save.state_dict(), save_file) # Loading model_state_dict = torch.load(save_file) model_loaded = BertModel.from_pretrained( bert_model, state_dict = model_state_dict ) # Tests param_orig = list(model_base.parameters()) param_load = list(model_loaded.parameters()) print(len(param_orig) == len(param_load)) # True print(all(x.shape == y.shape for x, y in zip(param_orig, param_load))) # True for p_orig, p_load in zip(param_orig, param_load): print(torch.all(p_orig == p_load)) # prints tensor(0, dtype=torch.uint8) everytime ```<|||||>Thanks for adding this working example. Yep, that's the issue I'm facing. 
I've slightly amended it to load the model from the config and weights file in the archive instead: ```python import torch from pytorch_pretrained_bert import BertModel, modeling from pytorch_pretrained_bert.file_utils import PYTORCH_PRETRAINED_BERT_CACHE from pathlib import Path save_dir = Path('test_ruder') save_dir.mkdir(exist_ok=True) bert_model = 'bert-base-uncased' save_file = save_dir / modeling.WEIGHTS_NAME config_file = save_dir / modeling.CONFIG_NAME model_base = BertModel.from_pretrained( bert_model, cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(-1) ) # Saving model_to_save = model_base.module if hasattr(model_base, 'module') else model_base torch.save(model_to_save.state_dict(), save_file) with open(config_file, 'w') as f: f.write(model_base.config.to_json_string()) # Loading model_state_dict = torch.load(save_file) model_loaded = BertModel.from_pretrained(save_dir) # Tests param_orig = list(model_base.parameters()) param_load = list(model_loaded.parameters()) print(len(param_orig) == len(param_load)) # True print(all(x.shape == y.shape for x, y in zip(param_orig, param_load))) # True for p_orig, p_load in zip(param_orig, param_load): print(torch.all(p_orig == p_load)) # prints tensor(0, dtype=torch.uint8) everytime ```<|||||>I don't get the warnings that you are mentioning in your original post, the piece of code that I've created seems to fail for another reason. Could you please try to reproduce your original problem in a minimal piece of code?<|||||>That's the message printed by the logger [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L545). You need to enable logging first. You can also just print the same message instead.<|||||>Hi sebastian, indeed, the pretrained loading script is currently designed to load the weights from `BertForPreTraining ` models. I will fix that in the next release. We just have to slightly modify [the line you indicated](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L543) to check the keys of the state dictionary: ```python load(model, prefix='bert.' if not hasattr(model, 'bert') and any(s.startwith('bert.') in state_dict.keys()) else '') ``` In the meantime you can fix your problem by adding `bert.` to the keys of your state dictionary. In your example, you can either change the saving operation: ```python # Saving model_to_save = model_base.module if hasattr(model_base, 'module') else model_base to_save_dict = model_to_save.state_dict() to_save_with_prefix = {} for key, value in to_save_dict.items(): to_save_with_prefix['bert.' + key] = value torch.save(to_save_with_prefix, save_file) with open(config_file, 'w') as f: f.write(model_base.config.to_json_string()) ``` or the loading operation: ```python # Loading model_state_dict = torch.load(save_file) state_dict_with_prefix = {} for key, value in model_state_dict.items(): state_dict_with_prefix['bert.' + key] = value model_loaded = BertModel.from_pretrained(save_dir, state_dict=state_dict_with_prefix) ```<|||||>Actually Sebastian, since the model you save and the model you load are instances of the same `BertModel` class, you can also simply use the standard PyTorch serialization practice (we only have a special `from_pretrained` loading function to be able to load various type of models using the same pre-trained model stored on AWS). Just build a new `BertModel` using the configuration file you saved. 
Here is a snippet : ```python # Saving (same as you did) model_to_save = model_base.module if hasattr(model_base, 'module') else model_base torch.save(model_to_save.state_dict(), save_file) with open(config_file, 'w') as f: f.write(model_base.config.to_json_string()) # Loading (using standard PyTorch loading practice) config = BertConfig(config_file) model = BertModel(config) model.load_state_dict(torch.load(save_file)) ```<|||||>Thanks a lot for the comprehensive suggestions, @thomwolf. You're totally right that just loading it as normally in PyTorch is the most straightforward and simplest way. Your last suggestion works. Thanks! 👍 <|||||>Hi All, iam facing following issue while loading pretrained BERT Sequence model with my own data RuntimeError: Error(s) in loading state_dict for DataParallel: Missing key(s) in state_dict: "module.out.weight", "module.out.bias". Unexpected key(s) in state_dict: "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight", "bert.encoder.layer.0.attention.self.query.bias", "bert.encoder.layer.0.attention.self.key.weight", "bert.encoder.layer.0.attention.self.key.bias", "bert.encoder.layer.0.attention.self.value.weight", "bert.encoder.layer.0.attention.self.value.bias", "bert.encoder.layer.0.attention.output.dense.weight", "bert.encoder.layer.0.attention.output.dense.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.intermediate.dense.weight", "bert.encoder.layer.0.intermediate.dense.bias", "bert.encoder.layer.0.output.dense.weight", "bert.encoder.layer.0.output.dense.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.self.query.weight", "bert.encoder.layer.1.attention.self.query.bias", "bert.encoder.layer.1.attention.self.key.weight", "bert.encoder.layer.1.attention.self.key.bias", "bert.encoder.layer.1.attention.self.value.weight", "bert.encoder.layer.1.attention.self.value.bias", "bert.encoder.layer.1.attention.output.dense.weight", "bert.encoder.layer.1.attention.output.dense.bias", "bert.encoder.layer.1.attention.output.LayerNorm.... any idea about this error
transformers
216
closed
Training classifier does not work for more than two classes
I am trying to run a classifier on the AGN data which has four classes. I am using the following command to train and evaluate the classifier. python examples/run_classifier.py \ --task_name agn \ --do_train \ --do_eval \ --do_lower_case \ --data_dir $GLUE_DIR/AGN/ \ --bert_model bert-base-uncased \ --max_seq_length 128 \ --train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 2.0 \ --output_dir /tmp/agn_output/ I have created a task named agn similar to cola, mnli and others. The model is trained properly but during evaluation it throws the following error. ''' /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed. Traceback (most recent call last): File "examples/run_classifier.py", line 690, in <module> main() File "examples/run_classifier.py", line 663, in main logits = logits.detach().cpu().numpy() RuntimeError: CUDA error: device-side assert triggered ''' The reason for this issue is: The model is trained with output size of 4 (since four classes), but during testing the model has output size of 2 because the BertForSequenceClassification class has default value for num_labels as 2. So, if we change the following line in run_classifier.py model = BertForSequenceClassification.from_pretrained(args.bert_model, state_dict=model_state_dict) to model = BertForSequenceClassification.from_pretrained(args.bert_model, state_dict=model_state_dict, num_labels=num_labels), the issue will be resolved. Please let me know If I can push the changes.
01-22-2019 18:14:52
01-22-2019 18:14:52
What version of `pytorch-pretrained-BERT` are you using? It seems to me that the change you are describing is already implemented. https://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/examples/run_classifier.py#L560<|||||>Okay, it is my bad that I did not have the latest version while debugging the issue. Thanks for pointing it out, though. I will close the issue.
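For reference, a minimal sketch of the fix discussed above (loading a fine-tuned four-class head for evaluation); the checkpoint path is a placeholder:

```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

num_labels = 4  # must match the number of classes used during training (AGN has four)
model_state_dict = torch.load("/tmp/agn_output/pytorch_model.bin", map_location="cpu")

# Passing num_labels at load time sizes the classifier head correctly, so the
# fine-tuned weights can be restored for evaluation.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", state_dict=model_state_dict, num_labels=num_labels)
model.eval()
```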
transformers
215
closed
Loading fine tuned BertForMaskedLM
Hi, I tried to fine-tune BertForMaskedLM and it works, but I'm facing issues when I try to load the fine-tuned model. Here is the code I used to load the model: ``` model_state_dict = torch.load("./finetunedmodel/pytorch_model.bin", map_location='cpu') model_fine = BertForMaskedLM.from_pretrained(pretrained_model_name='bert-base-multilingual-cased', state_dict=model_state_dict, cache_dir='./data') ``` The error I'm facing is: __init__() got an unexpected keyword argument 'state_dict' Has someone already faced this issue? Thanks Edit: I trained my model on GPU and am trying to use it on CPU. When I use it on GPU it works!
01-21-2019 17:13:54
01-21-2019 17:13:54
Maybe update to a recent version of `pytorch-pretrained-bert`?<|||||>I'm already using the latest release. I don't have any issues running it on GPU. The problem appears when using map_location.<|||||>Yes, if the model was trained on GPU, it can't be loaded directly on a CPU-only machine. We can change map_location in modeling.py line 511: https://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/pytorch_pretrained_bert/modeling.py#L511 `state_dict = torch.load(weights_path, map_location="cpu")` Now it will load our fine-tuned model. This is only for inference running on CPU.<|||||>All good! It works, thanks. Maybe adding a device parameter to the from_pretrained function could be useful. Thanks for your help.<|||||>Great, thanks @MuruganR96 <|||||>@tgriseau - I want to fine-tune BERT on MaskedLM using domain-specific text. Could you please provide an example of how you fine-tuned it, or some details about what kind of inputs need to be passed? Will I be using the true sentence as the output for fine-tuning?
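Put together, a minimal sketch of loading a GPU-trained checkpoint for CPU inference, with no change to modeling.py needed when map_location is given at load time; the path comes from the snippet above:

```python
import torch
from pytorch_pretrained_bert import BertForMaskedLM

# map_location expects a lowercase device string such as "cpu" (or a torch.device).
model_state_dict = torch.load("./finetunedmodel/pytorch_model.bin", map_location="cpu")
model_fine = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased",
                                             state_dict=model_state_dict)
model_fine.eval()  # inference on CPU
```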
transformers
214
closed
SQuAD output layer and the computation loss
Hi, I noticed that in the final linear layer of `BertForQuestionAnswering`, the loss is computed based on `start_logits` and `end_logits `. That means the positions of questions are also considered to compute loss. Maybe we should only care about the positions of context? e.g., by setting the question part of `start_logits` and `end_logits ` to `-inf`? https://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/pytorch_pretrained_bert/modeling.py#L1089-L1113
01-21-2019 09:34:44
01-21-2019 09:34:44
Yes you can also do that.
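A minimal sketch of that idea (not the repository's implementation): mask the question tokens before computing the span loss, using the segment ids to tell question from context.

```python
import torch

def mask_question_logits(start_logits, end_logits, token_type_ids):
    # token_type_ids is 0 for [CLS] + question + first [SEP] and 1 for the context
    # in SQuAD-style inputs; for SQuAD 2.0 you would typically keep position 0
    # ([CLS]) unmasked so the null answer can still be predicted.
    question_mask = token_type_ids == 0        # (batch, seq_len)
    very_negative = -10000.0                   # same convention as the attention mask
    start_logits = start_logits.masked_fill(question_mask, very_negative)
    end_logits = end_logits.masked_fill(question_mask, very_negative)
    return start_logits, end_logits
```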
transformers
213
closed
will examples update the parameters of bert model?
In the examples, the code loads a bert-base model and runs some tasks. The paper says that it fixes the parameters of BERT and only updates the parameters of our tasks, but I find that it does not seem to fix the parameters of BERT; it just loads the model and adds some layers to train.
01-21-2019 07:04:08
01-21-2019 07:04:08
Could you please show the part of the paper where you have seen this mentioned? I haven't found it. Are you talking about this paragraph? >In this section we evaluate how well BERT performs in the feature-based approach by generating ELMo-like pre-trained contextual representations on the CoNLL-2003 NER task. To do this, we use the same input representation as in Section 4.3, but use the activations from one or more layers without fine-tuning any parameters of BERT. These contextual embeddings are used as input to a randomly initialized two-layer 768-dimensional BiLSTM before the classification layer.<|||||>Closing this since there is no activity. Feel free to re-open if needed.<|||||>I think it is asking whether we are fine-tuning the whole BERT model or using BERT outputs as a fixed feature for representing the sentences (like ELMo).
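For the feature-based reading of that paragraph, a minimal sketch of freezing the encoder so only the task-specific layers are updated (by default the examples fine-tune all parameters):

```python
from pytorch_pretrained_bert import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Feature-based variant: freeze the BERT encoder and train only the classifier head.
for param in model.bert.parameters():
    param.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]  # only the new layers
```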
transformers
212
closed
Pytorch-Bert: Why this command: pip install pytorch-pretrained-bert doesn't work for me
I try to install pytorch-bert using the command: pip install pytorch-pretrained-bert However, this doesn't work for me, and the feedback is below: Could not find a version that satisfies the requirement pytorch-pretrained-bert (from versions: ) No matching distribution found for pytorch-pretrained-bert I also tried to update my pip, but in vain. So how can I install BERT? (Could the root of the issue be that I use Python 2.7?)
01-20-2019 09:44:52
01-20-2019 09:44:52
Hi, most likely you'll have to switch to python 3.5 or newer! https://pypi.org/project/pytorch-pretrained-bert/ (check for requirements in the page: `Requires: Python >=3.5.0`) <|||||>Indeed<|||||>Need some help. I'm experiencing the same problem with Python 3.7.3 Error code: ERROR: Could not find a version that satisfies the requirement torch>=0.4.1 (from pytorch-pretrained-bert==0.4.0) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch>=0.4.1 (from pytorch-pretrained-bert==0.4.0) <|||||>Try installing pytorch first following the official instruction on the pytorch website: https://pytorch.org/
transformers
211
closed
How to convert a PyTorch checkpoint to a TF checkpoint?
How can I convert a PyTorch checkpoint to a TensorFlow checkpoint?
01-19-2019 16:02:47
01-19-2019 16:02:47
I don't think such a conversion is currently implemented in this repository, but I have my own implementation here (if you're interested in adapting it for your use-case): https://github.com/nikitakit/self-attentive-parser/blob/8238e79e2089300db059eddff78229a09e254f70/export/export_bert.py#L94-L141<|||||>Thanks @nikitakit, do you think your scripts would make sense in the present repo as well or are they tied to your parsing application?<|||||>Closing this for now. Feel free to re-open.<|||||>Would actually be quite nice to have such a conversion script in order to serve pytorch models via [bert-as-service](https://github.com/hanxiao/bert-as-service)<|||||>Any progress on this? As @tholor says, it would be nice for bert-as-service. @nikitakit is it possible to run your script for any fine-tuned pytorch model? If so, any tips/suggestions on how to do that?<|||||>Re-opening the issue. I don't have time to work on that at the moment but I would be happy to welcome a PR if there is interest in this feature.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Any news on this?
transformers
210
closed
error: the following arguments are required: --bert_model, --output_dir
The above error arose when I ran run_squad.py in PyCharm (I just copied it and ran it locally). So can anybody tell me how to pass these two parameters, "--bert_model" and "--output_dir", in the IDE?
01-19-2019 15:32:43
01-19-2019 15:32:43
Check this site: https://github.com/huggingface/pytorch-pretrained-BERT and look for the two parameters "--bert_model" and "--output_dir". You'll find the example below. 👍 --bert_model: we can use BERT models like bert-base-uncased, bert-base-cased, bert-large-uncased, bert-large-cased, etc. For all BERT models refer to: [https://github.com/google-research/bert#pre-trained-models](url) --output_dir: the local output directory, e.g. C:/0.GITHUB_Desktop/pytorch-pretrained-BERT [https://github.com/huggingface/pytorch-pretrained-BERT](url) For example: --bert_model bert-base-uncased --output_dir /tmp/mrpc_output/ ------------------------------------------------------------------------------------------------------------------- export GLUE_DIR=/path/to/glue python run_classifier.py \ --task_name MRPC \ --do_train \ --do_eval \ --do_lower_case \ --data_dir $GLUE_DIR/MRPC/ \ --bert_model bert-base-uncased \ --max_seq_length 128 \ --train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir /tmp/mrpc_output/<|||||>What do you do specifically with the above? I have downloaded the BERT model, placed it in the same directory and am still getting this error.
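If editing the IDE's run configuration is not an option, a hypothetical wrapper script can emulate the command line by filling sys.argv before the script's argparse runs (the script will still require its other arguments, e.g. the SQuAD data files):

```python
import sys
import runpy

# Hypothetical wrapper for launching run_squad.py from inside an IDE; the
# "Parameters" field of a PyCharm Run Configuration achieves the same thing.
sys.argv = ["run_squad.py",
            "--bert_model", "bert-base-uncased",
            "--output_dir", "/tmp/squad_output/"]
runpy.run_path("run_squad.py", run_name="__main__")
```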
transformers
209
closed
Missing softmax in BertForQuestionAnswering after linear layer?
https://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/pytorch_pretrained_bert/modeling.py#L1089-L1113 It seems there should be a softmax after the linear layer, or did I miss something?
01-19-2019 06:55:30
01-19-2019 06:55:30
It depends on what you use as loss, as mentioned in the [documentation](https://pytorch.org/docs/stable/nn.html#crossentropyloss): >This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
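A small self-contained illustration of that point: CrossEntropyLoss consumes the raw logits directly, and a softmax is only needed if probabilities are wanted at prediction time.

```python
import torch
import torch.nn as nn

logits = torch.randn(3, 2)              # raw scores from the QA/classification head
targets = torch.tensor([1, 0, 1])

loss = nn.CrossEntropyLoss()(logits, targets)   # log-softmax is applied internally

probs = torch.softmax(logits, dim=-1)   # only needed to report probabilities
preds = logits.argmax(dim=-1)           # same predictions with or without the softmax
```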
transformers
208
closed
Merge run_squad.py and run_squad2.py
Merge run_squad.py and run_squad2.py into a single file, as in the [official BERT repo](https://github.com/google-research/bert). I did an integration test myself and ran it once. The following are the scores with the base-uncased model:

| Task | Exact | F1 |
| ------------- | ------------- | ------------- |
| SQuAD v1.1 | 80.90823084200568 | 88.03529050425266 |
| SQuAD v2.0 | 72.56801145456078 | 75.65274647953608 |
01-19-2019 02:30:58
01-19-2019 02:30:58
#152 run squad pull request with Squad 2.0 was originally one file but was asked to be separated into two files for commit. #174<|||||>@abeljim Thanks for your reply, but I think they should be merged into a single file for easier maintenance and less repeated code.<|||||>How many epochs were used for SQuAD 2.0 in your test? > @abeljim <|||||>> How many epochs were used for SQuAD 2.0 in your test? > > > @abeljim Just 2<|||||>Ok I didn't notice the original implementation had these scripts merged in a single file. I guess we merge these scripts together here also. Do you want to update your branch to resolve the merge conflicts @Liangtaiwan?<|||||>Ok merged, thanks @Liangtaiwan <|||||>Hi @Liangtaiwan and @abeljim , should models trained on SQuAD 2 use the `version_2_with_negative` flag when evaluating on SQuAD 1.1 dev sets? I'm noticing more than a 10 point difference with and without the flag. Thanks! Results with the flag appear to be closer to the performance on the answerable examples in SQuAD 2, but I wanted to confirm.<|||||>Hi @samsontmr, that's a good question. In my opinion, since there is still a plausible answer to the unanswerable questions in SQuAD 2.0, you should use the ```version_2_with_negative``` flag when training.<|||||>Thanks! I think you mean when testing?<|||||>Hi @samsontmr, sorry for misunderstanding your question. If you trained the model with ```version_2_with_negative``` on SQuAD 2.0, the only difference when you evaluate on SQuAD 1.1 is post-processing. With the ```version_2_with_negative``` flag when testing, the model would output "unanswerable", which is not a valid choice in SQuAD 1.1. This might be the reason why there is a 10 point difference with or without the flag. I don't think you need the ```version_2_with_negative``` flag when testing. Could you paste the result and the version of transformers you used? <|||||>I see! Unfortunately I didn't save the results, but I got an F1 of 70+ with the flag, which is closer to the squad 2 "hasAns" results, and 80+ without the flag.
transformers
207
closed
AttributeError: 'NoneType' object has no attribute 'start_logit'
In the `run_squad2` example notebook, the `write_predictions` method fails because `best_non_null_entry` is `None` ``` Evaluating: 100%|███████████████████████████| 1529/1529 [05:12<00:00, 4.88it/s] 01/18/2019 21:42:28 - INFO - __main__ - Writing predictions to: ./models/squad2/predictions.json 01/18/2019 21:42:28 - INFO - __main__ - Writing nbest to: ./models/squad2/nbest_predictions.json Traceback (most recent call last): File "run_squad2.py", line 1075, in <module> main() File "run_squad2.py", line 1071, in main output_nbest_file, output_null_log_odds_file, args.verbose_logging, True, args.null_score_diff_threshold) File "run_squad2.py", line 612, in write_predictions score_diff = score_null - best_non_null_entry.start_logit - ( AttributeError: 'NoneType' object has no attribute 'start_logit' ```
01-18-2019 20:55:47
01-18-2019 20:55:47
Can you post a self-contained example to reproduce your error ? Which version of python, pytorch and pytorch-pretrained-bert are you using?<|||||>Closing since there is no recent activity. Feel free to re-open if needed.<|||||>I ran into the same issue. Any pointers on how I could triage this further?<|||||>@thomwolf sorry, was busy with something else. Will post a self-sufficient example and possibly a PR with the fix soon.<|||||>============================================================================================================================================================================= Reinstalling: subscription-manager-rhsm x86_64 1.20.10-7.el6 rhel-6-server-rpms 285 k Transaction Summary ============================================================================================================================================================================= Reinstall 1 Package(s) Total size: 285 k Installed size: 379 k Is this ok [y/N]: y Downloading Packages: Running Transaction Test Traceback (most recent call last): File "/usr/bin/yum", line 29, in <module> yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 298, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 227, in main return_code = base.doTransaction() File "/usr/share/yum-cli/cli.py", line 547, in doTransaction testcb = RPMTransaction(self, test=True) File "/usr/lib/python2.6/site-packages/yum/rpmtrans.py", line 198, in __init__ self._setupOutputLogging(base.conf.rpmverbosity) File "/usr/lib/python2.6/site-packages/yum/rpmtrans.py", line 225, in _setupOutputLogging self.base.ts.ts.scriptFd = self._writepipe.fileno() AttributeError: 'NoneType' object has no attribute 'scriptFd' Uploading Enabled Repositories Report Loaded plugins: product-id, rhnplugin, subscription-manager Loaded plugins: product-id, rhnplugin, subscription-manager Loaded plugins: product-id, rhnplugin, subscription-manager Loaded plugins: product-id, rhnplugin, subscription-manager Loaded plugins: product-id, rhnplugin, subscription-manager <|||||>--> Running transaction check ---> Package glibc-headers.x86_64 0:2.12-1.212.el6 will be installed --> Processing Dependency: kernel-headers >= 2.2.1 for package: glibc-headers-2.12-1.212.el6.x86_64 --> Processing Dependency: kernel-headers for package: glibc-headers-2.12-1.212.el6.x86_64 ---> Package irqbalance.x86_64 2:1.0.7-9.el6 will be an update --> Processing Dependency: kernel >= 2.6.32-358.2.1 for package: 2:irqbalance-1.0.7-9.el6.x86_64 --> Finished Dependency Resolution Error: Package: 2:irqbalance-1.0.7-9.el6.x86_64 (rhel-6-server-rpms) Requires: kernel >= 2.6.32-358.2.1 Installed: kernel-2.6.32-131.0.15.el6.x86_64 (@anaconda-RedHatEnterpriseLinux-201105101844.x86_64/6.1) kernel = 2.6.32-131.0.15.el6 kernel = 2.6.32-131.0.15.el6 Error: Package: glibc-headers-2.12-1.212.el6.x86_64 (rhel-6-server-rpms) Requires: kernel-headers >= 2.2.1 Error: Package: glibc-headers-2.12-1.212.el6.x86_64 (rhel-6-server-rpms) Requires: kernel-headers You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Uploading Enabled Repositories Report
transformers
206
closed
Classifier example not training on CoLa data
Hi, I obtained strange classification eval results (always predicting the same label) when trying out the `run_classifier.py` after cloning the repo (no modif) so to dig a bit more I rebalanced the CoLA dataset (train.tsv and dev.tsv) to have a better understanding of what is happening. When running the classifier example the network still doesn't learn and keeps predicting the same label. Any idea why? Was thinking the issue might be in saving / loading the state_dict after training so I bypassed it by using directly the model trained in train loop but to no avail, the results are the same. Any pointers? Am I missing something big here? The rebalanced & randomized CoLA dataset (simply sampled down the majority class to minority one). https://drive.google.com/file/d/1_QjnknEusQZgbhTqJcFhBLDrRZXQQRZ6/view?usp=sharing My training command: `run_classifier.py --task_name CoLA --do_train --do_eval --data_dir ../data/CoLA-BALANCED/ --bert_model bert-base-multilingual-cased --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir output/cola` The output digits are always in favor of one class, example: ``` eval batch#0 print(digits) [[ 0.05559488 0.00027706] [ 0.05565088 0.00031819] [ 0.05568472 0.00032058] [ 0.05567677 0.00027802] [ 0.05567478 0.00028492] [ 0.05566313 0.00030545] [ 0.05558664 0.00028202] [ 0.05569095 0.0002955 ]] eval batch#1 print(digits) [[ 0.05566129 0.00032648] [ 0.05567207 0.00029943] [ 0.05569698 0.00030764] [ 0.05563145 0.00030007] [ 0.05566984 0.00032966] [ 0.05565657 0.00032679] [ 0.05569271 0.00030621] [ 0.05561762 0.00030394]] ``` Some training examples: ``` 01/18/2019 16:28:12 - INFO - __main__ - *** Example *** 01/18/2019 16:28:12 - INFO - __main__ - guid: train-0 01/18/2019 16:28:12 - INFO - __main__ - tokens: [CLS] Ent ##hus ##ias ##tic golf ##ers with large hand ##icap ##s can be good company . [SEP] 01/18/2019 16:28:12 - INFO - __main__ - input_ids: 101 63412 15471 15465 13275 32288 10901 10169 12077 15230 73130 10107 10944 10347 15198 12100 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:28:12 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:28:12 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:28:12 - INFO - __main__ - label: 1 (id = 1) 01/18/2019 16:28:12 - INFO - __main__ - *** Example *** 01/18/2019 16:28:12 - INFO - __main__ - guid: train-1 01/18/2019 16:28:12 - INFO - __main__ - tokens: [CLS] The horse jump ##ed over the fe ##nce . 
[SEP] 01/18/2019 16:28:12 - INFO - __main__ - input_ids: 101 10117 30491 54941 10336 10491 10105 34778 12150 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:28:12 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:28:12 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:28:12 - INFO - __main__ - label: 1 (id = 1) 01/18/2019 16:28:12 - INFO - __main__ - *** Example *** 01/18/2019 16:28:12 - INFO - __main__ - guid: train-2 01/18/2019 16:28:12 - INFO - __main__ - tokens: [CLS] Brown equipped Jones a camera . [SEP] 01/18/2019 16:28:12 - INFO - __main__ - input_ids: 101 12623 41880 12298 169 26665 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:28:12 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:28:12 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:28:12 - INFO - __main__ - label: 0 (id = 0) 01/18/2019 16:28:12 - INFO - __main__ - *** Example *** 01/18/2019 16:28:12 - INFO - __main__ - guid: train-3 01/18/2019 16:28:12 - INFO - __main__ - tokens: [CLS] I destroyed there . 
[SEP] 01/18/2019 16:28:12 - INFO - __main__ - input_ids: 101 146 24089 11155 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:28:12 - INFO - __main__ - input_mask: 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:28:12 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:28:12 - INFO - __main__ - label: 0 (id = 0) ``` Some eval examples: ``` 01/18/2019 16:32:04 - INFO - __main__ - *** Example *** 01/18/2019 16:32:04 - INFO - __main__ - guid: dev-0 01/18/2019 16:32:04 - INFO - __main__ - tokens: [CLS] Dana walk ##ed and Leslie ran . [SEP] 01/18/2019 16:32:04 - INFO - __main__ - input_ids: 101 27149 33734 10336 10111 25944 17044 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:32:04 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:32:04 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:32:04 - INFO - __main__ - label: 1 (id = 1) 01/18/2019 16:32:04 - INFO - __main__ - *** Example *** 01/18/2019 16:32:04 - INFO - __main__ - guid: dev-1 01/18/2019 16:32:04 - INFO - __main__ - tokens: [CLS] The younger woman might have been tall and , and the older one def ##inite ##ly was , bl ##ond . 
[SEP] 01/18/2019 16:32:04 - INFO - __main__ - input_ids: 101 10117 27461 18299 20970 10529 10590 36243 10111 117 10111 10105 18757 10464 100745 100240 10454 10134 117 21484 26029 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:32:04 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:32:04 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:32:04 - INFO - __main__ - label: 0 (id = 0) 01/18/2019 16:32:04 - INFO - __main__ - *** Example *** 01/18/2019 16:32:04 - INFO - __main__ - guid: dev-2 01/18/2019 16:32:04 - INFO - __main__ - tokens: [CLS] What the water did to the bot ##tle was fill it . [SEP] 01/18/2019 16:32:04 - INFO - __main__ - input_ids: 101 12489 10105 12286 12172 10114 10105 41960 16406 10134 20241 10271 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:32:04 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:32:04 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/18/2019 16:32:04 - INFO - __main__ - label: 0 (id = 0) ```
01-18-2019 15:36:20
01-18-2019 15:36:20
Try this for CoLA: `--bert_model bert-base-uncased --do_lower_case`. You may also need to increase `num_train_epochs` or `learning_rate` a little. <|||||>Thanks for your suggestions, but when following them nothing changes, it always predict one class regardless. Tried: - 3 and 10 epochs - different learning rates: 5e-3, 5e-4, 5e-5 - all done with --bert_model bert-base-uncased --do_lower_case (but at the same time being able to use the multi language + Cased input would be an important plus) One of the new command tested: `run_classifier.py --task_name CoLA --do_train --do_eval --data_dir ../data/CoLA-BALANCED/ --bert_model bert-base-uncased --do_lower_case --max_seq_length 128 --train_batch_size 32 --learning_rate 5e-4 --num_train_epochs 10.0 --output_dir output/cola/` The console output: https://gist.github.com/ironflood/150618e6f9cb56572729bf282c9cd2aa The rebalanced train & test dataset didn't change from first message and I checked its validity.<|||||>I think 5e-3 and 5e-4 are too high; they are even higher than pre-training. I'm getting eval_accuracy = 0.76 (i.e. not one class) with `--learning_rate 5e-5 --num_train_epochs 3.0` (other arguments and dataset are same as yours), so I have no idea why you didn't. <|||||>Thanks for testing it out. I tried again 5e-5 and I'm getting close to your result as well. As I tried earlier this LR could it be that sometimes it doesn't converge properly because of random seed? Another question: testing out 5e-5 and 5e-6 on the cased multi language model I'm getting approx 0.68 acc instead of 0.78 with lowercased standard model on 3 and 10 epoch training, is it only because of the case adding more difficulty?<|||||>No, random seed is always same. See [this line](https://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/examples/run_classifier.py#L421) in the code. For multilingual models, you should refer [multilingual.md](https://github.com/google-research/bert/blob/master/multilingual.md#results).<|||||>Closing since there is no recent activity. Feel free to re-open if needed.
transformers
205
closed
What is the meaning of Attention Mask
Hi, I noticed that there is something called `Attention Mask` in the model. In the annotation of class `BertForQuestionAnswering`, ```python `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1]. It's a mask to be used if the input sequence length is smaller than the max input sequence length in the current batch. It's the mask that we typically use for attention when a batch has varying length sentences. ``` And its usage is in class `BertSelfAttention`, function `forward`, ```python # Apply the attention mask is (precomputed for all layers in BertModel forward() function) attention_scores = attention_scores + attention_mask ``` It seems the attention_mask is used to add 1 to the scores for positions that are taken up by real tokens, and add 0 to the positions outside the current sequence. Then, why not set the scores to `-inf` where the positions are outside the current sequence and pass the scores to a softmax layer? Those scores would become 0, as we want.
01-18-2019 14:04:11
01-18-2019 14:04:11
Yes, this conversion is done inside the model, see this line: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L626 (we don't use infinity but a large value that works also when the model is used in half precision mode)<|||||>Thanks for your answer. Well, I still have a little problem understanding what you mean in the last sentence: > (we don't use infinity but a large value that works also when the model is used in half precision mode) I made a simple experiment: ```python >>> import torch >>> import numpy as np >>> inf = np.array(-np.inf) >>> inf array(-inf) >>> inf = torch.from_numpy(inf) >>> inf tensor(-inf, dtype=torch.float64) >>> inf.half() tensor(-inf, dtype=torch.float16) ``` It seems `-inf` works well. In another experiment, I tried the following: ```python >>> scores = torch.FloatTensor([1, 2, 3, 4, 4]) >>> mask = torch.ByteTensor([1, 1, 1, 0, 0]) >>> scores.masked_fill_(mask == 0, -np.inf) tensor([1., 2., 3., -inf, -inf]) >>> scores.half() tensor([1., 2., 3., -inf, -inf], dtype=torch.float16) ``` It seems that both experiments work well, so I don't see what the problem is with using `-inf` in half-precision mode. Thank you<|||||>The attention mask is -10000.0 for positions that we do not want to attend to. It is set to -10000.0 in [get_extended_attention_mask](https://github.com/huggingface/transformers/blob/e95d433d77727a9babadf008dd621a2326d37303/src/transformers/modeling_utils.py#L700): # Since attention_mask is 1.0 for positions we want to attend and 0.0 for # masked positions, this operation will create a tensor which is 0.0 for # positions we want to attend and -10000.0 for masked positions. # Since we are adding it to the raw scores before the softmax, this is # effectively the same as removing these entirely.
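For completeness, a small sketch of what the referenced line does with the 2D mask before it is added to the raw attention scores (the numbers are chosen arbitrarily):

```python
import torch

attention_mask = torch.tensor([[1, 1, 1, 1, 0, 0]])      # (batch, seq_len); 0 = padding
extended = attention_mask[:, None, None, :].float()      # (batch, 1, 1, seq_len)
extended = (1.0 - extended) * -10000.0                   # 0.0 for real tokens, -10000.0 for padding

# Adding -10000.0 before the softmax drives the padded positions' attention
# weights to ~0, acting as a half-precision-friendly stand-in for -inf.
scores = torch.randn(1, 12, 6, 6) + extended             # (batch, heads, seq_len, seq_len)
weights = torch.softmax(scores, dim=-1)
```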
transformers
204
closed
Two-to-three masked word prediction in the same sentence is very complex
Predicting two to three masked words in the same sentence is also very complex. How can I get good accuracy? If I pre-train a BERT model on my own dataset with **masked_lm_prob=0.25** (https://github.com/google-research/bert#pre-training-with-bert), what will happen? Thanks.
01-18-2019 05:52:40
01-18-2019 05:52:40
Hi @MuruganR96, from my experiments, predicting two to three masked words at once doesn't seem to be possible with BERT.<|||||>thanks @thomwolf sir
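For anyone trying it anyway, a minimal sketch of predicting several [MASK] tokens with BertForMaskedLM; note that each position is filled independently (there is no joint decoding), which is part of why multi-token prediction is hard:

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

tokens = ["[CLS]", "the", "[MASK]", "sat", "on", "the", "[MASK]", "[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    predictions = model(input_ids)           # (1, seq_len, vocab_size)

for pos in (2, 6):                           # the two masked positions
    predicted_id = predictions[0, pos].argmax().item()
    print(pos, tokenizer.convert_ids_to_tokens([predicted_id])[0])
```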
transformers
203
closed
Add some new layers from BertModel and then 'grad' error occurs
I want to do fine-tuning by adding a TextCNN on top of BertModel. I wrote a new class and added two convolutional layers (like a TextCNN), basically on the embedding layer. Then an error occurs: "grad can be implicitly created only for scalar outputs". I searched the Internet and can't find a good solution to that; I hope someone can solve it.
01-18-2019 02:19:58
01-18-2019 02:19:58
If you can share a (minimal) example reproducing the error, I can have a look.<|||||>I'm closing this. Feel free to re-open and share more information if you still have some issues.
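Independently of BERT, that error message usually means .backward() was called on a non-scalar tensor; a minimal sketch of the cause and the usual fix, with made-up shapes:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 2, requires_grad=True)
targets = torch.randint(0, 2, (8,))

# reduction="none" (or gathering per-replica losses from DataParallel) yields a
# per-example loss vector, and vector.backward() raises
# "grad can be implicitly created only for scalar outputs".
loss = F.cross_entropy(logits, targets, reduction="none")   # shape (8,)

loss.mean().backward()   # reduce to a scalar before calling backward()
```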
transformers
202
closed
training new BERT seems not working
I tried to train a BERT model from scratch with "run_lm_finetuning.py" on toy training data (samples/sample.txt) by changing the following: `#model = BertForPreTraining.from_pretrained(args.bert_model)` `bert_config = BertConfig.from_json_file('bert_config.json')` `model = BertForPreTraining(bert_config) ` where the json file comes from [BERT-Base, Multilingual Cased](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) To check the correctness of training, I printed the scores of the sequential relationship (for the next sentence prediction task) in "pytorch_pretrained_bert/modeling.py": `prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)` `print(seq_relationship_score)` And the result was (just picking an example from a single batch): Tensor([[-0.1078, -0.2696], [-0.1425, -0.3207], [-0.0179, -0.2271], [-0.0260, -0.2963], [-0.1410, -0.2506], [-0.0566, -0.3013], [-0.0874, -0.3330], [-0.1568, -0.2580], [-0.0144, -0.3072], [-0.1527, -0.3178], [-0.1288, -0.2998], [-0.0439, -0.3267], [-0.0641, -0.2566], [-0.1496, -0.3696], [ 0.0286, -0.2495], [-0.0922, -0.3002]], device='cuda:0', grad_fn=AddmmBackward) Notice that since the scores in the first column were higher than in the second column, the model predicted the same label ("not next sentence" vs. "next sentence") for every example in the batch. And this result was universal for all batches. I feel this shouldn't be the case.
01-18-2019 00:30:24
01-18-2019 00:30:24
Hi @UCJerryDong, Training BERT from scratch takes a (very) long time (see the paper for TPU training, an estimation is training time using GPUs is about a week using 64 GPUs), this script is more for fine-tuning (using the pre-training objective) than to train from scratch. Did you monitor the losses during training and wait for convergence?<|||||>Hi, I am trying to do something similar:) My guess is that `sample.txt` is too small. @thomwolf Just to confirm, the above code should produce a new BERT model from scratch that's based on the existing vocab file right? Thanks!<|||||>It seems to be problematic to generate new samples every epoch, at least for such a small corpus. The model convergenced for me with `--num_train_epochs 50.0`, if I reuse the same `train_dataset` by adding `train_dataset = [train_dataset[i] for i in range(len(train_dataset))]` in the code.<|||||>Hi @thomwolf, I trained the model for an hour but the loss is always around 0.6-0.8 and never converges. I know it's computationally expensive to train the BERT; that's why I choose the very small dataset (sample.txt, which only has 36 lines). The main issue is that I have tried the same dataset with the [original tensorflow version BERT](https://github.com/google-research/bert.git) and it converges within 5 minutes: > next_sentence_accuracy = 1.0 next_sentence_loss = 0.00012585879 That's why I'm wondering if something is wrong with the model. I have also checked the output of each forward step, and found out that the encoder_layers have similar row values, i.e. rows in the matrix "encoder_layers" are similar to each other. ` encoded_layers = self.encoder(embedding_output, extended_attention_mask, output_all_encoded_layers=output_all_encoded_layers)`<|||||>Ok, that's strange indeed. Can you share your code? I can have a look. I haven't tried the pre-training script myself yet.<|||||>Thanks for helping! I have created a [github repo](https://github.com/UCJerryDong/pytorch_bert.git) with my modified code. Also, I have tried what @nhatchan suggests (thanks!) and it does work. But I feel that shouldn't be the correct way for final solution as it stores every data on memory and it will require too much if training with real dataset.<|||||>Thank, I'll have a look. Can you also show me what you did with the Tensorflow model so I can compare the behaviors in the two cases?<|||||>I just follow the instructions under section [Pre-training with BERT](https://github.com/google-research/bert)<|||||>> But I feel that shouldn't be the correct way for final solution as it stores every data on memory and it will require too much if training with real dataset. @UCJerryDong Yes, I just showed one of the differences from Tensorflow version, and that's why I didn't send a PR addressing this. I'm even not sure whether this affects the model performance when you train with real dataset or not. Incidentally, I'm also trying to do something similar, with real data, but still losses seems higher than that of Tensorflow version. I suspect some of minor differences (like this, issues 195 and 38), but not yet figured it out. 
<|||||>Hi guys, > see the paper for TPU training, an estimation is training time using GPUs is about a week using 64 GPUs Btw, there is an article on this topic http://timdettmers.com/2018/10/17/tpus-vs-gpus-for-transformers-bert/ I was wondering, maybe someone tried tweaking some parameters in the transformer, so that it could converge much faster (ofc, maybe at the expense of accuracy), i.e.: - Initializing the embedding layer with FastText / your embeddings of choice - in our tests it boosted accuracy and convergence with more plain models; - Using a more standard 200 or 300 dimension embedding instead of 768 (also tweaking the hidden size accordingly); Personally for me the allure of the transformer is not really about the state-of-the-art accuracy, but about having the same architecture applicable to any sort of NLP task (i.e. QA tasks or SQuAD-like objectives may require custom engineering or some non-transferrable models). <|||||>Hi, I have a question: which line of code leaves the pretrained model frozen (for fine-tuning) rather than trainable? <|||||>Hi @snakers4 and @BITLsy, please open new issues for your problems and discussion.<|||||>Hi @thomwolf Do you have any update on this? Is the issue resolved?<|||||>Hi @ntomita yes, this is just a differing behavior between the TensorFlow and PyTorch training code. - the original TensorFlow code does `static masking` in which the masking of the training dataset is computed once and for all, so you can quickly overfit on a small training set with a few epochs - in our code we use `dynamic masking` where the masking is generated on the fly, so overfitting a single batch takes more epochs. The recent RoBERTa paper (http://arxiv.org/abs/1907.11692) compares the two approaches (see section 4.1) and concludes that `dynamic masking` is comparable or slightly better than `static masking` (as expected I would say).<|||||>Hi @thomwolf that's awesome! I was working on pretraining a modified BERT model using this library with our own data for quite a while, struggled with convergence, and wondered if I should try other libraries like the original tf implementation or fairseq, as other people reported slower convergence with this library. I use dynamic masking so what you're saying is reasonable. I also saw recently that the MS Azure group has successfully pretrained their models, which are implemented with this library. Since you keep telling people that this library is not meant for pretraining, I thought there were some critical bugs in the models or optimization processes. I needed some confidence to keep working with this library, so thanks for your follow-up!<|||||>No "critical bugs" indeed lol :-) You can use this library as the basis for training from scratch (like Microsoft and NVIDIA did). We just don't provide training scripts (at the current stage, maybe we'll add some later but I would like to keep them simple if we do).<|||||>BERT is way too sensitive to the learning rate and to the data as well. Somehow it takes things back to 20 years ago, when deep learning was still an unusable approach. It's not the fault of the library writers. The model itself has that problem.
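To make the static vs. dynamic distinction concrete, a rough sketch (not the repository's code) of re-sampling the masked positions on the fly, which is what dynamic masking means here:

```python
import random

def dynamic_mask(token_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    # Called every time an example is drawn, so the masked positions differ between
    # epochs (static masking would compute this once when building the dataset).
    labels = [-1] * len(token_ids)          # -1 = position ignored by the masked-LM loss
    masked = list(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:
            labels[i] = tok
            r = random.random()
            if r < 0.8:
                masked[i] = mask_token_id                  # 80%: [MASK]
            elif r < 0.9:
                masked[i] = random.randrange(vocab_size)   # 10%: random token
            # remaining 10%: keep the original token
    return masked, labels
```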
transformers
201
closed
run_squad2 Don't save model if do not train
There is a bug in example/run_squad2.py. If the model does not train, the randomly initialized values will overwrite the pretrained model.
01-17-2019 16:43:59
01-17-2019 16:43:59
Thanks!
transformers
200
closed
Adding Transformer-XL pre-trained model
Add Transformer-XL (https://github.com/kimiyoung/transformer-xl) with the pre-trained WT103 model (maybe also the 1B-Word model). The original Google/CMU PyTorch version (https://github.com/kimiyoung/transformer-xl/tree/master/pytorch) has been slightly modified to better match the TF version which has the SOTA results. I've mostly untied the relative positioning/word biases and changed the initialization of memory states (TODO: PR these modifications back up to the original repo). The Google/CMU model can be converted with: ```bash pytorch_pretrained_bert convert_transfo_xl_checkpoint [PATH_TO_TRANSFO_XL_FOLDER]/model.ckpt-0 [PATH_TO_SAVE_PT_DUMP] ``` And the corpus and vocabulary with: ```bash pytorch_pretrained_bert convert_transfo_xl_checkpoint [PATH_TO_TRANSFO_XL_DATA_FOLDER]/cache.pkl [PATH_TO_SAVE_DATA_AND_VOCABULARY] ``` The evaluation can be run with ```bash cd ./examples python eval_transfo_xl.py --cuda --model_name [PATH_TO_PT_DUMP] --work_dir [PATH_TO_SAVE_LOG] ``` Currently I have slightly higher values for the perplexity with the PyTorch model (using the TF pre-trained weights) `20.4` versus `18.3` for the TF version, might try to investigate this a little bit further.
01-17-2019 08:38:31
01-17-2019 08:38:31
Awesome work! I'm not sure I understand how this model integrates with BERT. Did Google release weights for the MLM + next sentence prediction task using the Transformer XL? And if they did not, how well do the classical LM weights perform for fine-tuning tasks?<|||||>Oh, it's not integrated with BERT, that's just another model. I'm adding it here to give it the same easy-to-use interface (with pretrained/cached model and tokenizer) as I do for the OpenAI GPT in the other PR (#183). It's a language model like OpenAI GPT (trained with just a classical LM loss). Check the Transformer-XL repo/paper for more details!<|||||>Ok, managed to find the bug in this conversion. The custom-made AdaptiveSoftmax used in Transformer-XL indexes the next-cluster probability tokens in reverse order (see the indexing by `-i` on [line 141 of the original repo](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/utils/proj_adaptive_softmax.py#L141)), so we got worse performance on less frequent words. Tough to find! Fixing this conversion, we got a test set perplexity of `18.213` on Wiki-103, to be compared to a reported result of `18.3` with the TensorFlow model.<|||||>#254 is also the main PR for the inclusion of Transformer-XL. Closing this PR.<|||||>Hi, I am assuming the pre-trained model available with huggingface is the `large` variant and not `base`?
transformers
199
closed
(very) minor update to README
01-16-2019 20:05:53
01-16-2019 20:05:53
transformers
198
closed
HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /models.huggingface.co/bert/bert-base-uncased.tar.gz (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x000002456AF21710>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',))
I have been trying to executing this code : import torch from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM # Load pre-trained model tokenizer (vocabulary) tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') # Tokenized input text = "Who was Jim Henson ? Jim Henson was a puppeteer" tokenized_text = tokenizer.tokenize(text) # Mask a token that we will try to predict back with `BertForMaskedLM` masked_index = 6 tokenized_text[masked_index] = '[MASK]' assert tokenized_text == ['who', 'was', 'jim', 'henson', '?', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer'] # Convert token to vocabulary indices indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) # Define sentence A and B indices associated to 1st and 2nd sentences (see paper) segments_ids = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1] # Convert inputs to PyTorch tensors tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) # Load pre-trained model (weights) model = BertModel.from_pretrained('bert-base-uncased') model.eval() This is the error that I am getting continuously : HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /models.huggingface.co/bert/bert-base-uncased.tar.gz (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x000002456AF21710>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',)) Can you help me with this please
01-16-2019 06:55:31
01-16-2019 06:55:31
Hi, you need a (stable) internet connection to download the weights. This operation is only done once as the weights are then cached on your drive.<|||||>Thank you so much!<|||||>@laibamehnaz have you solved the problem? I have a similar problem.<|||||>Hi, I am facing a similar issue, can anyone help with this? <|||||>Solution (method): import os os.environ['NO_PROXY'] = 'huggingface.co' # bypass the proxy or import os import requests os.environ['NO_PROXY'] = 'huggingface.co' # bypass the proxy or import os os.environ['NO_PROXY'] = 'XXXXX.com' # bypass the proxy; any URL works here <|||||> Thank you very much! > Solution (method): > > import os os.environ['NO_PROXY'] = 'huggingface.co' # bypass the proxy > > or > > import os import requests os.environ['NO_PROXY'] = 'huggingface.co' # bypass the proxy > > or > > import os os.environ['NO_PROXY'] = 'XXXXX.com' # bypass the proxy; any URL works here
transformers
197
closed
Seems to hit a GPU memory leak problem
I wrap the ``BertModel'' as a persistent object and init it once, then iteratively use it as the feature extractor to generate the feature of data batch, while it seems I met the GPU memory leak problem. After starting the program, the GPU memory usage keeps increasing until 'out-of-memory'. Some key codes are as following! Every 'self.bert_model.get_bert_feature()' executed, the GPU memory increased. I did simple debugging, and maybe the problem caused by the 'class BertEmbeddings.forward()'. My pytorch version is 0.4.0, py3. Waiting for your reply, thanks very much! ```python class BertModel(PreTrainedBertModel): def __init__(self, config): super(BertModel, self).__init__(config) self.embeddings = BertEmbeddings(config) self.encoder = BertEncoder(config) self.pooler = BertPooler(config) self.apply(self.init_bert_weights) def forward(self, input_ids, token_type_ids=None, attention_mask=None, output_all_encoded_layers=False): #logger.info('bert forward') if attention_mask is None: attention_mask = torch.ones_like(input_ids) if token_type_ids is None: token_type_ids = torch.zeros_like(input_ids) # We create a 3D attention mask from a 2D tensor mask. # Sizes are [batch_size, 1, 1, to_seq_length] # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] # this attention mask is more simple than the triangular masking of causal attention # used in OpenAI GPT, we just need to prepare the broadcast dimension here. extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2) # Since attention_mask is 1.0 for positions we want to attend and 0.0 for # masked positions, this operation will create a tensor which is 0.0 for # positions we want to attend and -10000.0 for masked positions. # Since we are adding it to the raw scores before the softmax, this is # effectively the same as removing these entirely. 
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 embedding_output = self.embeddings(input_ids, token_type_ids) encoded_layers = self.encoder(embedding_output, extended_attention_mask, output_all_encoded_layers=output_all_encoded_layers) return encoded_layers class Bert_Instance(object): def __init__(self, vocab_file, bert_model_path, device): #tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case) self.tokenizer = BertTokenizer(vocab_file) self.model = BertModel.from_pretrained(bert_model_path) self.device = device print ('bert_device=', self.device) self.model.to(self.device) self.model.eval() for para in self.model.parameters(): para.requires_grad = False def get_feature(self, text_list, max_seq_length=50, layer=-1): ''' Args: text_list is a list to store the sentences, length is the sentence_number Return: (batch_size, seq_len+2, hidden_size) ''' # a list, each dict element key is (ex_index, tokens, input_ids, input_mask, input_type_ids) all_features = convert_examples_to_features(examples=text_list, max_seq_length=max_seq_length, tokenizer=self.tokenizer) all_input_ids = torch.tensor([f['input_ids'] for f in all_features]).type(torch.cuda.LongTensor).to(self.device) all_input_mask = torch.tensor([f['input_mask'] for f in all_features]).type(torch.cuda.LongTensor).to(self.device) all_encoder_layers = self.model(all_input_ids, token_type_ids=None, attention_mask=all_input_mask) return all_encoder_layers, all_input_mask class Bert_Model(object): def __init__(self, device): self.bert_model = Bert_Instance(BERT_VOCAB, BERT_MODEL, device) self.device = device self.zp_pre_cache = {} self.zp_post_cache = {} self.candi_np = {} self.cache = {'zp_pre': self.zp_pre_cache, 'zp_post': self.zp_post_cache, 'candi_np': self.candi_np} def get_bert_feature(self, text_list, cache_name, batch_id, max_seq_length=30, layer=-1): if batch_id in self.cache[cache_name].keys(): #res = torch.tensor(self.cache[cache_name][batch_id]).type(torch.cuda.FloatTensor).to(self.device) res = self.cache[cache_name][batch_id] return res else: res = self.bert_model.get_feature(text_list, max_seq_length, layer) self.cache[cache_name][batch_id] = res return res class Experiment(object): def __init__(self): # load training data with open(DIR+"data/train_data", "rb") as fin1, \ open(DIR+"data/emb","rb") as fin2: self.train_generator = cPickle.load(fin1) self.embedding_matrix, _ , _ = cPickle.load(fin2, encoding='iso-8859-1') # load test data self.test_generator = DataGenerator("test", 256) self.dev_data = self.train_generator.generate_dev_data() self.test_data = self.test_generator.generate_data() # declare model architecture self.model = Network(nnargs["embedding_size"], nnargs["embedding_dimension"], self.embedding_matrix, nnargs["hidden_dimension"], 2).to(NET_DEVICE) self.bert_model = Bert_Model(BERT_DEVICE) this_lr = 0.003 self.optimizer = optim.Adagrad(self.model.parameters(), lr = this_lr) self.best = {"sum":0.0, "test_f":0.0, "best_test_f":0.0} self.dropout = nnargs["dropout"] def forward_step(self, data, mode, dropout=0.0): zp_relative_index, zp_pre, zp_pre_mask, zp_post, zp_post_mask, candi_np, candi_np_mask, feature, zp_pre_words, zp_post_words, candi_np_words, batch_id = data2tensor(data) batch_id = mode + '_' + str(batch_id) zp_pre_bert, _ = self.bert_model.get_bert_feature(zp_pre_words, 'zp_pre', batch_id) zp_post_bert, _ = 
self.bert_model.get_bert_feature(zp_post_words, 'zp_post', batch_id) candi_np_bert, _ = self.bert_model.get_bert_feature(candi_np_words, 'candi_np', batch_id) ..... ```
01-16-2019 03:05:33
01-16-2019 03:05:33
Maybe use the `torch.no_grad()` context-manager which is the recommended way to perform inference with PyTorch now? See https://pytorch.org/docs/stable/autograd.html#torch.autograd.no_grad<|||||>Closing this. Feel free to re-open if the issue is still there.<|||||>Hey there, I also have some memory leak problem when using the BertModel to produce embeddings to be used as features later on. I basically use the implementation as in the [usage example](https://huggingface.co/transformers/quickstart.html#quick-tour-usage). ```python self.tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased') self.model = trafo.BertModel.from_pretrained('bert-base-multilingual-cased') self.model.eval() ... def encode_text(self, text: str) -> np.ndarray: to_tokenize = f"[CLS] {text} [SEP]" tokenized_text = self.tokenizer.tokenize(to_tokenize) tokenized_text = tokenized_text[0:500] # Convert token to vocabulary indices indexed_tokens = self.tokenizer.convert_tokens_to_ids(tokenized_text) with torch.no_grad(): tokens_tensor = torch.tensor([indexed_tokens]).data outputs = self.model(tokens_tensor) return outputs ``` I realized that if I comment out the line `outputs = self.model(tokens_tensor)` and just return some random numpy array as output, I have not increasing memory problem. So it seems to be calling the model with the tensor that increases the memory. Further, if I use the 'bert-base-uncased' model, the memory stays the same as well. It only happens with the multi models. I used this method in a flask server application and made REST requests to it. <|||||>It's useful your assertion that it occurs _only_ when using _multi-lingual_ BERT model. Can you try to use `bert-base-multilingual-uncased` in order to do a comparison between these two? Perhaps there is a _performance bug_ in the multi-lingual setting. > Hey there, I also have some memory leak problem when using the BertModel to produce embeddings to be used as features later on. > I basically use the implementation as in the [usage example](https://huggingface.co/transformers/quickstart.html#quick-tour-usage). > > ```python > self.tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased') > self.model = trafo.BertModel.from_pretrained('bert-base-multilingual-cased') > self.model.eval() > > ... > > def encode_text(self, text: str) -> np.ndarray: > to_tokenize = f"[CLS] {text} [SEP]" > tokenized_text = self.tokenizer.tokenize(to_tokenize) > tokenized_text = tokenized_text[0:500] > # Convert token to vocabulary indices > indexed_tokens = self.tokenizer.convert_tokens_to_ids(tokenized_text) > with torch.no_grad(): > tokens_tensor = torch.tensor([indexed_tokens]).data > outputs = self.model(tokens_tensor) > return outputs > ``` > > I realized that if I comment out the line `outputs = self.model(tokens_tensor)` and just return some random numpy array as output, I have not increasing memory problem. So it seems to be calling the model with the tensor that increases the memory. > Further, if I use the 'bert-base-uncased' model, the memory stays the same as well. It only happens with the multi models. > > I used this method in a flask server application and made REST requests to it.<|||||>So I tried it with `bert-base-multilingual-uncased` as well and it is the same behavior. I do not understand, why memory constantly grows on inference. To my understanding, I only push data through the network and then use the result layer's output. 
Before using the transformers, I had been using custom word embeddings trained in own keras models and I did not have this behavior. What am I missing here?<|||||>I've just seen that you're using **PyTorch 0.4.0**! What an oldest version you're using :D can you try to install the latest version of **PyTorch (1.3.1)** through `pip install --upgrade torch` and give us feedback? And please, if you can, update also the version of Transformers to the last (2.2.2) through `pip install --upgrade transformers`. > So I tried it with `bert-base-multilingual-uncased` as well and it is the same behavior. > I do not understand, why memory constantly grows on inference. To my understanding, I only push data through the network and then use the result layer's output. Before using the transformers, I had been using custom word embeddings trained in own keras models and I did not have this behavior. What am I missing here?<|||||>Hey there, I'm using the newest pytorch and transformers. You are probably mistaking this because of the first comment of this thread (by zhangjcqq) but that was not mine. I just hijacked this thread because it seemed to be the same problem I now have and there was no solution here. <|||||>> Hey there, I'm using the newest pytorch and transformers. You are probably mistaking this because of the first comment of this thread (by zhangjcqq) but that was not mine. I just hijacked this thread because it seemed to be the same problem I now have and there was no solution here. So you have tried out to upgrade PyTorch to 1.3.1 as suggested in my last comment, but there is the same error? If no, specify your environment and a piece of code in order to reproduce the bug.<|||||>I have the newest version of pytorch and transformers, yes. I have been monitoring the memory usage over 24h when I made ~ 300.000 requests. It seems that the memory increases constantly for quite some time but also seems to stabilize at a certain maximum. So the application started using ~2.5GB RAM and now stays at ~4.3GB. Maybe it has something to do with varying lengths of the texts I process? So that the longest texts are processed at a later point in time which then require the most RAM. Then, any subsequent text cannot need more so it stabilizes. Though this is just a thought. Thanks already for your help, I'm off to Christmas vacations for now and will have a look at the issue in January again. I'll see if memory usage increases by then. <|||||>> flask I miss in the same problems but without flask, it works<|||||>> I have the newest version of pytorch and transformers, yes. > > I have been monitoring the memory usage over 24h when I made ~ 300.000 requests. It seems that the memory increases constantly for quite some time but also seems to stabilize at a certain maximum. So the application started using ~2.5GB RAM and now stays at ~4.3GB. > > Maybe it has something to do with varying lengths of the texts I process? So that the longest texts are processed at a later point in time which then require the most RAM. Then, any subsequent text cannot need more so it stabilizes. Though this is just a thought. > > Thanks already for your help, I'm off to Christmas vacations for now and will have a look at the issue in January again. I'll see if memory usage increases by then. I have similar problems too. The memory usage gradually grows from 1xxxM to 3xxxM. @RomanTeucher @zhangjcqq did you manage to solve the issue? 
<|||||>@amjltc295 Did you find any solution to the above issue?<|||||>> @amjltc295 Did you find any solution to the above issue? When I run Flask with `threaded=False`, it works.<|||||>> @amjltc295 Did you find any solution to the above issue? It seems that any Python process takes up more and more RAM over time. A co-worker of mine had the same issue with a different Python project. We run our applications in Docker containers that are limited in RAM, so they all end up at 100% after some time. Anyway, the application still works as it is supposed to, so we did not investigate further.<|||||>Try `return outputs.cpu()`.<|||||>Reporting that this issue still exists with the forward pass of BertModel, specifically the call to BertModel.forward(): I notice that system RAM usage increases on this line each iteration. Transformers v3.1.0, PyTorch v1.7.1, CUDA 11.0.221, cuDNN 8.0.5_0, RTX 3090. I am unable to run MNLI because of this: the RAM maxes out and the system crashes towards the end of the 3rd training epoch. I will do some more digging and report back if I find a solution.<|||||>Marking this. Still suffering from this problem in Aug. 2022. A solution would be highly appreciated.<|||||>I'm having the same issue running in Databricks with the following versions: transformers 4.25.1, pytorch 1.13.1+cu117, Nvidia Tesla T4
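For anyone landing here, below is a minimal sketch that pulls together the suggestions from this thread (`torch.no_grad()` plus detaching results and moving them off the device before returning them). It assumes a recent `transformers` release and an illustrative `encode_text` helper; it is not a confirmed fix for the growth reported above.

```python
import numpy as np
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained('bert-base-multilingual-cased')
model.eval()  # disable dropout for inference

def encode_text(text: str) -> np.ndarray:
    # Truncate so very long requests do not inflate activation memory
    tokens = tokenizer.tokenize(f"[CLS] {text} [SEP]")[:512]
    token_ids = tokenizer.convert_tokens_to_ids(tokens)
    with torch.no_grad():  # no autograd graph is built or kept alive
        outputs = model(torch.tensor([token_ids]))
    # Return a plain numpy array so no tensor (or CUDA) references survive the call
    return outputs[0].detach().cpu().numpy()
```

If the service still appears to grow under a multi-threaded Flask server, it may simply be allocator fragmentation across threads rather than a true leak, which could explain why `threaded=False` helped some commenters.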
transformers
196
closed
TODO statement on Question/Answering Model
Has this been confirmed? https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/pytorch_pretrained_bert/modeling.py#L1084
01-15-2019 01:56:48
01-15-2019 01:56:48
Not really, I've moved on to something else since I don't expect this to change the results significantly. I will remove the TODO.
transformers
195
closed
Potentially redundant learning rate scheduling
In the following two code snippets below: https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/examples/run_lm_finetuning.py#L570-L573 https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/examples/run_lm_finetuning.py#L611-L613 it appears that learning rate warmup is being done *twice*: once in the example file, and once inside the BertAdam class. Am I reading this wrong? Because I'm pretty sure the BertAdam class performs its own warm-up when initialized with those arguments. Here is an excerpt from the BertAdam class, where warm-up is also applied: https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/pytorch_pretrained_bert/optimization.py#L146-L150 This also applies to other examples, e.g. https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/examples/run_squad.py#L848-L851 https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/examples/run_squad.py#L909-L911
01-14-2019 09:10:06
01-14-2019 09:10:06
Hmm, could be the case indeed. What do you think about this @tholor?<|||||>As far as I can tell this was introduced in c8ea286048517d9072397d77f4de21b8483a4531 as a byproduct of adding float16 support, and was then copied to other example files as well.<|||||>I agree, there seems to be double LR scheduling. The applied LR is therefore lower than intended. A quick plot of the LR being set in the outer scope (i.e. in run_squad or run_lm_finetuning) vs. the inner one (in BERTAdam) shows this: ![lr_schedule_debug](https://user-images.githubusercontent.com/1563902/51321145-49df5100-1a62-11e9-8908-516aaf9362d4.png) In addition, I have noticed two further candidates for clean-up: 1. I don't see a reason why the function `warmup_linear()` is implemented in two places: in `optimization.py` and in each example script. 2. Is the method `optimizer.get_lr()` ever being called? It actually contains yet another LR schedule. https://github.com/huggingface/pytorch-pretrained-BERT/blob/f040a43cb3954e14dc47a815de012ac3f87a85d0/pytorch_pretrained_bert/optimization.py#L79-L92<|||||>There is also an additional problem that causes the learning rate to not be set correctly in run_classifier.py. I created a pull request for that (and the double warmup problem): #218 <|||||>Has anything been done about this double warmup bug?<|||||>Yes, @matej-svejda worked on this in https://github.com/huggingface/pytorch-pretrained-BERT/pull/218<|||||>I see that, but it isn't merged yet?<|||||>No, not yet. As you can see in the PR it's still WIP and he committed only 4 hours ago. If you need the fix urgently, you can apply the changes easily locally. It's quite a small fix.<|||||>Sorry, I forgot to check the time :)<|||||>By the way, how can I plot the BERT LR schedule like yours? If I use `print(optimizer.param_groups['lr'])`, the learning rate is always the value I initialized it with.<|||||>I have plotted `optimizer.param_groups[0]["lr"]` from here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/f040a43cb3954e14dc47a815de012ac3f87a85d0/examples/run_lm_finetuning.py#L610-L616 and `lr_scheduled` from here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/f040a43cb3954e14dc47a815de012ac3f87a85d0/pytorch_pretrained_bert/optimization.py#L145-L152 Your code above should actually throw an exception because `optimizer.param_groups` is a list. Try `optimizer.param_groups[0]["lr"]` or `lr_this_step`.<|||||>Ok this should be fixed in master now!
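To make the compounding concrete, here is a toy sketch (illustrative numbers, not taken from the repo) of what happens when the example script scales the learning rate and `BertAdam` then applies the same warmup factor again internally:

```python
def warmup_linear(x, warmup=0.002):
    # Same shape as the helper duplicated in optimization.py and the example scripts
    if x < warmup:
        return x / warmup
    return 1.0 - x

lr0, warmup, t_total = 3e-5, 0.1, 1000
for step in (10, 50, 100, 500, 900):
    progress = step / t_total
    intended = lr0 * warmup_linear(progress, warmup)       # schedule applied once
    applied = intended * warmup_linear(progress, warmup)   # outer loop + BertAdam
    print(f"step {step:4d}  intended {intended:.2e}  actually applied {applied:.2e}")
```

Because the warmup/decay factor is applied twice, the effective learning rate is `lr0` times the squared factor instead of the factor itself, which matches the gap between the two curves in the plot above.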
transformers
194
closed
run_classifier.py doesn't save any configurations and I can't load the trained model.
01-14-2019 07:16:07
01-14-2019 07:16:07
I trained a BertForSequenceClassification model on the CoLA dataset for binary classification. It saved only the eval_results.txt and pytorch_model.bin files. When I load the model again with: model = BertForSequenceClassification.from_pretrained('models/') it produces this error: with open(json_file, "r", encoding='utf-8') as reader: FileNotFoundError: [Errno 2] No such file or directory: 'models/bert_config.json' I trained the model using the command: export GLUE_DIR=data_dir_path; python run_classifier.py --task_name cola --do_train --do_eval --data_dir $GLUE_DIR/ --bert_model bert-base-multilingual-cased --max_seq_length 128 --train_batch_size 16 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir models/ Is there an error in my training command? How can I produce the config.json file so that the model loads successfully?<|||||>You can fetch the configuration from S3 like it's [done in the example](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L560). Alternatively, you can save the configuration with:
```python
with open('config.json', 'w') as f:
    f.write(model.config.to_json_string())
```
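For reference, here is a rough sketch of what the training script needs to write into `models/` so that `from_pretrained('models/')` finds everything it expects; the file names follow the `bert_config.json` / `pytorch_model.bin` convention from the error message, and the snippet assumes `model` is the trained model:

```python
import os
import torch

output_dir = 'models/'
# If the model is wrapped in DataParallel, save model.module instead of model
torch.save(model.state_dict(), os.path.join(output_dir, 'pytorch_model.bin'))
with open(os.path.join(output_dir, 'bert_config.json'), 'w') as f:
    f.write(model.config.to_json_string())

# With both files present, reloading works:
# model = BertForSequenceClassification.from_pretrained(output_dir)
```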
transformers
193
closed
Fix importing unofficial TF models
Importing unofficial TF models seems to be working well, at least for me. This PR resolves #50.
01-14-2019 04:50:00
01-14-2019 04:50:00
Thanks!
transformers
192
closed
Documentation Fixes
Fixes misnamed documentation comments in `run_squad.py` and `run_squad2.py`, and updates `README.md` for the new file-conversion syntax.
01-13-2019 15:56:58
01-13-2019 15:56:58
Closing this as I'm not sure it's right.
transformers
191
closed
lm_finetuning compatibility with Python 3.5
dicts are not ordered in Python 3.5 or prior, which is a cause of #175. This PR replaces one with a list, to keep its order.
01-13-2019 13:02:01
01-13-2019 13:02:01
Great, thanks! Nice to have Python 3.5 compatibility again here!
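As a side note on the underlying issue, here is a small illustration of the difference; the field names are made up for the example and are not the actual keys used in `run_lm_finetuning.py`:

```python
# In CPython 3.5 and earlier, plain dicts do not preserve insertion order,
# so iterating over one can yield keys in an arbitrary order:
fields = {"input_ids": 0, "input_mask": 1, "segment_ids": 2}
print(list(fields))  # order not guaranteed on Python 3.5

# A list of pairs (or collections.OrderedDict) keeps the order stable:
fields = [("input_ids", 0), ("input_mask", 1), ("segment_ids", 2)]
print([name for name, _ in fields])  # always insertion order
```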
transformers
190
closed
Fix documentation (missing backslashes)
This PR adds missing backslashes in LM Fine-tuning subsection in README.md.
01-13-2019 13:00:20
01-13-2019 13:00:20
Great, thanks!
transformers
189
closed
[bug fix] args.do_lower_case is always True
The "default=True" makes args.do_lower_case always True. ```python parser.add_argument("--do_lower_case", default=True, action='store_true') ```
01-13-2019 11:51:18
01-13-2019 11:51:18
Thanks @donglixp!
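For context, a minimal sketch of the corrected argument definition (the help text is just illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# With action='store_true' the default must stay False; combining it with
# default=True makes the flag True whether or not it is passed.
parser.add_argument("--do_lower_case", action='store_true',
                    help="Set this flag if you are using an uncased model.")

print(parser.parse_args([]).do_lower_case)                   # False
print(parser.parse_args(["--do_lower_case"]).do_lower_case)  # True
```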
transformers
188
closed
Weight Decay Fix Original Paper
Hi there! Is the weight decay fix from https://arxiv.org/abs/1711.05101? Thanks!
01-12-2019 20:22:45
01-12-2019 20:22:45
Yes
transformers
187
closed
Intermediate ##subword tokens shift the indices of masked words
```
----------------------------------> how much belan i havin my credit card and also debitcard
----------------------------------> ['how', 'much', 'belan', 'i', 'havin', 'my', 'credit', 'card', 'and', 'also', 'debitcard']
----------------------------------> ['**belan**', '**havin**']
----------------------------------> [2, 4]
----------------------------------> ['how', 'much', '**belan**', 'i', '**havin**', 'my', 'credit', 'card', 'and', 'also', 'debitcard']
----------------------------------> how much belan i havin my credit card and also debitcard
before_tokenized_text-------------> ['how', 'much', **'bela'**, **'##n'**, 'i', **'ha'**, **'##vin'**, 'my', 'credit', 'card', 'and', 'also', '**de'**, **'##bit',** '**##card']**
index_useless---------------------> [2, 4]
after_tokenized_text--------------> ['how', 'much', '[MASK]', '##n', '[MASK]', 'ha', '##vin', 'my', 'credit', 'card', 'and', 'also', 'de', '##bit', '##card']
########## ['more', 'most']
########## 2 <---------index_useless_length
########## 2 <---------predicted_words_len
########## how much [MASK] n [MASK] ha vin my credit card and also de bit card <---------tokenized_text
########## index_tk_aft [2, 4]
########## how much more n most ha vin my credit card and also de bit card
########## how much more n most ha vin my credit card and also de bit card <---------Result
```
I think, as you can see above, the misspelled words at positions [2, 4] are the ones I mask and try to predict. But WordPiece produces ## pieces such as '##n' and '##vin', which shift the indices and spoil the final prediction output. I have tried many workarounds, but none of them helped so far. **How can I predict and recover two or more masked words?** Thanks.
01-11-2019 11:35:06
01-11-2019 11:35:06
Hi, I don't think you can do that in a clean way, sorry. That how BERT is trained.<|||||>> That how BERT is trained i was pretrained our **bert-base-uncased** model with own dataset. Batch_size=32 max_seq_length=128 > I don't think you can do that in a clean way you asked me "what was the problem now?" i think so. normal word (ex: 'cadd') splits into multiple ##string (ex: 'cad', '##d'). it was affected my mask word output.(like index dynamically changed) step 1: tokenize tokenized_text = tokenizer.tokenize(source_transcript) step 2:replace masked word as "[MASK]" in index position tokenized_text[index] ='[MASK]' step 3:predicting mask word predicted_index = torch.argmax(predictions[0, masked_index]).item() predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0] ex: original text:how to apply for **cadd** (['how', 'to', 'apply', 'for', 'cadd']) masked word: cadd index:4 step 1: ['how', 'le', 'comply', 'for', 'cad', '##d'] note: index length increased step 2: ['how', 'to', 'apply', 'for', '[MASK]', '##d'] note: '[MASK]' index increased step 3: result: ['sb'] how to apply for **sb d** actually i was pretrained model for this as "how to apply for card" but it was not predicting well. Main issue is, that ##string will repeats at intermediate, it collapses all index for mask words. And this **Two to Three mask word prediction at the same sentence also very complex**. `Two to Three mask word prediction at the same sentence also very complex ` how to solve this problem @thomwolf sir<|||||>> > That how BERT is trained > > i was pretrained our **bert-base-uncased** model with own dataset. > Batch_size=32 > max_seq_length=128 > > > I don't think you can do that in a clean way > > you asked me "what was the problem now?" i think so. > normal word (ex: 'cadd') splits into multiple ##string (ex: 'cad', '##d'). > it was affected my mask word output.(like index dynamically changed) > > step 1: tokenize > tokenized_text = tokenizer.tokenize(source_transcript) > > step 2:replace masked word as "[MASK]" in index position > tokenized_text[index] ='[MASK]' > > step 3:predicting mask word > predicted_index = torch.argmax(predictions[0, masked_index]).item() > predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0] > > ex: > original text:how to apply for **cadd** (['how', 'to', 'apply', 'for', 'cadd']) > masked word: cadd > index:4 > > step 1: > ['how', 'le', 'comply', 'for', 'cad', '##d'] > note: index length increased > > step 2: > ['how', 'to', 'apply', 'for', '[MASK]', '##d'] > note: '[MASK]' index increased > > step 3: > result: ['sb'] > how to apply for **sb d** > > actually i was pretrained model for this as "how to apply for card" > but it was not predicting well. > > Main issue is, that ##string will repeats at intermediate, it collapses all index for mask words. > > And this **Two to Three mask word prediction at the same sentence also very complex**. > > `Two to Three mask word prediction at the same sentence also very complex ` > > how to solve this problem @thomwolf sir sir @thomwolf any suggestions. Thanks.
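Since the underlying question keeps coming back, here is a rough sketch (not an official recipe) of one way to keep mask positions aligned after WordPiece splitting: tokenize word by word and replace every piece of a masked word with a single `[MASK]`, so no stray `##` fragments are left behind:

```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

words = ['how', 'much', 'belan', 'i', 'havin', 'my', 'credit', 'card']
mask_word_indices = [2, 4]  # word-level positions to mask

tokens, mask_token_positions = [], []
for word_idx, word in enumerate(words):
    if word_idx in mask_word_indices:
        # Collapse all WordPiece fragments of this word into one [MASK]
        mask_token_positions.append(len(tokens))
        tokens.append('[MASK]')
    else:
        tokens.extend(tokenizer.tokenize(word))

print(tokens)                # e.g. [..., '[MASK]', 'i', '[MASK]', 'my', ...]
print(mask_token_positions)  # token-level indices to read predictions from
```

Note that even with aligned indices, BERT fills each `[MASK]` independently, so predicting several masked words in one sentence remains approximate.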