Columns: repo (string, 1 class), number (int64, 1-25.3k), state (string, 2 classes), title (string, 1-487 chars), body (string, 0-234k chars), created_at (string, 19 chars), closed_at (string, 19 chars), comments (string, 0-293k chars)
transformers
186
closed
BertOnlyMLMHead is a duplicate of BertLMPredictionHead
https://github.com/huggingface/pytorch-pretrained-BERT/blob/35becc6d84f620c3da48db460d6fb900f2451782/pytorch_pretrained_bert/modeling.py#L387-L394 I don't understand how it is useful to wrap the BertLMPredictionHead class like that; perhaps it was forgotten in some refactoring? I can do a PR if you confirm it can be replaced. BertOnlyMLMHead is only used in BertForMaskedLM.
01-11-2019 10:35:36
01-11-2019 10:35:36
That's a legacy of how I converted the TF code (by reproducing the TF scope architecture with PyTorch classes). We can't really change that now without re-converting all the TF code. If you want a more concise version of PyTorch BERT, you can check [pytorchic-bert](https://github.com/dhlee347/pytorchic-bert).
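For context, the wrapper being discussed is essentially a pass-through module; its only job is to keep the parameter path `cls.predictions.*` identical to the original TF variable scopes so converted checkpoints load cleanly. A minimal sketch of the relationship (paraphrased from the linked modeling.py, not copied verbatim):

```python
import torch.nn as nn
from pytorch_pretrained_bert.modeling import BertLMPredictionHead

class BertOnlyMLMHead(nn.Module):
    # Thin wrapper: everything is delegated to BertLMPredictionHead, but the
    # attribute is named "predictions" so the converted TF checkpoint names still match.
    def __init__(self, config, bert_model_embedding_weights):
        super(BertOnlyMLMHead, self).__init__()
        self.predictions = BertLMPredictionHead(config, bert_model_embedding_weights)

    def forward(self, sequence_output):
        return self.predictions(sequence_output)
```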
transformers
185
closed
got an unexpected keyword argument 'cache_dir'
I used the following code to run the job: ` export GLUE_DIR=./data python3 run_classifier.py \ --task_name COLA \ --do_train \ --do_eval \ --do_lower_case \ --data_dir $GLUE_DIR/ \ --bert_model bert-large-uncased \ --max_seq_length 20 \ --train_batch_size 10 \ --learning_rate 2e-5 \ --num_train_epochs 2.0 \ --output_dir ./output` Then, I got the output: `01/11/2019 02:02:55 - INFO - __main__ - device: cpu n_gpu: 0, distributed training: False, 16-bits training: False 01/11/2019 02:02:56 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt from cache at /Users/chiyuzhang/.pytorch_pretrained_bert/9b3c03a36e83b13d5ba95ac965c9f9074a99e14340c523ab405703179e79fc46.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 01/11/2019 02:02:56 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased.tar.gz from cache at /Users/chiyuzhang/.pytorch_pretrained_bert/214d4777e8e3eb234563136cd3a49f6bc34131de836848454373fa43f10adc5e.abfbb80ee795a608acbf35c7bf2d2d58574df3887cdd94b355fc67e03fddba05 01/11/2019 02:02:56 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /Users/chiyuzhang/.pytorch_pretrained_bert/214d4777e8e3eb234563136cd3a49f6bc34131de836848454373fa43f10adc5e.abfbb80ee795a608acbf35c7bf2d2d58574df3887cdd94b355fc67e03fddba05 to temp dir /var/folders/j0/_kd2ppm53wnb6pjypwy3gc_00000gn/T/tmpynwe_15z 01/11/2019 02:03:06 - INFO - pytorch_pretrained_bert.modeling - Model config { "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "initializer_range": 0.02, "intermediate_size": 4096, "max_position_embeddings": 512, "num_attention_heads": 16, "num_hidden_layers": 24, "type_vocab_size": 2, "vocab_size": 30522 } Traceback (most recent call last): File "run_classifier.py", line 619, in <module> main() File "run_classifier.py", line 455, in main num_labels = 2) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 502, in from_pretrained model = cls(config, *inputs, **kwargs) TypeError: __init__() got an unexpected keyword argument 'cache_dir'` Could you please help me to fix this problem?
01-11-2019 10:08:56
01-11-2019 10:08:56
You should update to the latest version of `pytorch_pretrained_bert` (`pip install pytorch_pretrained_bert --upgrade`).
transformers
184
closed
Python 3.5 + Torch 1.0 does not work
When running `run_lm_finetuning.py` to fine-tune language model with default settings (see command below), sometimes I could run successfully, but sometimes I received different errors like `RuntimeError: The size of tensor a must match the size of tensor b at non-singleton dimension 1`, `RuntimeError: Creating MTGP constants failed. at /pytorch/aten/src/THC/THCTensorRandom.cu:35` or `RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)`. This problem can be solved when updating `python3.5` to `python3.6`. ``` python run_lm_finetuning.py \ --bert_model ~/bert/models/bert-base-uncased/ \ --do_train \ --train_file ~/bert/codes/samples/sample_text.txt \ --output_dir ~/bert/exp/lm \ --num_train_epochs 5.0 \ --learning_rate 3e-5 \ --train_batch_size 32 \ --max_seq_length 128 \ --on_memory ```
01-11-2019 09:43:43
01-11-2019 09:43:43
Thank you @yuhui-zh15 sir. I will check.<|||||>This should be fixed on master now (thanks to #191).
transformers
183
closed
Adding OpenAI GPT pre-trained model
Adding OpenAI GPT pretrained model.
01-11-2019 08:03:57
01-11-2019 08:03:57
#254 is now the main PR for the inclusion of OpenAI GPT. Closing this PR.
transformers
182
closed
add do_lower_case arg and adjust model saving for lm finetuning.
Fixes for #177
01-11-2019 07:37:13
01-11-2019 07:37:13
transformers
181
closed
All about the training speed in classification job
I run the bert-base-uncased model on the 'mrpc' task on Ubuntu with an NVIDIA P4000 (8 GB). It's a classification problem, and I use the default demo data, but the training speed is only about 2 batches per second. Is something wrong? I think it may be too slow, but I cannot find out why. I have another task with 1,300,000 examples that costs 6 hours per epoch.
01-11-2019 06:27:39
01-11-2019 06:27:39
Maybe try to use a bigger batch size or try fp16 training? Please refer to the [detailed instructions in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#examples).
transformers
180
closed
Weights not initialized from pretrained model
Thanks for your awesome work! When I execute the following code for a named entity recognition task: `model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=num_labels)` it outputs the following information: > Weights of BertForTokenClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias'] Weights from pretrained model not used in BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] What puzzles me is that the parameters of the classifier are not initialized.
01-11-2019 06:03:47
01-11-2019 06:03:47
Hi! Those messages are correct, the pretrained weights that have been released by Google Brain are just the ones of the core network. They did not release task specific weights. To get a model that solves a specific classification task, you would have to train one yourself or get it from someone else. @thomwolf There have been multiple issues about this specific behavior, maybe we should add some kind of text either as a print while loading the model or in the documentation. I would be happy to do it. What would you prefer?<|||||>Oh, I see, I will train the model with my own dataset, thank you for your answer.<|||||>Yes you are right @rodgzilla we should detail a bit the messages in [modeling.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L545) to say that `These weights will be trained from scratch`.
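To make the consequence concrete: the `classifier.*` weights start from random initialization and have to be learned on your labelled NER data. A minimal fine-tuning sketch (the label count, dummy tensors, and single optimization step are illustrative assumptions, not part of the original issue):

```python
import torch
from pytorch_pretrained_bert import BertForTokenClassification
from pytorch_pretrained_bert.optimization import BertAdam

num_labels = 9  # e.g. the CoNLL-2003 tag set size (assumption)
model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=num_labels)
optimizer = BertAdam(model.parameters(), lr=5e-5)

# Dummy batch: 2 sequences of 16 wordpiece ids with one label per position.
input_ids = torch.randint(0, 30522, (2, 16))
attention_mask = torch.ones_like(input_ids)
labels = torch.randint(0, num_labels, (2, 16))

loss = model(input_ids, attention_mask=attention_mask, labels=labels)
loss.backward()
optimizer.step()  # this step is what actually trains the randomly initialized classifier head
```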
transformers
179
closed
Fix it to run properly even without the `--do_train` param.
It was modified similarly to `run_classifier.py` and fixed to run properly even without the `--do_train` param.
01-10-2019 12:51:50
01-10-2019 12:51:50
Thanks!
transformers
178
closed
Can we use BERT for Punctuation Prediction?
Can we use the pre-trained BERT model for Punctuation Prediction for Conversational Speech? Let's say, punctuating an ASR output?
01-10-2019 07:25:30
01-10-2019 07:25:30
Hi, I don't really know. I guess you should just give it a try.
transformers
177
closed
run_lm_finetuning.py does not define a do_lower_case argument
The file references `args.do_lower_case`, but doesn't have the corresponding `parser.add_argument` call. As an aside, has anyone successfully applied LM fine-tuning for a downstream task (using this code, or maybe using the original tensorflow implementation)? I'm not even sure if the code will run in its current state. And after fixing this issue locally, I've had no luck using the output from fine-tuning: I have a model that gets state-of-the-art results when using pre-trained BERT, but after fine-tuning it performs no better than omitting BERT/pre-training entirely! I don't know whether to suspect that there might be other bugs in the example code, or if the hyperparameters in the README are just a very poor starting point for what I'm doing.
01-10-2019 05:01:17
01-10-2019 05:01:17
On a related note: I see there is learning rate scheduling happening [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L608), but also inside the BertAdam class. Is this not redundant and erroneous? For reference I'm not using FP16 training, which has its own separate optimizer that doesn't appear to perform redundant learning rate scheduling. The same is true for other examples such as SQuAD (maybe it's the cause of #168, where results were reproduced only when using float16 training?)<|||||>Here also @tholor, maybe you have some feedback from using the fine-tuning script?<|||||>I figured out why I was seeing such poor results while attempting to fine-tune: the example saves `model.bert` instead of `model` to `pytorch_model.bin`, so the resulting file can't just be zipped up and loaded with `from_pretrained`.<|||||>I have just fixed the `do_lower_case` bug and adjusted the code for model saving to be in line with the other examples (see #182 ). I hope this solves your issue. Thanks for reporting! > As an aside, has anyone successfully applied LM fine-tuning for a downstream task (using this code, or maybe using the original tensorflow implementation)? We are currently using a fine-tuned model for a rather technical corpus and see improvements in terms of the extracted document embeddings in contrast to the original pre-trained BERT. However, we haven't done intense testing of hyperparameters or performance comparisons with the original pre-trained model yet. This is all still work in progress on our side. If you have results that you can share in public, I would be interested to see the difference you achieve. In general, I would only expect improvements for target corpora that have a very different language style than Wiki/OpenBooks. > On a related note: I see there is learning rate scheduling happening here, but also inside the BertAdam class. We have only trained with fp16 so far. @thomwolf have you experienced issues with LR scheduling in the other examples? Just copied the code from there.<|||||>Thanks for fixing these! After addressing the save/load mismatch I'm seeing downstream performance comparable to using pre-trained BERT. I just got a big scare when the default log level wasn't high enough to notify me that weights were being randomly re-initialized instead of loaded from the file I specified. It's still too early for me to tell if there are actual *benefits* to fine-tuning, though.<|||||>All this looks fine on master now. Please open a new issue (or re-open this one) if there are other issues.<|||||>I saw on https://github.com/huggingface/pytorch-pretrained-BERT/issues/126#issuecomment-451910577 that there's potentially some documentation effort underway beyond the README. Thanks a lot for this! I wonder if there's the possibility to add more detail about how to properly prepare a custom corpus (e.g. to avoid catastrophical forgetting) finetune the models on. Asking this as my (few, so far) attempts to finetune on other corpora have been destructive for performance on GLUE tasks when compared to the original models (I just discovered this issue, maybe the things you mention above affected me too). Kudos @thomwolf @tholor for all your work on this!
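For readers hitting the save/load mismatch described above, a hedged sketch of the saving pattern the other examples follow (the path is a placeholder, and `BertForPreTraining` is just a stand-in for whatever model class was fine-tuned):

```python
import os
import torch
from pytorch_pretrained_bert import BertForPreTraining

output_dir = "./lm_finetuned"  # placeholder
os.makedirs(output_dir, exist_ok=True)

# Stand-in for the model produced by the fine-tuning script.
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# Save the whole model (not just model.bert), unwrapping DataParallel if present.
model_to_save = model.module if hasattr(model, "module") else model
torch.save(model_to_save.state_dict(), os.path.join(output_dir, "pytorch_model.bin"))

# Reload later by handing the saved state dict back to from_pretrained.
state_dict = torch.load(os.path.join(output_dir, "pytorch_model.bin"), map_location="cpu")
model = BertForPreTraining.from_pretrained("bert-base-uncased", state_dict=state_dict)
```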
transformers
176
closed
Add [CLS] and [SEP] tokens in Usage
Thank you for this great job. In the Usage section, the `[CLS]` and `[SEP]` tokens should be added at the beginning and end of `tokenized_text`? ``` # Tokenized input text = "Who was Jim Henson ? Jim Henson was a puppeteer" tokenized_text = tokenizer.tokenize(text) ``` In the current example, if the first token is masked (this position should be reserved for `[CLS]`), the result will be strange. Thanks.
01-09-2019 09:41:58
01-09-2019 09:41:58
You are right, I'll fix the readme<|||||>So, just to clarify, I should add '[CLS]' and '[SEP]' to the beginning and end of each utterance respectively, and it's a bug in the examples that they don't do this?<|||||>@hughperkins did you get any clarification on this?
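A short sketch of the corrected usage, with the special tokens added around the tokenized text before masking (it mirrors the readme usage example; the masked position is arbitrary):

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

text = "Who was Jim Henson ? Jim Henson was a puppeteer"
tokenized_text = ["[CLS]"] + tokenizer.tokenize(text) + ["[SEP]"]  # special tokens added

masked_index = 7                      # mask a token inside the sentence, never position 0
tokenized_text[masked_index] = "[MASK]"

tokens_tensor = torch.tensor([tokenizer.convert_tokens_to_ids(tokenized_text)])
with torch.no_grad():
    predictions = model(tokens_tensor)            # shape (1, seq_len, vocab_size)
predicted_id = predictions[0, masked_index].argmax().item()
print(tokenizer.convert_ids_to_tokens([predicted_id])[0])
```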
transformers
175
closed
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
sir i was pretrained for our BERT-Base model for Multi-GPU training 8 GPUs. preprocessing succeed but next step training it shown error. in run_lm_finetuning.py. -- `python3 run_lm_finetuning.py --bert_model bert-base-uncased --do_train --train_file vocab007.txt --output_dir models --num_train_epochs 5.0 --learning_rate 3e-5 --train_batch_size 32 --max_seq_length 128 ` ``` Traceback (most recent call last): File "run_lm_finetuning.py", line 646, in <module> main() File "run_lm_finetuning.py", line 594, in main loss = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next) File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 143, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 153, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply raise output File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker output = module(*input, **kwargs) File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/pytorch_pretrained_bert/modeling.py", line 695, in forward output_all_encoded_layers=False) File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/pytorch_pretrained_bert/modeling.py", line 626, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/pytorch_pretrained_bert/modeling.py", line 187, in forward seq_length = input_ids.size(1) RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1) ``` Thanks.
01-09-2019 07:26:46
01-09-2019 07:26:46
Sir how to resolve this? i am beginner for pytorch. Thanks.<|||||>I will have a look, I am not familiar with `run_lm_finetuning` yet. In the meantime maybe @tholor has an advice?<|||||>Haven't seen this error before, but how does your training corpus "vocab007.txt" look like? Is training working successfully for the file "samples/sample_text.txt"?<|||||>Sir my vocab007.txt is a my own text sentences same as samples/sample_text.txt. but I won't tested before this sample _text.txt. directly i am put training to my vocab007.txt Thank you so much @thomwolf @tholor sir. <|||||>Not sure if I understood your last message. Is this solved? <|||||> @tholor sir, stil now i am not solving this issue. > Is training working successfully for the file "samples/sample_text.txt"? No, i am not train the file "samples/sample_text.txt" > how does your training corpus "vocab007.txt" look like? this is line by line sentence like file "samples/sample_text.txt" sir i think this shape issue. batch vice split datas for multi gpu. that time this issue occurred. sir any suggestion? how to resolve is bug. thanks.<|||||>I cannot reproduce your error. Just tested again with a dummy corpus (referenced in the readme) on a 4x P100 machine using the same parameters as you: ``` python3 run_lm_finetuning.py --bert_model bert-base-uncased --do_train --train_file ../samples/small_wiki_sentence_corpus.txt --output_dir models --num_train_epochs 1.0 --learning_rate 3e-5 --train_batch_size 32 --max_seq_length 128 ``` Training just started normally. I would recommend to: 1) Check your local setup and try to run with the above corpus (download here). If this doesn't work, there's something wrong with your setup (e.g. CUDA) 2) If 1 works, examine your training corpus "vocab007.txt". I suppose there's something wrong here causing wrong `input_ids`. A good starting point will be the logs that you see in the beginning of model training and print some training examples (including `input_ids`). They should look something like this: ``` 01/11/2019 08:20:24 - INFO - __main__ - ***** Running training ***** 01/11/2019 08:20:24 - INFO - __main__ - Num examples = 476462 01/11/2019 08:20:24 - INFO - __main__ - Batch size = 32 01/11/2019 08:20:24 - INFO - __main__ - Num steps = 14889 Epoch: 0%| | 0/1 [00:00<?, ?it/s] 01/11/2019 08:20:24 - INFO - __main__ - *** Example *** | 0/14890 [00:00<?, ?it/s] 01/11/2019 08:20:24 - INFO - __main__ - guid: 0 01/11/2019 08:20:24 - INFO - __main__ - tokens: [CLS] [MASK] jp ##g mini ##at ##ur [MASK] [UNK] [MASK] eine ##m [MASK] [UNK] mit [UNK] . 
[SEP] [UNK] [UNK] [MASK] jp ##g mini ##at ##ur [UNK] [MASK] [MASK] ##t eine [UNK] in [UNK] [UNK] - [UNK] [UNK] [UNK] [UNK] in [UNK] [SEP] 01/11/2019 08:20:24 - INFO - __main__ - input_ids: 101 103 16545 2290 7163 4017 3126 103 100 103 27665 2213 103 100 10210 100 1012 102 100 100 103 16545 2290 7163 4017 3126 100 103 103 2102 27665 100 1999 100 100 1011 100 100 100 100 1999 100 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/11/2019 08:20:24 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/11/2019 08:20:24 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 01/11/2019 08:20:24 - INFO - __main__ - LM label: [-1, 1012, -1, -1, -1, -1, -1, 100, -1, 1999, -1, -1, 100, -1, -1, -1, -1, -1, -1, -1, 1012, -1, -1, -1, -1, -1, -1, 3413, 3771, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] 01/11/2019 08:20:24 - INFO - __main__ - Is next sentence label: 0 ```<|||||>> Check your local setup and try to run with the above corpus ok @tholor sir, now i will check.<|||||>Hi, it can be solved by using `python3.6`. See #184 <|||||>Thanks sir.<|||||>Fixed on master now (compatible with Python 3.5 again)
transformers
174
closed
Added Squad 2.0
Accidentally closed the last pull request. Created a separate file for SQuAD 2.0. Run with: python3 run_squad.py --bert_model bert-large-uncased_up --do_predict --do_lower_case --train_file squad/train-v2.0.json --predict_file squad/dev-v2.0.json --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir squad2_diff --train_batch_size 24 --fp16 --loss_scale 128 --null_score_diff_threshold -2.6929588317871094 If the null score is not defined, the default value is 0.0.
01-08-2019 23:25:40
01-08-2019 23:25:40
Great, thanks @abeljim. Do you have the associated results when you run this command?<|||||>python run_squad2.py \ --bert_model bert-large-uncased \ --do_train \ --do_predict \ --do_lower_case \ --train_file $SQUAD_DIR/train-v2.0.json \ --predict_file $SQUAD_DIR/dev-v2.0.json \ --train_batch_size 24 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir ./debug_squad/ \ --fp16 \ --loss_scale 128 \ --null_score_diff_threshold -2.6929588317871094 { "exact": 76.62764255032427, "f1": 79.22523967450329, "total": 11873, "HasAns_exact": 68.31983805668017, "HasAns_f1": 73.52248155455082, "HasAns_total": 5928, "NoAns_exact": 84.9116904962153, "NoAns_f1": 84.9116904962153, "NoAns_total": 5945 } This is the command I used and the results<|||||>how much time does it take to train?
transformers
173
closed
What's the MLM accuracy of the pretrained model?
What's the MLM accuracy of the pretrained model? In my case, I find the scores of the top-10 candidates are very close, but most are not suitable. Is this the same prediction as Google's original project? _Originally posted by @l126t in https://github.com/huggingface/pytorch-pretrained-BERT/issues/155#issuecomment-452195676_
01-08-2019 07:08:35
01-08-2019 07:08:35
Hi, we didn't evaluate this metric. If you do, feel free to share the results. Regarding the comparison between the Google and PyTorch implementations, please refer to the included notebooks and the associated section of the readme.
transformers
172
closed
Never split some texts.
I have noticed BERT tokenizes text in two steps: 1. punctuation: split text into tokens; 2. wordpiece: split tokens into word pieces. Some texts such as `"[UNK]"` are supposed to be left as they are. However, they become `["[", "UNK", "]"]` or something like this. This PR solves the above problem.
01-08-2019 03:02:09
01-08-2019 03:02:09
Please have a look. :)<|||||>Looks good indeed, thanks @WrRan!
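A small usage sketch of the fix, assuming the parameter is exposed as `never_split` on the basic tokenizer (the name follows this PR and may differ from the released API). Without protection, a marker like `[UNK]` is split on the bracket punctuation into something like `['[', 'unk', ']']`; listing it in `never_split` keeps it intact:

```python
from pytorch_pretrained_bert.tokenization import BasicTokenizer

# Tokens listed in never_split are returned as-is instead of being split
# on punctuation (they also skip lower-casing).
tokenizer = BasicTokenizer(do_lower_case=True,
                           never_split=("[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]"))
print(tokenizer.tokenize("[UNK] is a special token"))  # expected: ['[UNK]', 'is', 'a', 'special', 'token']
```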
transformers
171
closed
LayerNorm initialization
The LayerNorm gamma and beta should be initialized by .fill_(1.0) and .zero_(). reference links: https://github.com/tensorflow/tensorflow/blob/989e78c412a7e0f5361d4d7dfdfb230c8136e749/tensorflow/contrib/layers/python/layers/layers.py#L2298 https://github.com/tensorflow/tensorflow/blob/989e78c412a7e0f5361d4d7dfdfb230c8136e749/tensorflow/contrib/layers/python/layers/layers.py#L2308
01-07-2019 07:51:47
01-07-2019 07:51:47
Yes, the weights are overwritten when a pretrained model is loaded. But if the code is used to pretrain a model from scratch, it might affect performance. (Related issue: https://github.com/huggingface/pytorch-pretrained-BERT/issues/143)<|||||>Great, thanks @donglixp!
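For the from-scratch case, a minimal sketch of the initialization being proposed (a plain layer-norm module, not the library's exact class, with the scale filled with 1.0 and the bias zeroed as in the referenced TF code):

```python
import torch
import torch.nn as nn

class TFStyleLayerNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-12):
        super(TFStyleLayerNorm, self).__init__()
        self.gamma = nn.Parameter(torch.empty(hidden_size))
        self.beta = nn.Parameter(torch.empty(hidden_size))
        self.eps = eps
        self.gamma.data.fill_(1.0)  # matches tf.ones_initializer for the scale
        self.beta.data.zero_()      # matches tf.zeros_initializer for the bias

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)
        var = (x - mean).pow(2).mean(-1, keepdim=True)
        return self.gamma * (x - mean) / torch.sqrt(var + self.eps) + self.beta
```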
transformers
170
closed
How to pretrain my own data with this pytorch code?
I wonder how to pretrain with my own data.
01-07-2019 07:22:53
01-07-2019 07:22:53
Sir @Gpwner, I think you have to follow the pre-training instructions in the google/bert repo (https://github.com/google-research/bert#pre-training-with-bert) and then convert the TensorFlow model to PyTorch.<|||||>A pre-training script is now included in `master` thanks to @tholor's PR #124<|||||>> A pre-training script is now included in `master` thanks to @tholor's PR #124 Thanks!! Does it support multiple GPUs? Because the official script does not support multiple GPUs.<|||||>It does (you can read more about it [here in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#lm-fine-tuning))<|||||>> It does Great job, thanks~<|||||>All thanks should go to @tholor :-)
transformers
169
closed
Update modeling.py to fix typo
Fix typo in the documentation for the description of not using masked_lm_labels
01-06-2019 22:13:44
01-06-2019 22:13:44
Thanks @ichn-hu, this issue was resolved in a previous PR.
transformers
168
closed
Cannot reproduce the result of run_squad 1.1
I trained for 5 epochs with learning rate 5e-5, but my evaluation result is {'exact_match': 32.04351939451277, 'f1': 36.53574674513405}. What is the problem?
01-06-2019 06:34:47
01-06-2019 06:34:47
I can reproduce the results; learning rate is 3e-5, epochs is 2.0<|||||>By using fp16, the F1 is 90.8<|||||>> by using fp16, the f1 is 90.8 So the key is to set fp16 to True?<|||||>@hmt2014 could you give the exact command line that you use to train your model?<|||||>Yes, please use the command line example indicated [here](https://github.com/huggingface/pytorch-pretrained-BERT#squad) in the readme for SQuAD.
transformers
167
closed
Question about hidden layers from pretained model
In the example shown to get hidden states https://github.com/huggingface/pytorch-pretrained-BERT#usage I want to confirm - the final hidden layer corresponds to the last element of `encoded_layers`, right?
01-05-2019 07:09:20
01-05-2019 07:09:20
Yes you are right. The first value returned is the output for `BertEncoder.forward`. https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/pytorch_pretrained_bert/modeling.py#L623-L634
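A short sketch confirming this programmatically (for `bert-base-uncased`, `encoded_layers` holds one tensor per encoder layer and the last entry is the final hidden state):

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

tokens = ["[CLS]"] + tokenizer.tokenize("Jim Henson was a puppeteer") + ["[SEP]"]
tokens_tensor = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    encoded_layers, pooled_output = model(tokens_tensor)

assert len(encoded_layers) == 12           # one entry per encoder layer for bert-base
final_hidden_state = encoded_layers[-1]    # (batch, seq_len, 768): the top layer's output
```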
transformers
166
closed
Fix error when `bert_model` param is a path or URL.
An error occurs when the `bert_model` param is a path or URL. Therefore, if it is a path, specify the last path component to prevent the error.
01-05-2019 02:43:06
01-05-2019 02:43:06
Ok, looks good to me, thanks @likejazz
transformers
165
closed
fixed model names in help string
Set correct model names according to modeling.py
01-04-2019 15:49:46
01-04-2019 15:49:46
Hi Oliver, I've already created a pull request which fixes this problem for all the example files (#156). Cheers!<|||||>Ok, great.
transformers
164
closed
pretrained model
Does the downloaded pretrained model include word embeddings? I do not see any embeddings in your code. Please clarify.
01-04-2019 14:20:49
01-04-2019 14:20:49
All the code related to word embeddings is located there https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/pytorch_pretrained_bert/modeling.py#L172-L200 If you want to access pretrained embeddings, the easier thing to do would be to load a pretrained model and extract its embedding matrices.<|||||>> All the code related to word embeddings is located there > > [pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/pytorch_pretrained_bert/modeling.py#L172-L200) > > Lines 172 to 200 in [8da280e](/huggingface/pytorch-pretrained-BERT/commit/8da280ebbeca5ebd7561fd05af78c65df9161f92) > > class BertEmbeddings(nn.Module): > """Construct the embeddings from word, position and token_type embeddings. > """ > def __init__(self, config): > super(BertEmbeddings, self).__init__() > self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size) > self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) > self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) > > # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load > # any TensorFlow checkpoint file > self.LayerNorm = BertLayerNorm(config.hidden_size, eps=1e-12) > self.dropout = nn.Dropout(config.hidden_dropout_prob) > > def forward(self, input_ids, token_type_ids=None): > seq_length = input_ids.size(1) > position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device) > position_ids = position_ids.unsqueeze(0).expand_as(input_ids) > if token_type_ids is None: > token_type_ids = torch.zeros_like(input_ids) > > words_embeddings = self.word_embeddings(input_ids) > position_embeddings = self.position_embeddings(position_ids) > token_type_embeddings = self.token_type_embeddings(token_type_ids) > > embeddings = words_embeddings + position_embeddings + token_type_embeddings > embeddings = self.LayerNorm(embeddings) > embeddings = self.dropout(embeddings) > return embeddings > If you want to access pretrained embeddings, the easier thing to do would be to load a pretrained model and extract its embedding matrices. oh I have seen this code these days . and from this code I think it dose not use the pretrained embedding paras , and what do you mean by load and extract a pretrained model ???? Is it from the original supplies<|||||>```python In [1]: from pytorch_pretrained_bert import BertModel In [2]: model = BertModel.from_pretrained('bert-base-uncased') In [3]: model.embeddings.word_embeddings Out[3]: Embedding(30522, 768) ``` This field of the `BertEmbeddings` class contains the pretrained embeddings. It gets set by calling `BertModel.from_pretrained`.<|||||>Thanks Gregory that the way to go indeed!
transformers
163
closed
TypeError: Class advice impossible in Python3
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-1-ee86003eab97> in <module>() ----> 1 from pytorch_pretrained_bert import BertTokenizer /opt/conda/envs/py3/lib/python3.6/site-packages/pytorch_pretrained_bert/__init__.py in <module>() 1 __version__ = "0.4.0" 2 from .tokenization import BertTokenizer, BasicTokenizer, WordpieceTokenizer ----> 3 from .modeling import (BertConfig, BertModel, BertForPreTraining, 4 BertForMaskedLM, BertForNextSentencePrediction, 5 BertForSequenceClassification, BertForMultipleChoice, /opt/conda/envs/py3/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py in <module>() 152 153 try: --> 154 from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm 155 except ImportError: 156 print("Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.") /opt/conda/envs/py3/lib/python3.6/site-packages/apex/__init__.py in <module>() 16 from apex.exceptions import (ApexAuthSecret, 17 ApexSessionSecret) ---> 18 from apex.interfaces import (ApexImplementation, 19 IApex) 20 from apex.lib.libapex import (groupfinder, /opt/conda/envs/py3/lib/python3.6/site-packages/apex/interfaces.py in <module>() 8 pass 9 ---> 10 class ApexImplementation(object): 11 """ Class so that we can tell if Apex is installed from other 12 applications /opt/conda/envs/py3/lib/python3.6/site-packages/apex/interfaces.py in ApexImplementation() 12 applications 13 """ ---> 14 implements(IApex) /opt/conda/envs/py3/lib/python3.6/site-packages/zope/interface/declarations.py in implements(*interfaces) 481 # the coverage for this block there. :( 482 if PYTHON3: --> 483 raise TypeError(_ADVICE_ERROR % 'implementer') 484 _implements("implements", interfaces, classImplements) 485 TypeError: Class advice impossible in Python3. Use the @implementer class decorator instead.
01-04-2019 11:23:43
01-04-2019 11:23:43
Hi, I came across this error after running `import pytorch_pretrained_bert`. My configurations are as follows: torch version 1.0.0 python version 3.6 cuda 9.2<|||||>I uninstalled the old version of apex and reinstalled a new version. It worked. Thanks. git clone https://www.github.com/nvidia/apex cd apex python setup.py install<|||||>I still have the problem in Google Colab <|||||>> I still have the problem in Google Colab Hello! I also had the problem and now I could solve it. Please install apex exactly as described above: git clone https://www.github.com/nvidia/apex cd apex python setup.py install Double check the following: The git command creates a folder called apex. In this folder is another folder called apex. This folder is the folder of interest. Please rename the folder on the top level (e.g. apex-2) and move the lower apex folder to the main level. Then python will also find the folder and it should work. <img width="316" alt="Bildschirmfoto 2020-05-26 um 17 31 44" src="https://user-images.githubusercontent.com/25347417/82919812-cba88e00-9f76-11ea-9d35-8918b9d83f1c.png"> Make sure that you have the version (0.1). Double check it with: "!pip list". <|||||>The following command did the job for me (based on @fbaeumer's answer): `pip install git+https://www.github.com/nvidia/apex`
transformers
162
closed
In BertTokenizer, regardless of do_lower_case
Fixes the issue where do_lower_case always becomes True. ref #1
01-04-2019 08:03:59
01-04-2019 08:03:59
transformers
161
closed
Predict Mode: Weights of BertForQuestionAnswering not initialized from pretrained model
I am running following in just prediction mode: `(berttorch363) sandeepbhutani304@pytorch-bert-2:~/pytorch-pretrained-BERT/examples$ python run_squad.py --bert_model bert-large-uncased --do_predict --do_lower_case --predict_file $SQUAD_DIR/dev-v1.1_sand.json --train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 384 --doc_stride 128 --output_dir debug_squad9` Every time I am getting following **not initialized message** (**Answers are also wrong**, both with bert-base and bert-large). Is something wrong going on ? (I am running prediction only on CPU) ``` 01/03/2019 13:43:09 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.weight', 'qa_outputs.bias'] 01/03/2019 13:43:09 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] ``` Complete log (if interested) is below: ``` (berttorch363) sandeepbhutani304@pytorch-bert-2:~/pytorch-pretrained-BERT/examples$ python run_squad.py --bert_model bert-base-uncased --do_predict --do_lower_case --predict_file $SQUAD_DIR/dev-v1.1_sand.json --train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 384 --doc_stride 128 --output_dir debug_squad09 Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex. /opt/anaconda3/envs/berttorch363/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py 01/03/2019 13:50:06 - INFO - __main__ - device: cpu n_gpu: 0, distributed training: False, 16-bits training: False 01/03/2019 13:50:07 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /home/sandeepbhutani304/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 01/03/2019 13:50:08 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /home/sandeepbhutani304/.pytorch_pretrained_bert/distributed_-1/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba 01/03/2019 13:50:08 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /home/sandeepbhutani304/.pytorch_pretrained_bert/distributed_-1/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpm3rr0ye3 01/03/2019 13:50:14 - INFO - pytorch_pretrained_bert.modeling - Model config { "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "type_vocab_size": 2, "vocab_size": 30522 } 01/03/2019 13:50:18 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 
'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] 01/03/2019 13:50:20 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /home/sandeepbhutani304/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba 01/03/2019 13:50:20 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /home/sandeepbhutani304/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpt86p1sz4 01/03/2019 13:50:32 - INFO - pytorch_pretrained_bert.modeling - Model config { "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "type_vocab_size": 2, "vocab_size": 30522 } 01/03/2019 13:50:35 - INFO - __main__ - *** Example *** 01/03/2019 13:50:35 - INFO - __main__ - unique_id: 1000000000 01/03/2019 13:50:35 - INFO - __main__ - example_index: 0 01/03/2019 13:50:35 - INFO - __main__ - doc_span_index: 0 ce , a hare saw a tor ##to ##ise walking slowly with a heavy shell on his back . the hare was very proud of himself and he asked the tor ##to ##ise . ' shall we have a race ? ' the tor ##to ##ise agreed . they started the running race . the hare ran very fast . but the tor ##to ##ise walked very slowly . the proud hair rested under a tree and soon slept off . but the tor ##to ##ise walked very fast , slowly and steadily and reached the goal . at last , the tor ##to ##ise won the race . moral : pride goes before a fall . [SEP] 01/03/2019 13:50:35 - INFO - __main__ - token_to_orig_map: --- ommited log --- 01/03/2019 13:50:35 - INFO - __main__ - ***** Running predictions ***** 01/03/2019 13:50:35 - INFO - __main__ - Num orig examples = 1 01/03/2019 13:50:35 - INFO - __main__ - Num split examples = 1 01/03/2019 13:50:35 - INFO - __main__ - Batch size = 8 01/03/2019 13:50:35 - INFO - __main__ - Start evaluating Evaluating: 0%| | 0/1 [00:00<?, ?it/s]01/03/2019 13:50:35 - INFO - __main__ - Processing example: 0 Evaluating: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.26s/it] 01/03/2019 13:50:37 - INFO - __main__ - Writing predictions to: debug_squad09/predictions.json 01/03/2019 13:50:37 - INFO - __main__ - Writing nbest to: debug_squad09/nbest_predictions.json (berttorch363) sandeepbhutani304@pytorch-bert-2:~/pytorch-pretrained-BERT/examples$ ````
01-03-2019 13:56:02
01-03-2019 13:56:02
@thomwolf Can you please reply here. This issue is different from issue#160 #160 is for training mode, this issue is for prediction mode. ( I hope prediction can run on CPU)<|||||>Those messages are correct, the pretrained weights that have been released by Google Brain are just the ones of the core network. They did not release task specific (such as SQuAD) weights. To get a model that solves this task, you would have to train one yourself or get it from someone else. To answer your second question, yes, predictions can run on CPU.<|||||>@rodgzilla is right (even though I think prediction on CPU will be very slow, you should use a GPU)<|||||>Hi @rodgzilla and @thomwolf Thanks for reply. I figured out after reply that in CPU, GPU talks, I forgot to train the model with squad training data, that is why those warning messages were coming. training on 1 GPU took 2.5 hours per epoch (I had to reduce the max_seq_length due to large memory consumption). And after training prediction is running fine on CPU. Would you like to comment on below 2 observations: 1. pytorch version is faster than tensorflow version. Good but why. 2. Answers of tensorflow version and pytorch version are different, even though the training, question and data title is same.<|||||> > Those messages are correct, the pretrained weights that have been released by Google Brain are just the ones of the core network. They did not release task-specific (such as SQuAD) weights. To get a model that solves this task, you would have to train one yourself or get it from someone else. Does it mean that training-phase will not train BERT transformer parameters? If BERT params are tuned during the training phase, it should be stored in the output model. During the prediction time, tuned params should be used instead of loading BERT params from the original file.
transformers
160
closed
Weights of BertForQuestionAnswering not initialized from pretrained model
Trying to run cloned code from git but not able to train. Please suggest `python run_squad.py --bert_model bert-base-uncased --do_train --do_predict --do_lower_case --train_file $SQUAD_DIR/train-v1.1.json --predict_file $SQUAD_DIR/dev-v1.1_sand.json --train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 384 --doc_stride 128 --output_dir debug_squad9` I am constantly getting this error when run in training mode: ``` 01/03/2019 12:22:39 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.weight', 'qa_outputs.bias'] 01/03/2019 12:22:39 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] hon3.6/site-packages/torch/nn/functional.py", line 749, in dropout else _VF.dropout(input, p, training)) RuntimeError: $ Torch: not enough memory: you tried to allocate 0GB. Buy new RAM! at /pytorch/aten/src/TH/THGeneral.cpp:201 ```
01-03-2019 12:24:32
01-03-2019 12:24:32
What kind of GPU are you using?<|||||>I am on CPU as of now ``` (berttorch363) sandeepbhutani304@pytorch-bert-2:~/pytorch-pretrained-BERT/examples$ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 2 On-line CPU(s) list: 0,1 Thread(s) per core: 2 Core(s) per socket: 1 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) CPU @ 2.00GHz Stepping: 3 CPU MHz: 2000.146 BogoMIPS: 4000.29 Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 56320K NUMA node0 CPU(s): 0,1 ```<|||||>I don't think it is possible to use BERT on CPU (didn't work for me). The model is too big. If you find a way, feel free to re-open the issue.<|||||>Thanks for confirmation. Is "Fine Tuning" training also not possible for "BERT BASE" on CPU? Correct me if I am wrong, following command is doing fine tuning training only `python run_squad.py --bert_model bert-base-uncased --do_train --do_predict --do_lower_case --train_file $SQUAD_DIR/train-v1.1.json --predict_file $SQUAD_DIR/dev-v1.1_sand.json --train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 384 --doc_stride 128 --output_dir debug_squad9`<|||||>You are right. And no, unfortunately, fine-tuning is not possible on CPU in my opinion.<|||||>When I use NVIDIA Corporation GM200 [GeForce GTX TITAN X], I also have this problem <|||||>Sorry - why is the model "too big" to be trained on CPU? Shouldn't the memory requirements of the GPU and CPU be basically the same? As far as I can tell, BERT should run on 12GB GPUs, plenty of CPU machines have more RAM than that. Or is there a difference in how the model is materialized in memory between CPU and GPU training?<|||||>My first comment was badly worded. You can `run` the model on CPU but `training` it on CPU is unrealistic.<|||||>Can you elaborate? My machine has 30gb of ram but indeed I've found I am running out of memory. How come a 12gb gpu is enough, what makes the difference? Also I'm talking about fine-tuning a multiple choice model, just for context.<|||||>How big your data @phdowling ? I have TPU for training, if you want I will give access to you
transformers
159
closed
Allow do_eval to be used without do_train and to use the pretrained model in the output folder
If you wanted to use the pre-trained model to redo evaluation without training, it errors because the output directory already exists (the output directory that contains the pre-trained model that one might like to evaluate). Additionally, a couple of fields are not initialised if one does not train and only evaluates.
01-03-2019 11:12:34
01-03-2019 11:12:34
Indeed!
transformers
158
closed
AttributeError: 'BertForPreTraining' object has no attribute 'global_step'
@thomwolf sir, i am also same issue (https://github.com/huggingface/pytorch-pretrained-BERT/issues/50#issuecomment-440624216). it doen't resolve. how i am convert my finetuned pretrained model to pytorch? ``` export BERT_BASE_DIR=/home/dell/backup/NWP/bert-base-uncased/bert_tensorflow_e100 pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \ $BERT_BASE_DIR/model.ckpt-100 \ $BERT_BASE_DIR/bert_config.json \ $BERT_BASE_DIR/pytorch_model.bin ``` ``` Traceback (most recent call last): File "/home/dell/Downloads/Downloads/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/dell/Downloads/Downloads/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/dell/backup/bert_env/lib/python3.6/site-packages/pytorch_pretrained_bert/__main__.py", line 19, in <module> convert_tf_checkpoint_to_pytorch(TF_CHECKPOINT, TF_CONFIG, PYTORCH_DUMP_OUTPUT) File "/home/dell/backup/bert_env/lib/python3.6/site-packages/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 69, in convert_tf_checkpoint_to_pytorch pointer = getattr(pointer, l[0]) File "/home/dell/backup/bert_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 535, in __getattr__ type(self).__name__, name)) AttributeError: 'BertForPreTraining' object has no attribute 'global_step' ``` sir how to resolve this issue? thanks.
01-03-2019 10:10:51
01-03-2019 10:10:51
I have the same issue<|||||>Did you find any solution? <|||||>I have the same issue too
transformers
157
closed
Is it feasible to set num_workers>=1 in DataLoader to quickly load data?
01-02-2019 13:54:25
01-02-2019 13:54:25
Are you asking if it is possible, or do you want this change included in the code? I don't see why this change would cause a problem; if we choose to implement it, we should add a command line argument to specify this value.<|||||>Yes, feel free to submit a PR if you have a working implementation.<|||||>@thomwolf @rodgzilla Is it still the case with the new Trainer?
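For reference, a minimal sketch of what the change would look like on the consumer side (dummy tensors stand in for the features the example scripts build; `num_workers` is a standard `torch.utils.data.DataLoader` argument):

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset

# Dummy features standing in for the converted training examples.
input_ids = torch.randint(0, 30522, (128, 64))
labels = torch.zeros(128, dtype=torch.long)
dataset = TensorDataset(input_ids, labels)

loader = DataLoader(dataset,
                    sampler=RandomSampler(dataset),
                    batch_size=32,
                    num_workers=4)  # >0 prefetches batches in background worker processes
```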
transformers
156
closed
Adding new pretrained model to the help of the `bert_model` argument.
The help for the `bert_model` command line argument has not been updated in the example files.
01-02-2019 13:02:27
01-02-2019 13:02:27
Thanks Gregory!
transformers
155
closed
Why doesn't the MLM use the information of adjacent sentences?
I prepared two sentences for the MLM to predict the masked part: "Tom cant run fast. He [mask] his back a few years ago." The result of the model (uncased base) is 'got'. That is meaningless. Obviously, "hurt" is better. I wonder how to make the MLM use the information of adjacent sentences.
12-30-2018 13:08:53
12-30-2018 13:08:53
The model is already using adjacent sentences to make its predictions; it just happens to be wrong in your case. If you would like to make it choose from a specific list of words, you could use the code that I mentioned in #80.<|||||>Thanks @rodgzilla!<|||||>What's the MLM accuracy of the pretrained model? I find the scores of the top-10 candidates are very close, but most are not suitable. Is this the same prediction as Google's original project?
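A hedged sketch of the candidate-restriction idea referenced above (it assumes every candidate word is a single wordpiece in the uncased vocabulary; the sentence and candidate list are illustrative):

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

tokens = ["[CLS]"] + tokenizer.tokenize(
    "Tom cant run fast . He hurt his back a few years ago .") + ["[SEP]"]
masked_index = tokens.index("hurt")
tokens[masked_index] = "[MASK]"          # mask the verb we want the model to fill in

tokens_tensor = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    scores = model(tokens_tensor)[0, masked_index]   # scores over the whole vocabulary

candidates = ["hurt", "broke", "got"]                # restrict the choice to these words
candidate_ids = tokenizer.convert_tokens_to_ids(candidates)
best = candidates[scores[torch.tensor(candidate_ids)].argmax().item()]
print(best)
```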
transformers
154
closed
run_squad reports "for training, each question should exactly have 1 answer" when I tried to finetune BERT on SQuAD 2.0
But some questions of train-v2.0.json are unanswerable.
12-30-2018 11:33:29
12-30-2018 11:33:29
transformers
153
closed
Do you support SQuAD 2.0?
What is the command to reproduce the SQuAD 2.0 results reported in the BERT paper? Thanks~
12-30-2018 11:25:55
12-30-2018 11:25:55
This is still not supported but should be soon. You can follow/try this PR by @abeljim here: https://github.com/huggingface/pytorch-pretrained-BERT/pull/152<|||||>This is now on master
transformers
152
closed
Squad 2.0
Added SQuAD 2.0 support. It has been tested on BERT-large with a null threshold of zero, with the result of { "exact": 75.26320222353239, "f1": 78.41636742280099, "total": 11873, "HasAns_exact": 74.51079622132254, "HasAns_f1": 80.82616909765808, "HasAns_total": 5928, "NoAns_exact": 76.01345668629101, "NoAns_f1": 76.01345668629101, "NoAns_total": 5945 }. I believe the score will match Google's 83 with a null threshold between -1 and -5. Run with the [--version_2_with_negative] flag for SQuAD 2.0 and [--null_score_diff_threshold $NULL_Threshold] to change the threshold (default value 0.0). Tested SQuAD 1.1 with BERT-base and it does not seem to break it; results: {"exact_match": 79.73509933774834, "f1": 87.67221720784892}
12-29-2018 04:21:56
12-29-2018 04:21:56
The new run_squad.py can train and predict , but can't predict only. ![image](https://user-images.githubusercontent.com/17742385/50643749-e48f5880-0fa9-11e9-9258-9872c6ddaaba.png) <|||||>in the predict only mode, len(nbest) is always 1<|||||>my scripts for predicting is python3 run_squad.py \ --bert_model bert-large-uncased_up \ --do_predict \ --do_lower_case \ --train_file squad/train-v2.0.json \ --predict_file squad/dev-v2.0.json \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir squad2_diff \ --train_batch_size 24 \ --fp16 \ --loss_scale 128 \ --version_2_with_negative \ --null_score_diff_threshold -2.6929588317871094 <|||||>I've found the reason. Previously trained models are needed for prediction, but your code will check whether output_dir already exists and error will be reported if it exists, which is unreasonable.<|||||>the output_dir during evaluation should be consistent with the training process<|||||>the f1 without thersh is 80.8 and the best_f1_thesh is 81.2<|||||>Thanks for verifying the model works. I think predict not working by itself is not because of my code. I can create another pull request and fix it there. I look into it tomorrow morning.<|||||>Hi @abeljim, thanks for this, it looks nice! Would you mind creating a separate example (e.g. called `run_squad_2.py`) instead of modifying `run_squad.py`? It will be less error prone and easier to document/maintain.<|||||>@thomwolf Yeah no problem I will work on that today <|||||>> @zhaoguangxiang i trained as your script, like following: > python run_squad.py > --bert_model bert-large-uncased > --do_train > --do_predict > --do_lower_case > --train_file $SQUAD_DIR/train-v2.0.json > --predict_file $SQUAD_DIR/dev-v2.0.json > --learning_rate 3e-5 > --num_train_epochs 2.0 > --max_seq_length 384 > --doc_stride 128 > --output_dir /tmp/squad3 > --train_batch_size 1 > --loss_scale 128 > --version_2_with_negative > --null_score_diff_threshold -2.6929588317871094 > but i got the result like this: > { > "exact": 10.595468710519667, > "f1": 13.067377840768328, > "total": 11873, > "HasAns_exact": 0.5398110661268556, > "HasAns_f1": 5.490718134858687, > "HasAns_total": 5928, > "NoAns_exact": 20.62237174095879, > "NoAns_f1": 20.62237174095879, > "NoAns_total": 5945 > } > i don't know why i got the wrong results, really need your help. thx. the train_batch_size in your script is too small.<|||||> > my scripts for predicting is > > python3 run_squad.py > --bert_model bert-large-uncased_up > --do_predict > --do_lower_case > --train_file squad/train-v2.0.json > --predict_file squad/dev-v2.0.json > --learning_rate 3e-5 > --num_train_epochs 2 > --max_seq_length 384 > --doc_stride 128 > --output_dir squad2_diff > --train_batch_size 24 > --fp16 > --loss_scale 128 > --version_2_with_negative > --null_score_diff_threshold -2.6929588317871094 Hi, How to find the best `null_score_diff_threshold` ?
transformers
151
closed
Using large model with fp16 enable causes the server down
I am using a server with Ubuntu 16.04 and 4 TITAN X GPUs. The server runs the base model with no problems, but it cannot run the large model with 32-bit floating point, so I enabled fp16 and the server went down. (When I successfully ran the base model, it consumed 8 GB of GPU memory on each of the 4 GPUs.)
12-28-2018 16:32:05
12-28-2018 16:32:05
Could you give more information, such as the command that you are using to run the model and the batch size that you are using? Have you tried reducing it?<|||||>Hi @hguan6, try to adjust the batch size and use gradient accumulation (see [this section](https://github.com/huggingface/pytorch-pretrained-BERT#training-large-models-introduction-tools-and-examples) in the readme and the `run_squad` and `run_classifier` examples) if needed.
transformers
150
closed
BertLayerNorm not loaded in CPU mode
I am running into an exception when loading a model on CPU in one of the example scripts. I suppose this is related to loading the FusedLayerNorm from apex, even when `--no_cuda` has been set. https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/pytorch_pretrained_bert/modeling.py#L154 Or is this working for anybody else? Example: ``` run_classifier.py --data_dir glue/CoLA --task_name CoLA --do_train --do_eval --bert_model bert-base-cased --max_seq_length 32 --train_batch_size 12 --learning_rate 2e-5 --num_train_epochs 2.0 --output_dir /tmp/mrpc_output/ --no_cuda ``` Exception: ``` [...] File "/home/mp/miniconda3/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/normalization/fused_layer_norm.py", line 19, in forward input_, self.normalized_shape, weight_, bias_, self.eps) RuntimeError: input must be a CUDA tensor (layer_norm_affine at apex/normalization/csrc/layer_norm_cuda.cpp:120) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7fe35f6e4cc5 in /home/mp/miniconda3/envs/bert/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: layer_norm_affine(at::Tensor, c10::ArrayRef<long>, at::Tensor, at::Tensor, double) + 0x4bc (0x7fe3591456ac in /home/mp/miniconda3/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) frame #2: <unknown function> + 0x18db4 (0x7fe359152db4 in /home/mp/miniconda3/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) frame #3: <unknown function> + 0x16505 (0x7fe359150505 in /home/mp/miniconda3/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) <omitting python frames> frame #12: THPFunction_do_forward(THPFunction*, _object*) + 0x15c (0x7fe38fb7db7c in /home/mp/miniconda3/envs/bert/lib/python3.6/site-packages/torch/lib/libtorch_python.so) ```
12-28-2018 09:55:05
12-28-2018 09:55:05
Hi @tholor, apex is a GPU specific extension. What kind of use-case do you have in which you have apex installed but no GPU (also fp16 doesn't work on CPU, it's not supported on PyTorch currently)?<|||||>The two cases I came across this: 1) testing if some code works for both GPU and CPU (on a GPU machine with apex installed) 2) training/debugging small sample models on my laptop. It has a small "toy GPU" with only 2 GB RAM and therefore I am usually using the CPUs here. I agree that these are edge cases, but I thought the flag `--no_cuda` is intended for exactly such cases?<|||||>I see. It's a bit tricky because apex is loaded by default when it can be found and this loading is deep inside the library it-self, not the examples ([here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L153)). I don't think it's worth it to add specific logic inside the loading of the library to handle such a case. I guess the easiest solution in your case is to have two python environments (with conda or virtualenv) and switch to the one in which apex is not installed when don't want to use GPU. Feel free to re-open the issue if this doesn't solve your problem.<|||||>Sure, then it's not worth the effort.<|||||>@thomwolf a solution would be to check `torch.cuda.is_available()` and then we can disable apex by using CUDA_VISIBLE_DEVICES=-1<|||||>Is this also related to the fact then the tests fail when apex is installed? ``` def forward(self, input, weight, bias): input_ = input.contiguous() weight_ = weight.contiguous() bias_ = bias.contiguous() output, mean, invvar = fused_layer_norm_cuda.forward_affine( > input_, self.normalized_shape, weight_, bias_, self.eps) E RuntimeError: input must be a CUDA tensor (layer_norm_affine at apex/normalization/csrc/layer_norm_cuda.cpp:120) E frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f754d802021 in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-packages/torch/lib/libc10.so) E frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f754d8018ea in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-packages/torch/lib/libc10.so) E frame #2: layer_norm_affine(at::Tensor, c10::ArrayRef<long>, at::Tensor, at::Tensor, double) + 0x6b9 (0x7f754a8aafe9 in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-pack ages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) E frame #3: <unknown function> + 0x19b9d (0x7f754a8b8b9d in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpy thon-36m-x86_64-linux-gnu.so) E frame #4: <unknown function> + 0x19d1e (0x7f754a8b8d1e in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpy thon-36m-x86_64-linux-gnu.so) E frame #5: <unknown function> + 0x16971 (0x7f754a8b5971 in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpy thon-36m-x86_64-linux-gnu.so) E <omitting python frames> E frame #13: THPFunction_do_forward(THPFunction*, _object*) + 0x15c (0x7f7587d411ec in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-packages/torch/lib/libtorch_python.so) ../../lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/normalization/fused_layer_norm.py:21: RuntimeError 
_______________________________________________________________________________ OpenAIGPTModelTest.test_default ```<|||||>Hello @artemisart, What do you mean by "disable apex by CUDA_VISIBLE_DEVICES=-1" ? I tried to do that but the import still work at [this line](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L153)<|||||>@LamDang You can set the env CUDA_VISIBLE_DEVICES=-1 to disable cuda in pytorch (ex when you launch your script in bash `CUDA_VISIBLE_DEVICES=-1 python script.py`), and then wrap the import apex with a `if torch.cuda.is_available()` in the script.<|||||>Hi all, I came across this issue when my GPU memory was fully loaded and had to make some inference at the same time. For this kind of temporary need, the simplest solution for me is just to `touch apex.py` before the run and remove it afterwards.<|||||>Re-opening this to remember to wrap the apex import with a if `torch.cuda.is_available()` in the next release as advocated by @artemisart <|||||>Hello, I pushed a pull request here to solve this issue upstream https://github.com/NVIDIA/apex/pull/256 Update: it is merged into apex<|||||>> Re-opening this to remember to wrap the apex import with a if `torch.cuda.is_available()` in the next release as advocated by @artemisart Yes please, I also struggle with Apex in CPU mode, i have wrapped Bertmode in my object and when I tried to load the pretrained GPU model with torch.load(model, map_location='cpu') , it shows 'no module named apex' but if I install apex, I get no cuda error(I'm on a CPU machine in inference phase )<|||||>Well it should be solved in apex now. What is the exact error message you have ? By the way, not using apex is also fine, don't worry about it if you don't need t.<|||||>I got `model = torch.load(model_file, map_location='cpu')` ` result = unpickler.load() ModuleNotFoundError: No module named 'apex' ` model_file is a pretrained object with GPU with a bertmodel field , but I want to unpickle it in CPU mode<|||||>Try to use pytorch recommended serialization practice (saving/loading the state dict): https://pytorch.org/docs/stable/notes/serialization.html<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
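To make the workaround discussed in this thread concrete, here is a minimal sketch of the import guard: apex's fused layer norm is only pulled in when a GPU is actually available, and CPU-only runs fall back to a plain PyTorch layer norm. The fallback to `torch.nn.LayerNorm` is an illustrative assumption, not the library's exact code.

```python
import torch
import torch.nn as nn

if torch.cuda.is_available():
    try:
        # apex only provides CUDA kernels, so only try it when a GPU is present
        from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm
    except ImportError:
        BertLayerNorm = nn.LayerNorm
else:
    # CPU-only runs (e.g. --no_cuda) use the pure-PyTorch implementation
    BertLayerNorm = nn.LayerNorm
```

Loading a GPU-trained checkpoint on a CPU-only machine is then just a matter of calling `torch.load(path, map_location='cpu')` on the saved state dict, as suggested at the end of the thread.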
transformers
149
closed
Speedup using NVIDIA Apex
Hi, According to PR https://github.com/huggingface/pytorch-pretrained-BERT/pull/116, we should be able to achieve a 3-4x speedup for both bert-base and bert-large. However, I can only achieve a 2x speedup with bert-base. My docker image uses CUDA 9.0, while the discussion in PR https://github.com/huggingface/pytorch-pretrained-BERT/pull/116 is based on CUDA 10.0. I am wondering whether that makes the difference. Thanks
12-27-2018 23:17:42
12-27-2018 23:17:42
Maybe. You can try with the pytorch docker image `dockerhub 1.0-cuda10.0-cudnn7` to debug, as we did in the discussion in PR #116.<|||||>Just verified that CUDA 10.0 gives the 4x speedup. It would be good to mention this in the main documentation.<|||||>What GPU do you run on... and how do you achieve such a speedup? Is this possible with a GTX 1080? Thanks
transformers
148
closed
Embeddings from BERT for original tokens
I am trying out the `extract_features.py` example program. I noticed that a sentence gets split into word pieces before the embeddings are generated. For example, the sentence “Definitely not” is tokenized into the wordpieces [“Def”, “##in”, “##ite”, “##ly”, “not”], and embeddings are then generated for these tokens. My question is: how do I train an NER system on the CoNLL dataset? I want to extract embeddings for the original tokens so that I can train an NER model with a neural architecture. If you have come across any resource that clearly explains how to do this, please post it here.
12-27-2018 06:48:23
12-27-2018 06:48:23
Hi, you should read the discussion in #64. I left this issue open for reference on these questions. Don't hesitate to participate there.
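One common way to get one vector per original (e.g. CoNLL) token is to remember where each word's first word-piece lands and read the hidden state at that position; averaging the pieces of a word is an equally valid alternative. The sketch below uses the classes shown elsewhere in this repo, skips the [CLS]/[SEP] special tokens for brevity, and should be treated as an illustration rather than a reference NER pipeline.

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
model = BertModel.from_pretrained('bert-base-cased')
model.eval()

words = ["Definitely", "not"]              # original tokens, e.g. from a CoNLL sentence
pieces, first_piece_index = [], []
for word in words:
    word_pieces = tokenizer.tokenize(word)  # word pieces, e.g. ['Def', '##in', '##ite', '##ly'] as in the issue above
    first_piece_index.append(len(pieces))
    pieces.extend(word_pieces)

input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(pieces)])
with torch.no_grad():
    encoded_layers, _ = model(input_ids, output_all_encoded_layers=False)

# one vector per original word: the hidden state of its first word-piece
word_vectors = encoded_layers[0, first_piece_index]   # shape: (len(words), hidden_size)
```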
transformers
147
closed
Does the final hidden state contain the <CLS> token for SQuAD 2.0?
I am currently modifying `run_squad.py` to run on CoQA. In Google's TensorFlow implementation, the probability assigned to the first token of the context segment (the position of `<CLS>`) is used as the probability that the question is unanswerable. So I tried to modify `run_squad.py` in your implementation in the same way. But when I looked at the predictions, I found that many answers point to the first word of the context rather than to the first token, `<CLS>`, so I want to know whether your implementation removes the hidden states of the start and end tokens, or whether there may be another problem. Thank you a lot!
12-26-2018 02:05:34
12-26-2018 02:05:34
I'm sorry: I found a bug in my code. I was calling an invalid attribute of the `InputFeature`, yet it still ran without an error. I have now fixed it and re-run the experiment. If I have more questions I will reopen this. Sorry to bother you!
transformers
146
closed
BertForQuestionAnswering: Predicting span on the question?
Hello, I have a question regarding the `BertForQuestionAnswering` implementation. If I am not mistaken, for this model the sequence should be of the form `Question tokens [SEP] Passage tokens`. Therefore, the embedded representation computed by `BertModel` returns the states of both the question and the passage (a tensor of length `passage + question + 1`). If I am not mistaken, the span logits are then calculated for the whole sequence, i.e. **they can be calculated for the question** even if the answer is always in the passage (see [the model code](https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/pytorch_pretrained_bert/modeling.py#L1097) and the [squad script](https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/examples/run_squad.py#L899)). I wonder if this behavior is really desirable. Doesn't it confuse the model? Thank you for your work!
12-24-2018 12:51:49
12-24-2018 12:51:49
This is the original behavior from the TF implementation. The predictions are filtered afterward (in `write_predictions`) so this is probably not a big issue. Maybe try with another behavior and see if it improves the results?
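If one did want to try such an alternative behavior, a light-weight option is to mask the logits of question and padding positions before taking the argmax, so that predicted spans can only fall inside the passage. The helper below is a hypothetical sketch; the `passage_mask` tensor is not something `run_squad.py` builds under that name.

```python
import torch

def restrict_to_passage(start_logits, end_logits, passage_mask):
    """passage_mask: (batch, seq_len) float tensor, 1.0 for passage tokens, 0.0 elsewhere."""
    # A large negative offset effectively removes question/padding positions
    # from both the softmax and the argmax over span boundaries.
    offset = (1.0 - passage_mask) * -10000.0
    return start_logits + offset, end_logits + offset
```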
transformers
145
closed
Correct the wrong note
Correct the wrong note in #144
12-22-2018 12:31:58
12-22-2018 12:31:58
Thanks!
transformers
144
closed
Some questions in Loss Function for MaskedLM
Using the same sentence as in your **Usage** section: ``` # Tokenized input text = "Who was Jim Henson ? Jim Henson was a puppeteer" tokenized_text = tokenizer.tokenize(text) # Mask a token that we will try to predict back with `BertForMaskedLM` masked_index = 6 tokenized_text[masked_index] = '[MASK]' ``` Q1. When we use this sentence as training data, then according to your code ``` if masked_lm_labels is not None: loss_fct = CrossEntropyLoss(ignore_index=-1) masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1)) return masked_lm_loss ``` it seems the loss is summed over every word in the sentence, not just over the single word "henson". Am I right? In my opinion we only need to compute the loss for the **masked** word, not for the whole sentence. Q2. This is also a question about masking. The paper says it "chooses 15% of tokens at random", and I am not sure how to interpret that: does each word have a 15% probability of being masked, or is exactly 15% of the sentence masked? I hope you can help me clear these points up. By the way, the comment on line 731 of pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py should read: if `masked_lm_labels` is not `None`; the word "not" is missing.
12-22-2018 12:24:24
12-22-2018 12:24:24
@julien-c There seems to be some conflict with the original BERT in TF. The TF code is as follows: ``` def gather_indexes(sequence_tensor, positions): """Gathers the vectors at the specific positions over a minibatch.""" ... input_tensor = gather_indexes(input_tensor, positions) ``` Or do you mean that we could set all words that are not masked (randomly picked from the sentence), as well as the padding (added to reach max_length), to "-1" (in order to ignore them)?<|||||>> Q1: [...] But in my opinion, we only need to calculate the masked word's loss, not the whole sentence? That is exactly what is done in the current implementation. The labels of non-masked tokens are set to -1 and the loss function ignores those tokens thanks to ignore_index=-1 (see the [documentation](https://pytorch.org/docs/stable/nn.html#crossentropyloss)). > Q2. It's also a question about masking: "chooses 15% of tokens at random" in the paper, how should I understand it? For each word, a 15% probability of being masked, or just 15% of the sentence masked? Each token has a probability of 15% of getting masked. You might want to check out [this code](https://github.com/deepset-ai/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L288) to get a better understanding.<|||||>So nice to see your reply; it does fix my problem, thanks.<|||||>@tholor I want to rebuild BERT on a single GPU and still have some problems. May I know your email address?<|||||>malte.pietsch [at] deepset.ai But if you have issues that are of interest to others, please use GitHub.<|||||>Thanks @tholor!
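To make the answers above concrete, here is a rough sketch of how the labels can be built so that only masked positions contribute to the loss: every position starts at -1 (which `CrossEntropyLoss(ignore_index=-1)` skips) and each token is selected independently with ~15% probability. It assumes the tokenizer exposes its `vocab` dict and `convert_tokens_to_ids`, as the `run_lm_finetuning` example linked above does.

```python
import random

def mask_tokens(tokens, tokenizer, mask_prob=0.15):
    """Return (masked_tokens, lm_labels); labels are -1 everywhere except at masked positions."""
    masked_tokens = list(tokens)
    lm_labels = [-1] * len(tokens)              # -1 positions are ignored by the loss
    vocab = list(tokenizer.vocab.keys())
    for i, token in enumerate(tokens):
        if random.random() < mask_prob:         # each token is picked with ~15% probability
            lm_labels[i] = tokenizer.convert_tokens_to_ids([token])[0]
            dice = random.random()
            if dice < 0.8:
                masked_tokens[i] = '[MASK]'               # 80%: replace with [MASK]
            elif dice < 0.9:
                masked_tokens[i] = random.choice(vocab)   # 10%: replace with a random token
            # remaining 10%: keep the original token, but still predict it
    return masked_tokens, lm_labels
```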
transformers
143
closed
bug in init_bert_weights
Hi, there is a bug in init_bert_weights(): the BERTLayerNorm parameters are initialized twice. The first initialization happens in the BERTLayerNorm module's __init__(), and the second one in init_bert_weights(). If you want to obtain a pre-trained model that does not start from the Google checkpoint, the second initialization leads to bad convergence in my experiments. gamma is the scale and beta is the shift; they are usually 1 and 0, but the second initialization changes them. First: self.gamma = nn.Parameter(torch.ones(config.hidden_size)) self.beta = nn.Parameter(torch.zeros(config.hidden_size)) Second: elif isinstance(module, BERTLayerNorm): module.beta.data.normal_(mean=0.0, std=config.initializer_range) module.gamma.data.normal_(mean=0.0, std=config.initializer_range)
12-21-2018 08:29:40
12-21-2018 08:29:40
Fixed, thanks
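For reference, the fix boils down to leaving the LayerNorm parameters at the values set in its constructor instead of re-sampling them. A sketch of what the corrected initializer looks like, written as the `init_bert_weights` method inside `modeling.py`, with `nn` being `torch.nn` and `BERTLayerNorm` the class named in the report:

```python
def init_bert_weights(self, module):
    """Initialize the weights; LayerNorm keeps gamma=1 and beta=0."""
    if isinstance(module, (nn.Linear, nn.Embedding)):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
    elif isinstance(module, BERTLayerNorm):
        module.beta.data.zero_()        # shift stays at 0
        module.gamma.data.fill_(1.0)    # scale stays at 1
    if isinstance(module, nn.Linear) and module.bias is not None:
        module.bias.data.zero_()
```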
transformers
142
closed
change in run_classifier.py
While running the dev set for multi-class classification (more than two classes), it gives an assertion error. Specifying num_labels when the model is created again for evaluation solves this problem. Thus the only changes needed for multi-class classification are in the initial dict of num_labels and in specifying the label classes in the DataProcessor class that is chosen.
12-21-2018 05:58:14
12-21-2018 05:58:14
Thanks! #141 already addressed this problem.
transformers
141
closed
loading saved model when n_classes != 2
Required to fix: Assertion `t >= 0 && t < n_classes` failed, which happens if your number of classes is not the default of 2.
12-20-2018 21:56:20
12-20-2018 21:56:20
This problem is discussed in #135 and I don't think that this is the right way to patch it; the saved model contains the `num_labels` information.<|||||>cf. the discussion in #135, let's go for the `mandatory-argument` solution for now.
transformers
140
closed
Not able to use FP16 in pytorch-pretrained-BERT. Getting error **Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target**
I'm not able to get FP16 working with the PyTorch BERT code, particularly with BertForSequenceClassification: when I enabled fp16 I got the error **Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target**. Also, when using `logits = logits.half() labels = labels.half()`, the epoch time increased: training without fp16 took 2.5 hrs per epoch, but after calling logits.half() and labels.half() the runtime per epoch shot up to 8 hrs.
12-20-2018 18:46:30
12-20-2018 18:46:30
Which kind of GPU are you using? `fp16` only works on recent GPUs (ideally the Tesla/Volta series).<|||||>I experienced a similar issue with CUDA 9.1. Using 9.2 solved this for me. <|||||>Yes, CUDA 10 is recommended for using fp16 with good performance.
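Besides the CUDA version, one thing worth keeping in mind with fp16 is which tensors may be converted at all: token ids and classification targets are indices and must stay `torch.long`, so only the model (and any float inputs) go through `.half()`. A minimal sketch of fp16 inference along those lines, assuming a GPU with half-precision support; the dummy batch is purely illustrative.

```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
model.half().cuda().eval()                                  # fp16 weights on the GPU

input_ids = torch.zeros(8, 128, dtype=torch.long).cuda()    # token ids stay LongTensor
labels = torch.zeros(8, dtype=torch.long).cuda()            # targets stay LongTensor too

with torch.no_grad():
    logits = model(input_ids)                               # fp16 logits
loss = torch.nn.CrossEntropyLoss()(logits.float(), labels)  # compute the loss in fp32
```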
transformers
139
closed
Not able to use FP16 in pytorch-pretrained-BERT
I'm not able to get FP16 working with the PyTorch BERT code, particularly with BertForSequenceClassification: when I enabled fp16 I got the error **Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target**. Also, when using `logits = logits.half() labels = labels.half()`, the epoch time increased. _Originally posted by @Ashish-Gupta03 in https://github.com/huggingface/pytorch-pretrained-BERT/issue_comments#issuecomment-449096213_
12-20-2018 18:46:14
12-20-2018 18:46:14
transformers
138
closed
Problem loading finetuned model for squad
Hi, i'm trying to load a fine tuned model for question answering which i trained with squad.py: ``` import torch from pytorch_pretrained_bert import BertModel, BertForQuestionAnswering from pytorch_pretrained_bert import modeling config = modeling.BertConfig(attention_probs_dropout_prob=0.1, hidden_dropout_prob=0.1, hidden_size=768, initializer_range=0.02, intermediate_size=3072, max_position_embeddings=512, num_attention_heads=12, num_hidden_layers=12, vocab_size_or_config_json_file=30522) model = modeling.BertForQuestionAnswering(config) model_state_dict = "/home/ubuntu/bert_squad/bert_fine_121918/pytorch_model.bin" model.bert.load_state_dict(torch.load(model_state_dict)) ``` but receiving an error on the last line: > Error(s) in loading state_dict for BertModel: > Missing key(s) in state_dict: "embeddings.word_embeddings.weight", "embeddings.position_embeddings.weight", "embeddings.token_type_embeddings.weight", "embeddings.LayerNorm.weight", "embeddings.LayerNorm.bias", "encoder.layer.0.attention.self.query.weight",.... > Unexpected key(s) in state_dict: "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight",.... it looks like model definition is not in expected format. Could you direct me on what went wrong?
12-20-2018 17:27:40
12-20-2018 17:27:40
Judging from the error message, I would say that the error is caused by the following line: https://github.com/huggingface/pytorch-pretrained-BERT/blob/7fb94ab934b2ad1041613fc93c61d13105faf98a/pytorch_pretrained_bert/modeling.py#L541 Apparently, the proper way to save a model is the following one: https://github.com/huggingface/pytorch-pretrained-BERT/blob/7fb94ab934b2ad1041613fc93c61d13105faf98a/examples/run_classifier.py#L554-L557 Is this what you are doing?<|||||>hi @rodgzilla i see that model is being saved the same way in squad.py: https://github.com/huggingface/pytorch-pretrained-BERT/blob/7fb94ab934b2ad1041613fc93c61d13105faf98a/examples/run_squad.py#L918-L921 so the problem must be elsewhere<|||||>I run into the same problem, using the pytorch_model.bin generated by `run_classifier.py`: ```bash !python pytorch-pretrained-BERT/examples/run_classifier.py \ --task_name=MRPC \ --do_train \ --do_eval \ --data_dir=./ \ --bert_model=bert-base-chinese \ --max_seq_length=64 \ --train_batch_size=32 \ --learning_rate=2e-5 \ --num_train_epochs=3.0 \ --output_dir=./models/ ``` And try to load the fine-tuned model: ```py from pytorch_pretrained_bert import modeling from pytorch_pretrained_bert import BertForSequenceClassification # Load pre-trained model (weights) config = modeling.BertConfig( vocab_size_or_config_json_file=21128, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02) model = BertForSequenceClassification(config) model_state_dict = "models/pytorch_model.bin" model.bert.load_state_dict(torch.load(model_state_dict)) ``` ```py RuntimeError Traceback (most recent call last) <ipython-input-22-cdc19dc2541c> in <module>() 20 # issues: https://github.com/huggingface/pytorch-pretrained-BERT/issues/138 21 model_state_dict = "models/pytorch_model.bin" ---> 22 model.bert.load_state_dict(torch.load(model_state_dict)) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict) 767 if len(error_msgs) > 0: 768 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( --> 769 self.__class__.__name__, "\n\t".join(error_msgs))) 770 771 def _named_members(self, get_members_fn, prefix='', recurse=True): RuntimeError: Error(s) in loading state_dict for BertModel: Missing key(s) in state_dict: "embeddings.word_embeddings.weight", "embeddings.position_embeddings.weight", "embeddings.token_type_embeddings.weight", "embeddings.LayerNorm.weight", "embeddings.LayerNorm.bias", "encoder.layer.0.attention.self.query.weight", "encoder.layer.0.attention.self.query.bias", "encoder.layer.0.attention.self.key.weight", "encoder.layer.0.attention.self.key.bias", "encoder.layer.0.attention.self.value.weight", "encoder.layer.0.attention.self.value.bias", "encoder.layer.0.attention.output.dense.weight", "encoder.layer.0.attention.output.dense.bias", "encoder.layer.0.attention.output.LayerNorm.weight", "encoder.layer.0.attention.output.LayerNorm.bias", "encoder.layer.0.intermediate.dense.weight", "encoder.layer.0.intermediate.dense.bias", "encoder.layer.0.output.dense.weight", "encoder.layer.0.output.dense.bias", "encoder.layer.0.output.LayerNorm.weight", "encoder.layer.0.output.LayerNorm.bias", "encoder.layer.1.attention.self.query.weight", "encoder.layer.1.attention.self.query.bias", "encoder.layer.1.attention.self.key.weight", "encoder.layer.1.attention.self.key.bias", 
"encoder.layer.1.attention.self.value.weight", "encoder.layer.1.attention.self.value.bias", "encoder.layer.1.attention.output.dense.weight", "encoder.layer.1.attention.output.dense.bias", "encoder.layer.1.attention.output.LayerNorm.weight", "encoder.layer.1.attention.output.LayerNorm.bias", "encoder.layer.1.intermediate.dense.weight", "encoder.layer.1.intermediate.dense.bias", "enco... Unexpected key(s) in state_dict: "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight", "bert.encoder.layer.0.attention.self.query.bias", "bert.encoder.layer.0.attention.self.key.weight", "bert.encoder.layer.0.attention.self.key.bias", "bert.encoder.layer.0.attention.self.value.weight", "bert.encoder.layer.0.attention.self.value.bias", "bert.encoder.layer.0.attention.output.dense.weight", "bert.encoder.layer.0.attention.output.dense.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.intermediate.dense.weight", "bert.encoder.layer.0.intermediate.dense.bias", "bert.encoder.layer.0.output.dense.weight", "bert.encoder.layer.0.output.dense.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.self.query.weight", "bert.encoder.layer.1.attention.self.query.bias", "bert.encoder.layer.1.attention.self.key.weight", "bert.encoder.layer.1.attention.self.key.bias", "bert.encoder.layer.1.attention.self.value.weight", "bert.encoder.layer.1.attention.self.value.bias", "bert.encoder.layer.1.attention.output.dense.weight", "bert.encoder.layer.1.attention.output.dense.bias", "bert.encoder.layer.1.attention.output.LayerNorm.... ``` How can I load a fine-tuned model?<|||||>Hi, here the problem is not with the saving of the model but the loading. You should just use ``` model.load_state_dict(torch.load(model_state_dict)) ``` and not ``` model.bert.load_state_dict(torch.load(model_state_dict)) ``` Alternatively, here is an example on how to save and then load a model using `from_pretrained`: https://github.com/huggingface/pytorch-pretrained-BERT/blob/2e4db64cab198dc241e18221ef088908f2587c61/examples/run_squad.py#L916-L924
transformers
137
closed
run_squad.py without GPU.. Without CUPY
I am trying to run run_squad.py for the QnA (SQuAD) case. It seems to depend on a GPU, i.e., cupy has to be installed. In one of my environments I don't have a GPU, so cupy does not get installed and I am not able to proceed with training. Can I train on the CPU itself? The following is what I am trying to run: ``` python run_squad.py \ --bert_model bert-base-uncased \ --do_train \ --do_predict \ --do_lower_case \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ ``` and I get the following error: ``` RuntimeError: CUDA environment is not correctly set up (see https://github.com/chainer/chainer#installation).No module named 'cupy' ```
12-20-2018 14:53:46
12-20-2018 14:53:46
@SandeepBhutani What was the conclusion of this issue? <|||||>Is this issue still open.. It can be closed.. It was an environment issue..
transformers
136
closed
Is it possible to avoid downloading the pretrained model?
When I run this code, `model = BertModel.from_pretrained('bert-base-uncased')`, it downloads a big file, and sometimes that is very slow. I have now downloaded the model from [https://github.com/google-research/bert](url). So, is it possible to avoid downloading the pretrained model the first time I use pytorch-pretrained-BERT?
12-20-2018 14:00:03
12-20-2018 14:00:03
I just found the way.<|||||>@rxy1212 could you explain the method you used?<|||||>@makkunda In `modeling.py`, you can find this code: ``` PRETRAINED_MODEL_ARCHIVE_MAP = { 'bert-base-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz", 'bert-large-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased.tar.gz", 'bert-base-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased.tar.gz", 'bert-large-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased.tar.gz", 'bert-base-multilingual-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased.tar.gz", 'bert-base-multilingual-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased.tar.gz", 'bert-base-chinese': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese.tar.gz", } ``` Just download the model you need from its URL and extract the archive; you will get `bert_config.json` and `pytorch_model.bin`. Put them in a folder X. Now you can use `model = BertModel.from_pretrained('THE-PATH-OF-X')`
transformers
135
closed
Problem loading a finetuned model.
Hi! There is a problem with the way model are saved and loaded. The following code should crash and doesn't: ```python import torch from pytorch_pretrained_bert import BertForSequenceClassification model_fn = 'model.bin' bert_model = 'bert-base-multilingual-cased' model = BertForSequenceClassification.from_pretrained(bert_model, num_labels = 16) model_to_save = model.module if hasattr(model, 'module') else model torch.save(model_to_save.state_dict(), model_fn) print(model_to_save.num_labels) model_state_dict = torch.load(model_fn) loaded_model = BertForSequenceClassification.from_pretrained(bert_model, state_dict = model_state_dict) print(loaded_model.num_labels) ``` This code prints: ``` 16 2 ``` The code should raise an exception when trying to load the weights of the task specific linear layer. I'm guessing that the problem comes from `PreTrainedBertModel.from_pretrained`. I would be happy to submit a PR fixing this problem but I'm not used to work with the PyTorch loading mechanisms. @thomwolf could you give me some guidance? Cheers!
12-20-2018 13:52:44
12-20-2018 13:52:44
Ok I managed to find the problem. It comes from: https://github.com/huggingface/pytorch-pretrained-BERT/blob/7fb94ab934b2ad1041613fc93c61d13105faf98a/pytorch_pretrained_bert/modeling.py#L534-L540 When trying to load `classifier.weight` and `classifier.bias`, the following line gets added to `error_msgs`: ``` size mismatch for classifier.weight: copying a param with shape torch.Size([16, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([2]). ``` First, I think that we should add a check of `error_msgs` to `from_pretrained`. I don't really know if there is any other way than printing an error message and existing the program since the default behavior (keeping the classifier layer randomly initialized) can be frustrating for the user (I speak from experience ^^). To fix this, we should probably fetch the number of labels of the saved model and use it to instanciate the model being created before loading the saved weights. Unfortunately I don't really know how to do that, any idea? Another possible "fix" would be to force the user to give a `num_labels` argument when loading a pretrained classification model with the following code in `BertForSequenceClassification`: ```python @classmethod def from_pretrained(cls, *args, **kwargs): if 'num_labels' not in kwargs: raise ValueError('num_labels should be given when loading a pre-trained classification model') return super().from_pretrained(*args, **kwargs) ``` And even with this code, we are not able to check that the `num_labels` value is the same as the saved model. I don't really like the idea of forcing the user to give an information that the checkpoint already contains.<|||||>Just use the num_labels when you load your model ```python model_state_dict = torch.load(model_fn) loaded_model = BertForSequenceClassification.from_pretrained(bert_model, state_dict = model_state_dict, num_labels = 16) print(loaded_model.num_labels)``` <|||||>As mentioned in my previous posts, I think that the library should either fetch the number of labels from the save file or force the user to provide a `num_labels` argument. While what you are proposing fixes my problem I would like to prevent this problem for other users in the future by patching the library code.<|||||>I see thanks @rodgzilla. Indeed not using the `error_msg` is bad practice, let's raise these errors. Regarding fetching the number of labels, I understand your point but it will probably add too much custom logic in the library for the moment so let's go for your simple solution of setting the number of labels as mandatory for now (should have done that since the beginning).<|||||>Hi everyone! I had to come here to know that I had to include `num_labels` when loading the model because the error was misleading. Also, I didn't know how many labels there were so I had to guess. The model I was trying to load: [biobert-base-cased-v1.1-mnli](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1-mnli#)<|||||>I'm also facing a similar problem using the same model as @ugm2 - [biobert-base-cased-v1.1-mnli](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1-mnli#) In my example I know the exact `num_labels` and provide it as an argument while loading the model. How can I solve this? 
``` RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification: size mismatch for classifier.weight: copying a param with shape torch.Size([3, 768]) from checkpoint, the shape in current model is torch.Size([10, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([10]). ```<|||||>With the latest transformers versions, you can use the recently introduced (https://github.com/huggingface/transformers/pull/12664) `ignore_mismatched_sizes=True` parameter for `from_pretrained` method in order to specify that you'd rather drop the layers that have incompatible shapes rather than raise a `RuntimeError`.
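Putting the two resolutions from this thread side by side: with pytorch-pretrained-BERT the classification head has to be re-created with the same number of labels that was used at training time, while recent versions of transformers can instead drop a head whose shape does not match. The second variant is shown commented out; the checkpoint names and label counts are simply the ones mentioned above.

```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

# pytorch-pretrained-BERT: pass the same num_labels that the checkpoint was trained with
model_state_dict = torch.load('model.bin', map_location='cpu')
loaded_model = BertForSequenceClassification.from_pretrained(
    'bert-base-multilingual-cased', state_dict=model_state_dict, num_labels=16)

# Recent transformers: keep a differently-sized head by discarding the mismatched weights
# from transformers import AutoModelForSequenceClassification
# model = AutoModelForSequenceClassification.from_pretrained(
#     'dmis-lab/biobert-base-cased-v1.1-mnli', num_labels=10, ignore_mismatched_sizes=True)
```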
transformers
134
closed
Fixing various class documentations.
Hi! The documentation of `PretrainedBertModel` was missing the new pre-trained model names and the one of `BertForQuestionAnswering` was wrong (due to a copy-pasting mistake I assume). Cheers!
12-20-2018 12:13:41
12-20-2018 12:13:41
Nice, thanks Gregory!
transformers
133
closed
Lower accuracy on OMD (Obama-McCain Debate Twitter sentiment dataset)
I ran the classification task with the pretrained BERT model, but the result is much lower than other methods on the OMD dataset, which has 2 labels. The final accuracy is only 62% on this binary classification task!
12-20-2018 07:27:11
12-20-2018 07:27:11
We need more information on the parameters you use to run this training in order to understand what might be wrong.<|||||>> We need more information on the parameters you use to run this training in order to understand what might be wrong. THANK YOU! Because of limited memory, the batch_size is 8 and the number of epochs is 6; the texts are short, so I set max_length to 50, and the other parameters are the defaults.<|||||>Try various values for the hyper-parameters and at least 10 different seed values. Limited memory should not be a limitation when you use `gradient accumulation` as indicated in the readme [here](https://github.com/huggingface/pytorch-pretrained-BERT#training-large-models-introduction-tools-and-examples) (see also how it is used in all the examples like `run_classifier`, `run_squad`...)
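For reference, gradient accumulation amounts to a few lines in the training loop: gradients from several small batches are summed before a single optimizer step, so the effective batch size grows without extra memory. The fragment below is a sketch only; `model`, `optimizer` and `train_dataloader` are assumed to exist, and a `BertForSequenceClassification` returns the loss directly when labels are passed.

```python
gradient_accumulation_steps = 4        # effective batch = per-step batch size * 4

model.train()
optimizer.zero_grad()
for step, (input_ids, segment_ids, input_mask, label_ids) in enumerate(train_dataloader):
    loss = model(input_ids, segment_ids, input_mask, label_ids)
    (loss / gradient_accumulation_steps).backward()     # gradients accumulate across sub-batches
    if (step + 1) % gradient_accumulation_steps == 0:
        optimizer.step()                                 # one update per accumulated batch
        optimizer.zero_grad()
```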
transformers
132
closed
NONE
12-20-2018 05:42:29
12-20-2018 05:42:29
transformers
131
closed
bert-base-multilingual-cased, do lower case problem
I'm working on fine-tuning the SQuAD task with the multilingual-cased model. Google says "When using a cased model, make sure to pass --do_lower=False to the training scripts. (Or pass do_lower_case=False directly to FullTokenizer if you're using your own script.)" So I added the "do_lower_case" argument to the run_squad script. However, I got some weird token conversion results like this: ['[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '?']. I think there are two problems in the run_squad.py example. 1. Default argument: ``` parser.add_argument("--do_lower_case", default=True, action='store_true', help="Whether to lower case the input text. True for uncased models, False for cased models.") ``` The default value of "--do_lower_case" is True and the action is 'store_true', which means args.do_lower_case ends up True in every case. To be changed: default=True -> default=False. But even with that change, the tokenizer never learns about the flag, because of: 2. Tokenizer initialization: `tokenizer = BertTokenizer.from_pretrained(args.bert_model)` In BertTokenizer's init method, do_lower_case is set to True by default. ``` def __init__(self, do_lower_case=True): """Constructs a BasicTokenizer. Args: do_lower_case: Whether to lower case the input. """ self.do_lower_case = do_lower_case ``` That is why, when the classmethod from_pretrained is called with no additional argument, there is no way to change the do_lower_case value. ``` @classmethod def from_pretrained(cls, pretrained_model_name, cache_dir=None, *inputs, **kwargs): """ ''' ''' skip ''' # Instantiate tokenizer. tokenizer = cls(resolved_vocab_file, *inputs, **kwargs) return tokenizer ``` To be changed: BertTokenizer.from_pretrained(args.bert_model, do_lower_case=False). It is possibly not a big problem, but someone could be bitten by this issue, so many thanks for fixing it. BTW, I still do not understand why I got [UNK] tokens for everything except English, punctuation and numbers. The input text is Korean. When reading the data, the do_lower_case flag only controls whether "token.lower()" and "_run_strip_accents(text)" are called or not. When do_lower_case is False, the tokenizer works fine and I get the result I expect; in that case the tokens do not go through the "token.lower()" and "_run_strip_accents(text)" methods. Even if I set do_lower_case to True, so that "token.lower()" and "_run_strip_accents(text)" are called, there is no difference, because when I debug inside the _run_strip_accents method the input string and the returned string are the same. ``` def _run_strip_accents(self, text): """Strips accents from a piece of text.""" text = unicodedata.normalize("NFD", text) output = [] for char in text: cat = unicodedata.category(char) if cat == "Mn": continue output.append(char) return "".join(output) ``` The input string is just split and checked for accent characters, but Korean doesn't have accent characters, so joining the output list completely restores the input text. Any advice?
12-19-2018 12:40:38
12-19-2018 12:40:38
Hi @itchanghi, thanks for the feedback. Indeed the `run_squad` example was not updated for `cased` models. I fixed that in commits c9fd3505678d581388fb44ba1d79ac41e8fb28a4 and 2e4db64cab198dc241e18221ef088908f2587c61. Please re-open the issue if your problem is not fixed (and maybe summarize it in an updated version).<|||||>It seems that default do_lower_case is still True.
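Since the default has not changed, the safe pattern with any cased checkpoint is to pass the flag explicitly when the tokenizer is built; otherwise the basic tokenizer lower-cases and accent-strips the text, which can push cased, accented or non-Latin input toward `[UNK]` pieces. A minimal sketch:

```python
from pytorch_pretrained_bert import BertTokenizer

# Explicitly disable lower-casing for cased checkpoints such as bert-base-multilingual-cased
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case=False)

tokens = tokenizer.tokenize("La Société Générale a publié ses résultats")  # casing/accents preserved
token_ids = tokenizer.convert_tokens_to_ids(tokens)
```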
transformers
130
closed
Use entry-points instead of scripts
The recommended approach to create launch scripts is to use entry_points and console_scripts. xref: https://packaging.python.org/guides/distributing-packages-using-setuptools/#scripts
12-19-2018 03:49:28
12-19-2018 03:49:28
Looks great indeed, thanks for that!
transformers
129
closed
BERT + CNN classifier doesn't work after migrating from 0.1.2 to 0.4.0
I used BERT in a very simple sentence classification task: in `__init__` I have ```python3 self.bert = BertModel(config) self.cnn_classifier = CNNClassifier(self.config.hidden_size, intent_cls_num) ``` and in forward it's just ```python3 encoded_layers, _ = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False) confidence_score = self.cnn_classifier(encoded_layers) masked_lm_loss = loss_fct(confidence_score, ground_truth_labels) ``` This code works perfectly when I use version 0.1.2, but in 0.4.0 it: - always predicts the most common class when I have a large training set - cannot even learn a dataset with only 4 samples (fed in as one batch), although it can learn a single sample. Why are these problems happening in 0.4.0? The only change in my code is that I changed `weight_decay_rate` to `weight_decay`...
12-19-2018 01:57:22
12-19-2018 01:57:22
I don't know... If you can open-source a self contained example with data and code I can try to give it a deeper look. Are you using `apex`? That's the main change in 0.4.0.<|||||>Hi Thomas! I've found the problem. I think It's because you modified your `from_pretrained` function and I'm still using a part of the `from_pretrained` function from version 0.1.2, which resulted in some compatibility issues. Thanks!
transformers
128
closed
Add license to source distribution
The `LICENSE` file in the git repository contains the Apache license text but it not included the source `.tar.gz` distribution. This PR adds a `MANIFEST.in` file with a directive to include the LICENSE.
12-19-2018 01:43:46
12-19-2018 01:43:46
Thanks!
transformers
127
closed
raises value error for bert tokenizer for long sequences
Addresses #125 (all pre-trained BERT models have a positional embedding matrix with 512 embeddings; sequences longer than 512 tokens will cause indexing errors when you attempt to run a BERT forward pass on them). This adds a max_len arg to the BERT tokenizer: the function convert_tokens_to_indices will raise a ValueError if the input list of tokens is longer than max_len. If no max_len is supplied, then no ValueError is raised, however long the sequence is. Pre-trained BERT models have max_len set to 512 at object construction time (in BertTokenizer.from_pretrained); it can be overridden by explicitly passing max_len to BertTokenizer.from_pretrained as a kwarg. If BERT models with larger positional embedding matrices are released, it is possible to have different max_lens for different pretrained models.
12-18-2018 14:51:40
12-18-2018 14:51:40
Thanks @patrick-s-h-lewis, this is nice. The max number of positional embeddings is also available in the pretrained models configuration files (as `max_position_embeddings`) but accessing this requires some change in the models stored on S3 (not storing them as tar.gz files) so I will take care of it in the next release.
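Independently of where the check lives, callers can guard against the 512-position limit themselves by truncating (or windowing, as `run_squad.py` does with `doc_stride`) before converting to ids. A small illustrative sketch:

```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

very_long_text = "lorem ipsum " * 600          # deliberately longer than 512 word pieces
tokens = tokenizer.tokenize(very_long_text)

max_len = 512                                   # released BERT checkpoints have 512 positions
if len(tokens) > max_len:
    tokens = tokens[:max_len]                   # simple truncation; sliding windows also work
token_ids = tokenizer.convert_tokens_to_ids(tokens)
```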
transformers
126
closed
Benchmarking Prediction Speed
In reference to the following [tweet](https://twitter.com/Thom_Wolf/status/1074983741716602882): would it be possible to do a benchmark on prediction speed? I was working with the TensorFlow version of BERT, but it uses the new Estimators and I'm struggling to find a straightforward way to benchmark it, since everything gets hidden in layers of the computation graph. I'd imagine PyTorch is more forgiving in this regard.
12-18-2018 13:21:51
12-18-2018 13:21:51
Do you have a dataset in mind for the benchmark? We can do a simple benchmark by timing the duration of evaluation on the SQuAD dev set for example.<|||||>Yes, that would be perfect! Ideally, it would exclude loading and setting up the model (something that the tf implementation literally does not allow for :P) <|||||>Hi Jade, I did some benchmarking on a V100 GPU. You can check the script I used on the `benchmark` branch (mostly added timing to `run_squad`). Here are the results: ![prediction_speed_bert_1](https://user-images.githubusercontent.com/7353373/50219266-f4deeb00-038e-11e9-9bcc-5077707b8b61.png) max_seq_length | fp32 | fp16 -- | -- | -- 384 | 140 | 352 256 | 230 | 751 128 | 488 | 1600 64 | 1030 | 3663 I will give a look on an older K80 (without fp16 support) when I have time. <|||||>This is fantastic! Thank you so so so so much! If you get a chance to do the K80, that would be brilliant. I'll try run it when I get time. Currently doing a cost versus speed comparison just to get a feel. <|||||>You can run it like this for `fp32` (just remove `--do_train`): ```bash python run_squad.py \ --bert_model bert-base-uncased \ --do_predict \ --do_lower_case \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --predict_batch_size 128 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ ``` And like this for `fp16` (add `--predict_fp16`): ```bash python run_squad.py \ --bert_model bert-base-uncased \ --do_predict \ --predict_fp16 \ --do_lower_case \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --predict_batch_size 128 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ ``` Adjust `predict_batch_size 128` to fill your GPU around 50% at least and adjust `--max_seq_length 384` to test with various sequence lengths. For small sequences (under 64 tokens) we should desactivate the windowing (related to `doc_stride`). I didn't take time to do that so the dataset reading didn't work (hence the absence of datapoint).<|||||>Fantastic. Tomorrow I'm going to run it for some smaller max sequence lengths (useful for my use case) and on some other GPUS: The Tesla M60 and then the K80 <|||||>Managed to replicate your results on the V100. :) Also, I've done the experiments below for sequences of length 64 on different GPUS. Will do the other sequence lengths when I get a chance. |GPU | max_seq_length | fp32 | fp16 | | -- | -- | -- | -- | | Tesla M60 | 64 | 210 | N/A | | Tesla K80 | 64 | 143 | N/A | <|||||>@thomwolf @jaderabbit Thank you for the experiments. I think these results deserves more visibility, maybe a dedicated markdown page or a section in the `README.md`?<|||||>Your are right Gregory. The readme is starting to be too big in my opinion. I will try to setup a sphinx/ReadTheDocs online doc later this month (feel free to start a PR if you have experience in these kind of stuff).<|||||>I'm more or less new to sphinx but I would be happy to work on it with you.<|||||>Sure, if you want help that could definitely speed up the process. The first step would be to create a new branch to work on with a `doc`folder and then generate the doc in the folder using sphinx. 
Good introductions to sphinx and readthedoc are here: http://www.ericholscher.com/blog/2016/jul/1/sphinx-and-rtd-for-writers/ and here: https://docs.readthedocs.io/en/latest/intro/getting-started-with-sphinx.html We will need to add some dependencies for the but we should strive to keep it as light as possible. Here is an example of repo I've worked on recently (still a draft but the doc is functional) https://github.com/huggingface/adversarialnlp<|||||>Hi @thomwolf , I am looking to deploy a pre-trained squad-bert model to make predictions in real-time. Right now when I run: `python run_squad.py \ --bert_model bert-base-uncased \ --do_predict \ --do_lower_case \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/test.json \ --predict_batch_size 128 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/` it takes 22 seconds to generate the prediction. Is there a way to reduce the amount off time taken to less than a second? The "test.json" has one context and 1 question on the same. It looks like this: `{ "data": [ { "title": "Arjun", "paragraphs": [ { "context": "Arjun died in 1920. The American Football Club (AFC) celebrated this death. Arjun now haunts NFC. He used to love playing football. But nobody liked him.", "qas": [ { "question": "When did Arjun die?", "id": "56be4db0acb8001400a502ed" } ] } ] } ] }` Please help me with this. I switched to using the PyTorch implementation hoping that getting a saved model and making predictions using the saved model will be easier in PyTorch. <|||||>@apurvaasf Might be worth opening another ticket since that's slightly different to this. It shouldn't be too hard to write your own code for deployment. The trick is to make sure it does all the loading once, and just calls predict each time you need a prediction. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi @thomwolf and thanks for the amazing implementation. I wonder what is the inference speed with a 512 batch size. It seems to take a lot of time to convert to GPU (1000msec for a batch size of 32) and I wonder if there is any quick speedup/fix. I am concerned with the latency rather than the throughput.<|||||>> Hi @thomwolf and thanks for the amazing implementation. I wonder what is the inference speed with a 512 batch size. It seems to take a lot of time to convert to GPU (1000msec for a batch size of 32) and I wonder if there is any quick speedup/fix. I am concerned with the latency rather than the throughput. Have you found any solutions? I've met the same problem. The inference time is fast, but takes a lot of time to convert to GPU and convert the result to CPU for post-processing.<|||||>> > Hi @thomwolf and thanks for the amazing implementation. I wonder what is the inference speed with a 512 batch size. It seems to take a lot of time to convert to GPU (1000msec for a batch size of 32) and I wonder if there is any quick speedup/fix. I am concerned with the latency rather than the throughput. > > Have you found any solutions? I've met the same problem. > The inference time is fast, but takes a lot of time to convert to GPU and convert the result to CPU for post-processing. > albanD commented on 25 Mar > Hi, > > We use github issues only for bugs or feature requests. 
> Please use the forum to ask questions: https://discuss.pytorch.org/ as mentionned in the template you used. > > Note that in your case, you are most likely missing torch.cuda.syncrhonize() when timing your GPU code which makes the copy look much slower than it is because it has to wait for the rest of the work to be done. #Pytorch#35292
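As the last quoted comment points out, GPU latency numbers are only meaningful if the asynchronous CUDA work is finished before the clock is read. A small timing sketch along those lines; the dummy batch of token ids and the iteration counts are arbitrary.

```python
import time
import torch
from pytorch_pretrained_bert import BertModel

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = BertModel.from_pretrained('bert-base-uncased').to(device)
model.eval()

batch = torch.zeros(32, 128, dtype=torch.long, device=device)   # dummy batch of token ids

with torch.no_grad():
    for _ in range(5):                                           # warm-up runs
        model(batch, output_all_encoded_layers=False)
    if device.type == 'cuda':
        torch.cuda.synchronize()                                 # flush queued kernels first
    start = time.time()
    for _ in range(20):
        model(batch, output_all_encoded_layers=False)
    if device.type == 'cuda':
        torch.cuda.synchronize()                                 # wait for the GPU before stopping the clock
    print('%.1f ms per batch' % ((time.time() - start) / 20 * 1000))
```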
transformers
125
closed
Warning/Assert when embedding sequences longer than positional embedding size
Hi team, love the work. Just a feature suggestion: when running on GPU (and presumably on CPU too), BERT will break when you try to run it on sentences longer than 512 tokens (on bert-base). This is because the position embedding matrix has only 512 entries (or whatever the size is for the other BERT models). Could the tokenizer have an assert/warning that doesn't allow you to tokenize a sentence longer than the number of positional embeddings, so that you get a better error message than the somewhat scary (uncatchable) CUDA error?
12-18-2018 10:36:23
12-18-2018 10:36:23
Could do that indeed Patrick. In particular when the tokenizer is loaded from one of Google pre-trained model. If you have a working implementation feel free to do a PR. Otherwise I will have a look at that when I start working on the next release.<|||||>Happy to do a PR :) will do today or tomorrow
transformers
124
closed
Add example for fine tuning BERT language model
We are currently working on fine-tuning the language model on a new target corpus. This should improve the model, if the language style in your target corpus differs significantly from the one initially used for training BERT (Wiki + BookCorpus), but is still too small for training BERT from scratch. In our case, we apply this on a rather technical english corpus. The sample script is loading a pre-trained BERT model and fine-tunes it as a language model (masked tokens & nextSentence) on your target corpus. The samples from the target corpus can either be fed to the model directly from memory or read from disk one-by-one. Training the language model from scratch without loading a pre-trained BERT model is also not very difficult to do from here. In contrast, to the original tf repo, you can do the training with multi-GPU instead of TPU. We thought this might be also helpful for others.
12-18-2018 09:48:12
12-18-2018 09:48:12
This looks like a great addition! Is it a full re-implementation of the pre-training script?<|||||>The implementation uses the same sampling parameters and logic, but it's not a one-by-one re-implementation of the original pre-training script. **Main differences:** - In the original repo they first create a training set of TFrecords from a raw corpus ([create_pretraining_data.py](https://github.com/google-research/bert/blob/master/create_pretraining_data.py)) and then perform model training using [run_pretraining.py](https://github.com/google-research/bert/blob/master/run_pretraining.py). We decided against this two step procedure and do the conversion from raw text to sample "on the fly" (more similar to [this repo from codertimo](https://github.com/codertimo/BERT-pytorch)). With this we can actually generate new samples every epoch. - We currently feed in pair of lines (= sentences) as one sample, while the [original repo](https://github.com/google-research/bert/blob/master/create_pretraining_data.py#L229) fills 90% of samples up with more sentences until max_seq_length is reached (for our use case this did not make any sense) **Main similarities:** - All sampling / masking probabilities and parameters - Format of raw corpus (one sentence per line & empty line as doc delimiter) - Sampling strategy: Random nextSentence must be from another document - The data reader [of codertimo](https://github.com/codertimo/BERT-pytorch) is similar to our code, but didn't really match the original method of sampling. Happy to clarify further details!<|||||>Hi @deepset-ai this is great and, just a suggestion, maybe if this makes it to the repo it would be great to include something in the README too about this functionality in this pull request?<|||||>Just added some basic documentation to the README. Happy to include more, if @thomwolf thinks that this makes sense.<|||||>Yes, I was going to ask you to add some information in the readme, it's great. The more is the better. If you can also add instructions on how to download a dataset for the training as in the other examples it would be perfect. If your dataset is private, do you have in mind another dataset that would let the users try your script easily? If not it's ok, don't worry. Another thing is that the `fp16` logic has now been switched to NVIDIA's [apex module](https//github.com/nvidia/apex) and we have gotten rid of the `optimize_on_cpu` option (see the [relevant PR](#116) for more details). You can see the changes in the current examples like `run_squad.py`, it's actually a lot simpler since we don't have to manage parameters copy in the example and it's also faster. Do you think you could adapt the fp16 parts of your script similarly?<|||||>This is something I'd been working on as well, congrats on a nice implementation! One question, though: I noticed you stripped out the code for evaluating on a test set, but when fine-tuning the LM on a smaller corpus, would it be worth keeping that in? Overfitting is much more of a risk in a smaller corpus.<|||||>> This is something I'd been working on as well, congrats on a nice implementation! > > One question, though: I noticed you stripped out the code for evaluating on a test set, but when fine-tuning the LM on a smaller corpus, would it be worth keeping that in? Overfitting is much more of a risk in a smaller corpus. @Rocketknight1, you are right that we will probably need some better evaluation here. 
Currently, I have the feeling though that the evaluation on down-stream tasks is more meaningful (see also Jacob Devlin's comment [here](https://github.com/google-research/bert/issues/95#issuecomment-437599265)). But in addition, some better monitoring of the loss during and after training would be nice. Do you already have something in place and would like to contribute on this? Otherwise, I will try to find some time during the upcoming holidays to add this.<|||||>> > > > This is something I'd been working on as well, congrats on a nice implementation! > > One question, though: I noticed you stripped out the code for evaluating on a test set, but when fine-tuning the LM on a smaller corpus, would it be worth keeping that in? Overfitting is much more of a risk in a smaller corpus. > > @Rocketknight1, you are right that we will probably need some better evaluation here. Currently, I have the feeling though that the evaluation on down-stream tasks is more meaningful (see also Jacob Devlin's comment [here](https://github.com/google-research/bert/issues/95#issuecomment-437599265)). But in addition, some better monitoring of the loss during and after training would be nice. > > Do you already have something in place and would like to contribute on this? Otherwise, I will try to find some time during the upcoming holidays to add this. I don't have any evaluation code either, unfortunately! It might be easier to just evaluate on the final classification task, so it's not really urgent. I'll experiment with LM fine-tuning when I'm back at work in January. If I get good benefits on classification tasks I'll see what effect early stopping based on validation loss has, and if that turns out to be useful too I can submit a PR for it?<|||||>Have you thought about extending the vocabulary after fine-tuning on custom dataset. This could be useful if the custom dataset has specific terms related to that domain. <|||||>> Have you thought about extending the vocabulary after fine-tuning on custom dataset. This could be useful if the custom dataset has specific terms related to that domain. Adjusting the vocabulary before fine-tuning could be interesting, but you would need some smart approach to exchange "less important" tokens from the original byte pair vocab with "important" ones from your custom corpus (while maintaining the pre-trained embeddings for the rest of the vocab meaningful). We don't work on this at the moment. Looking forward to a PR, if you have time to work on this. <|||||>> > Have you thought about extending the vocabulary after fine-tuning on custom dataset. This could be useful if the custom dataset has specific terms related to that domain. > > Adjusting the vocabulary before fine-tuning could be interesting, but you would need some smart approach to exchange "less important" tokens from the original byte pair vocab with "important" ones from your custom corpus (while maintaining the pre-trained embeddings for the rest of the vocab meaningful). > We don't work on this at the moment. Looking forward to a PR, if you have time to work on this. Yes I am working on it. The idea is to add more items to the pretrained vocabulary. Also will adjust the model layers: bert.embeddings.word_embeddings.weight, cls.predictions.decoder.weight with the mean weights and also update cls.predictions.bias with mean bias for additional vocabulary words. Will send out a PR once I test it.<|||||>Ok this looks very good, I am merging, thanks a lot @tholor!
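The vocabulary-extension idea from the last comments can be sketched roughly as below: new rows are appended to the word-embedding matrix and initialized to the mean of the existing embeddings. The attribute path assumes a model with a `.bert` backbone; for a masked-LM head the tied decoder weight and its bias would need the same treatment, and this is an illustration of the idea rather than tested code.

```python
import torch
import torch.nn as nn

def extend_word_embeddings(model, num_new_tokens):
    """Append mean-initialized embedding rows for newly added vocabulary entries."""
    old_emb = model.bert.embeddings.word_embeddings         # nn.Embedding(vocab_size, hidden)
    old_weight = old_emb.weight.data
    mean_row = old_weight.mean(dim=0, keepdim=True)
    new_rows = mean_row.repeat(num_new_tokens, 1)

    new_emb = nn.Embedding(old_weight.size(0) + num_new_tokens, old_weight.size(1))
    new_emb.weight.data = torch.cat([old_weight, new_rows], dim=0)
    model.bert.embeddings.word_embeddings = new_emb
    return model
```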
transformers
123
closed
High memory usage
When I run the examples for MRPC, my program always gets killed because of high memory usage. Has anyone encountered this issue?
12-18-2018 03:13:11
12-18-2018 03:13:11
You should lower the batch size probably
transformers
122
closed
_load_from_state_dict() takes 7 positional arguments but 8 were given
12-17-2018 05:38:40
12-17-2018 05:38:40
Full log of the error?<|||||>This is caused by the PyTorch version. I found that in version 0.4.0, _load_from_state_dict() only takes 7 arguments, but in 0.4.1 (and in this code) we need to feed it 8 arguments. ``` module._load_from_state_dict( state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs) ``` The local_metadata argument should be removed for PyTorch 0.4.0. <|||||>Ok thanks @SummmerSnow !
transformers
121
closed
High accuracy for CoLA task
I try to reproduce the CoLA results from the BERT paper (BERTBase, Single GPU). Running the following command ``` python run_classifier.py \ --task_name cola \ --do_train \ --do_eval \ --do_lower_case \ --data_dir $GLUE_DIR/CoLA/ \ --bert_model bert-base-uncased \ --max_seq_length 128 \ --train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir $OUT_DIR/cola_output/ ``` I get eval results of ``` 12/16/2018 12:31:34 - INFO - __main__ - ***** Eval results ***** 12/16/2018 12:31:34 - INFO - __main__ - eval_accuracy = 0.8302972195589645 12/16/2018 12:31:34 - INFO - __main__ - eval_loss = 0.5117322660925734 12/16/2018 12:31:34 - INFO - __main__ - global_step = 804 12/16/2018 12:31:34 - INFO - __main__ - loss = 0.17348005173644468 ``` An accuracy of 0.83 would be fantastic, but compared to the 0.521 stated in the paper this doesn't seem very realistic. Any suggestions what I'm doing wrong?
12-16-2018 11:39:56
12-16-2018 11:39:56
The metric used for evaluation of CoLA in the GLUE benchmark is not accuracy but the https://en.wikipedia.org/wiki/Matthews_correlation_coefficient (see https://gluebenchmark.com/tasks). Indeed authors report in https://arxiv.org/abs/1810.04805 0.521 for Matthews correlation with BERT-base.<|||||>Makes sense, looks like I missed that point. Thank you.
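For anyone adding the proper metric to their own evaluation loop, Matthews correlation is one call in scikit-learn; the tiny arrays below are placeholders for the real dev-set logits and labels.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

logits = np.array([[0.2, 0.8], [0.9, 0.1], [0.4, 0.6], [0.7, 0.3]])   # classifier outputs
label_ids = np.array([1, 0, 0, 1])                                     # gold CoLA labels

preds = np.argmax(logits, axis=1)
print("Matthews correlation:", matthews_corrcoef(label_ids, preds))
```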
transformers
120
closed
RuntimeError: Expected object of type torch.LongTensor but found type torch.cuda.LongTensor for argument #3 'index'
I am using part of your evaluation code, with slight modifications: https://github.com/danyaljj/pytorch-pretrained-BERT/blob/92e22d710287db1b4aa4fda951714887878fa728/examples/daniel_run.py#L582-L616 Wondering if you have encountered the following error: ``` (env3.6) khashab2@gissing:/shared/shelley/khashab2/pytorch-pretrained-BERT$ python3.6 examples/daniel_run.py Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex. loaded the model to base . . . loading the bert . . . 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1248501532/1248501532 [00:26<00:00, 46643749.96B/s] Evaluating: 0%| | 0/1355 [00:00<?, ?it/s] Traceback (most recent call last): File "examples/daniel_run.py", line 817, in <module> evaluate_model() File "examples/daniel_run.py", line 606, in evaluate_model batch_start_logits, batch_end_logits = model(input_ids, segment_ids, input_mask) File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/shared/shelley/khashab2/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py", line 1096, in forward sequence_output, _ = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False) File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/shared/shelley/khashab2/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py", line 626, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/shared/shelley/khashab2/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py", line 193, in forward words_embeddings = self.word_embeddings(input_ids) File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 110, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/functional.py", line 1110, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected object of type torch.LongTensor but found type torch.cuda.LongTensor for argument #3 'index' ```
12-15-2018 18:43:53
12-15-2018 18:43:53
The issue was not properly loading the model file and moving it to the GPU.
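A minimal sketch of the pattern that avoids this error: keep the model weights and all input tensors on the same device before the forward pass. The question/model here is illustrative, not the original script:
```python
# Sketch: model and inputs must live on the same device (LongTensor vs cuda.LongTensor).
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForQuestionAnswering

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
model.to(device)   # move the loaded weights to the GPU
model.eval()

tokens = ["[CLS]"] + tokenizer.tokenize("who wrote hamlet ?") + ["[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)]).to(device)  # cuda.LongTensor

with torch.no_grad():
    start_logits, end_logits = model(input_ids)   # inputs and weights now match
```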
transformers
119
closed
Minor README fix
I think the `optimize_on_cpu` option was dropped in #112
12-14-2018 19:11:14
12-14-2018 19:11:14
Indeed!
transformers
118
closed
Segmentation fault (core dumped)
Hi, I downloaded the pretrained model and vocabulary file and wanted to test BertModel to get hidden states. When the line ```encoded_layers, _ = model(tokens_tensor, segments_tensors)``` runs, I get this error: Segmentation fault (core dumped). I wonder what caused this error.
12-14-2018 03:24:58
12-14-2018 03:24:58
Hi, you need to give me more information (a screen copy of a full log of the error).<|||||>Actually, this is all I got: >> python bert.py 12/15/2018 19:43:06 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file /home/snow/bert_models_path/vocab.txt 12/15/2018 19:43:06 - INFO - pytorch_pretrained_bert.modeling - loading archive file /home/snow/bert_models_path 12/15/2018 19:43:06 - INFO - pytorch_pretrained_bert.modeling - Model config { "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "type_vocab_size": 2, "vocab_size": 30522 } bert_models_path load!!! Segmentation fault (core dumped)<|||||>There is no special C function in our package, it's all Python code. Maybe you just don't have enough memory to load BERT? Or some dependency is not well installed, like PyTorch (or apex if you are using it).<|||||>Thanks for your advice. Maybe it was because of my PyTorch version (0.4.0), I am not sure. I downloaded the source code instead of using pip install, used version 0.4.1, and it ran successfully. Thanks for your code and advice again~<|||||>![image](https://user-images.githubusercontent.com/26063832/57267682-8d160c00-70b3-11e9-9d59-5866c1321272.png) I also have this problem, and my torch version is 1.0.1. I have tried to download the source code instead of pip install but also failed.<|||||>I have the same question @wyx518 <|||||>Did you solve this ? @zhaohongjie <|||||>@zhaohongjie did you solve this? I have the same question too.<|||||>Has someone solved this issue by any chance? <|||||>> Has someone solved this issue by any chance? Do you have the same problem? You could try to debug it by importing transformers and torch only, then calling torch.nn.CrossEntropyLoss() to see if it results in a segmentation fault. I accidentally fixed this error by installing more packages.<|||||>Hello, I had this error with CamembertForSequenceClassification.from_pretrained() and needed to update torch==1.5.1 and torchvision==0.6.1 <|||||>I had the same issue while loading pretrained models. Updating to the latest version of PyTorch (1.5.1) worked fine.<|||||>Yup, that worked guys! Thank you @Daugit and @gabrer <|||||>Updating via pip install torch==1.5.1 solved the problem.
transformers
117
closed
logging.basicConfig overrides user logging
I think logging.basicConfig should not be called inside library code; check out this SO thread: https://stackoverflow.com/questions/27016870/how-should-logging-be-used-in-a-python-package
12-13-2018 17:58:02
12-13-2018 17:58:02
You're right. It's removed.
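A minimal sketch of the standard library-friendly logging pattern being referred to (generic Python practice, not a verbatim copy of the change made in the package): the library only creates a named logger and never calls logging.basicConfig; the application configures handlers itself.
```python
import logging

# inside the library module: get a named logger, do not configure global logging
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())   # avoids "No handler found" warnings

def do_work():
    logger.info("loading vocabulary file ...")

# inside the user's application: only the application calls basicConfig
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s - %(name)s - %(message)s")
    do_work()
```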
transformers
116
closed
Change to use apex for better fp16 and multi-gpu support
Hi there, This PR includes changes to improve FP16 and multi-GPU performance. We get over a 3.5x performance increase on Tesla V100 across all examples. NVIDIA Apex (https://github.com/NVIDIA/apex) is added as a new dependency. It fixes issues with the existing fp16 implementation (for example, not converting loss/grad to float before scaling) as well as providing a more efficient implementation. Below are the test results we ran on the MRPC and SQuAD examples. All test baselines (`before` numbers) are fp32, since we found it actually is the best performing config, the reason being that the optimizer is forced onto the CPU under fp16. The `after` numbers are from running with `--fp16` after this PR. All tests were done on a single Tesla V100 16GB. MRPC on BERT-base: ``` before: 109 seconds, 9GB memory needed after: 27 seconds, 5.5GB speedup: 4x ``` SQuAD on BERT-base: ``` before: 90 minutes, 12.5GB after: 24 minutes, 7.5GB speedup: 3.75x ``` SQuAD on BERT-large: ``` before: 250 minutes, 15GB, with --train_batch_size 24 --gradient_accumulation_steps 6 after: 68 minutes, 14.5GB, with --train_batch_size 24 --gradient_accumulation_steps 3 speedup: 3.68x ``` The `optimize_on_cpu` option is also removed entirely from the code since I can't find any situation where it is faster than `gradient_accumulation_steps`, assuming of course that at least batch size 1 can fit into GPU memory.
12-12-2018 01:37:25
12-12-2018 01:37:25
That's really awesome! I love the work you guys did on apex and I would be super happy to have an 'official' implementation of BERT using apex (plus it showcases all the major modules: FusedAdam, FusedLayerNorm, 16bits, distributed optimizer...). And the speed improvement is impressive, fine-tuning BERT-large on SQuAD in 1h is amazing! Just three general questions: 1. could you reproduce the numerical results of the examples (SQuAD and MRPC) with this implementation? 2. did you test distributed training? 3. the main issue I see right now is the fact that apex is not on pypi and users have to manually install it. Now that pytorch-pretrained-bert is used as a dependency in downstream librairies like [AllenNLP](https://github.com/allenai/allennlp/blob/master/requirements.txt#L71) it's important to keep a smooth install process. Can you guys put apex on pypi? If not we should add some logic to handle the case when apex is not installed. It's ok for the examples (`run_classifier` and `run_squad`) which are not part of the package per se but the modifications in `modeling.py` needs to be taken care of.<|||||>Hi @thomwolf , 1. I have been able to reproduce numerical results of the examples. It shows some variance with different random seeds, especially with MRPC. But that should be somewhat expected and overall the results seems the same as baseline. For example, I got `{"exact_match": 84.0491958372753, "f1": 90.94106705651285}` running SQuAD BERT-Large with default dynamic loss scaling and seed. I did not store other results since they should be very easy to re-run. 2. I sanity checked distributed training results while developing. I'll run more results and post it here. 3. Adding fallback to modeling.py should be easy since we can use BertLayerNorm in there. We just need to make sure it share the same interface. For example parameter names, in case user wants to build groups base on names. As for pypi, @mcarilli what's you thought? -Deyu<|||||>update: 1. I have tested SQuAD BERT-Large with 4 V100 on a DGX station. Here is the result: ``` training time: 20:56 speedup over 1 V100: 3.2x evaluation result: {"exact_match": 83.6329233680227, "f1": 90.68315529756794} ``` command used: `python3 -m torch.distributed.launch --nproc_per_node=4 ./run_squad.py --bert_model bert-large-uncased --do_train --do_predict --do_lower_case --train_file $SQUAD_DIR/train-v1.1.json --predict_file $SQUAD_DIR/dev-v1.1.json --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/ --train_batch_size 6 --fp16` 2. I modified `model.py` so it now will fallback to BertLayerNorm when apex is not installed. Parameters `gamma, beta` are changed to `weight, bias`. -Deyu<|||||>Ok thanks for the update! It looks good to me, I will do a few tests on various hardwares and it'll be included in the new 0.4.0 release coming out today (hopefully) Congrats on the MLPerf results by the way!<|||||>@FDecaYed I am trying to reproduce your numbers but I can't get very close. I am using an [Azure NDv2 server](https://azure.microsoft.com/en-us/blog/unlocking-innovation-with-the-new-n-series-azure-virtual-machines/) with 8 NVIDIA Tesla V100 NVLINK interconnected GPUs and 40 Intel Skylake cores. Switching to fp16 lowers the memory usage by half indeed but the training time stays about the same ie around (e.g. 100 seconds for `run_classifier` on 1 GPU and about 50 minutes for the 2 epochs of your distributed training command on `run_squad`, with 4 GPUs in that case). 
I have the new release of PyTorch 1.0.0, CUDA 10 and installed apex with cpp/cuda extensions. I am using the fourth-release branch on the present repo which was rebased from master with your PR. If you have any insight I would be interested. Could the difference come from using a DGX versus an Azure server? Can you give me the exact command you used to train the `run_classifier` example for instance? <|||||>there could be a lot of things, let's sort them out one by one: The command I used for MRPC example is `CUDA_VISIBLE_DEVICES=0 python3 ./run_classifier.py --task_name MRPC --do_train --do_eval --do_lower_case --data_dir $GLUE_DIR/MRPC/ --bert_model bert-base-uncased --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/mrpc_output/ --fp16` CUDA_VISIBLE_DEVICES is to make sure only one GPU is used. I noticed the code is using Dataparallel when there is only one process but more than 1 GPU in the box. `torch.nn.DataParallel` may not provide good speed on some cases. Are you running just one GPU on you 100 sec run? I reported time print by tqdm trange, is that the same number you are talking about here? From my past experience with cloud, single GPU number should not be that far from any DGX, unless you are bound by input. I doubt that's the case base on the workload. If we indeed are running and reporting the same thing, there must be some software differences. We are still in the progress moving up to pytorch 1.0, so my test was on 0.4. I'll merge your release branch and try on pytorch 1.0 on my side on DGX today. Meanwhile, this is the container I used for testing. You could try it on Azure and see if you can get my result. Note that it does not have latest apex installed, so you need uninstall apex and build latest inside. https://ngc.nvidia.com/catalog/containers/nvidia%2Fpytorch -Deyu<|||||>Thanks for the quick reply! The timing I was reporting was the full timing for the training (3 iterations for the MRPC example). Using your MRPC example command I get this example from training on a single V100: about 1 min 24 second of training, ie. around 84 seconds (~27 seconds per iteration). Using static loss scale gives the same results. ![image](https://user-images.githubusercontent.com/7353373/49967261-9e068b00-ff22-11e8-8ad3-b60bafbff0f2.png) And training without 16bits gives a total training time roughly similar: 1 min 31 seconds ![image](https://user-images.githubusercontent.com/7353373/49967527-69470380-ff23-11e8-87eb-ecb344b5caea.png) <|||||>I tested on pytorch 1.0 and still getting the same speed up ![screenshot from 2018-12-13 14-52-11](https://user-images.githubusercontent.com/17164548/49972649-28c99480-fee7-11e8-9dd7-1dabd8cdad65.png) I used the foruth-release branch and public dockerhub 1.0-cuda10.0-cudnn7-devel image here: https://hub.docker.com/r/pytorch/pytorch/tags/ Only modification I need was adding `encoding='utf-8'` reading csv. Could you run the same docker image and see if the speed is still the same? If so, could you do a quick profile with `nvprof -o bert-profile.nvvp` with just training 1 epoch and share the output? I don't have access to Azure now. <|||||>Ok, I got the 3-4x speed-up using the pytorch dockerhub 1.0-cuda10.0-cudnn7-devel image 🔥 Thanks a lot for your help! I'm still wondering why I can't get these speedups outside of the docker container so I will try to investigate that a bit further (in particular since other people may start opening issues here :-). 
If you have any further insight, don't hesitate to share :-)<|||||>Ok nailed it I think it was a question of not installing `cuda100` together with pytorch. Everything seems to work fine now!<|||||>Great! It'll be great if we can later update readme to document V100 expected speed as well.<|||||>Thanks for the nice work! @FDecaYed @thomwolf I tried fp16 training for bert-large. It has the imbalanced memory problem, which wastes gpu power a lot. The nvidia-smi results are shown as follows: ```bash +-----------------------------------------------------------------------------+ | NVIDIA-SMI 410.79 Driver Version: 410.79 CUDA Version: 10.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla V100-PCIE... Off | 0000A761:00:00.0 Off | 0 | | N/A 39C P0 124W / 250W | 15128MiB / 16130MiB | 99% Default | +-------------------------------+----------------------+----------------------+ | 1 Tesla V100-PCIE... Off | 0000C0BA:00:00.0 Off | 0 | | N/A 41C P0 116W / 250W | 10012MiB / 16130MiB | 95% Default | +-------------------------------+----------------------+----------------------+ | 2 Tesla V100-PCIE... Off | 0000D481:00:00.0 Off | 0 | | N/A 38C P0 80W / 250W | 10012MiB / 16130MiB | 91% Default | +-------------------------------+----------------------+----------------------+ | 3 Tesla V100-PCIE... Off | 0000EC9F:00:00.0 Off | 0 | | N/A 40C P0 61W / 250W | 10012MiB / 16130MiB | 95% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 11870 C python 15117MiB | | 1 11870 C python 10001MiB | | 2 11870 C python 10001MiB | | 3 11870 C python 10001MiB | +-----------------------------------------------------------------------------+ ```<|||||>Is it already in add in pytorch-transformers? If so how do I use it, where should i specify the settings that I want to use Fp16 and apex and is apex already added in installation of pytorch transformers on anaconda 3?
transformers
115
closed
How to run a saved model?
How can you run the model without training it again, if we have already trained a model with run_classifier?
12-11-2018 20:58:38
12-11-2018 20:58:38
It looks like @thomwolf is planning to illustrate this in the examples soon. You can find some useful code to do what you want in https://github.com/huggingface/pytorch-pretrained-BERT/pull/112/<|||||>Hi, this is now included in the new release 0.4.0 and there are examples of how you can save and reload the models in the updated run_classifier, run_squad and run_swag.
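A minimal sketch of the save/reload pattern the 0.4.0 examples use (saving the state dict and passing it back through the `state_dict` argument of `from_pretrained`); the file path and `num_labels` value here are illustrative:
```python
# Sketch: save a fine-tuned classifier, then reload it later for prediction only.
# Assumes pytorch_pretrained_bert >= 0.4.0, where from_pretrained accepts state_dict.
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

output_model_file = "finetuned_model.bin"             # illustrative path

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
# ... fine-tune `model` here (e.g. with run_classifier-style training) ...
torch.save(model.state_dict(), output_model_file)     # save only the weights

# later, for evaluation (no training pass needed):
state_dict = torch.load(output_model_file, map_location="cpu")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", state_dict=state_dict, num_labels=2)
model.eval()
```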
transformers
114
closed
What is the best dataset structure for BERT?
First I want to say thanks for setting all this up! I am using BertForSequenceClassification and am wondering what the optimal way is to structure my sequences. Right now my sequences are blog posts, which can be upwards of 400 words long. Would it be better to split my blog posts into sentences and use the sentences as my sequences instead? Thanks!
12-11-2018 16:28:00
12-11-2018 16:28:00
transformers
113
closed
fix compatibility with python 3.5.2
When I run the following command on python 3.5.2 ``` python3 extract_features.py --input_file input.txt --output_file output.txt --bert_model bert-base-uncased --do_lower_case ``` Get this error: ``` Traceback (most recent call last): File "extract_features.py", line 298, in <module> main() File "extract_features.py", line 231, in main tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case) File "/home/huangfei/.local/lib/python3.5/site-packages/pytorch_pretrained_bert/tokenization.py", line 117, in from_pretrained resolved_vocab_file = cached_path(vocab_file, cache_dir=cache_dir) File "/home/huangfei/.local/lib/python3.5/site-packages/pytorch_pretrained_bert/file_utils.py", line 88, in cached_path return get_from_cache(url_or_filename, cache_dir) File "/home/huangfei/.local/lib/python3.5/site-packages/pytorch_pretrained_bert/file_utils.py", line 169, in get_from_cache os.makedirs(cache_dir, exist_ok=True) File "/usr/lib/python3.5/os.py", line 226, in makedirs head, tail = path.split(name) File "/usr/lib/python3.5/posixpath.py", line 103, in split i = p.rfind(sep) + 1 AttributeError: 'PosixPath' object has no attribute 'rfind' ``` I find makedirs didn't support PosixPath in python3.5, so I make a change to fix this.
12-11-2018 12:29:26
12-11-2018 12:29:26
Thanks, it could be nice to keep Python 3.5 compatibility indeed (see #110), but I think this will break (at least) the other examples (`run_squad` and `run_classifier`), which use the Pathlib syntax `PATH / 'string'`.<|||||>I'm sorry for my previous clumsy workaround; I have now modified some functions in ``file_utils.py`` to just convert the type of local variables. I think it won't affect the function behaviour. However, I have only tested ``extract_features.py`` on Python 3.5, so I'm not sure it perfectly solves the problem, but it should be harmless.<|||||>I think this will break [this line](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L484) at least. If we want to merge this PR you should check that the three examples and the tests are running (at least).<|||||>Did you see my second commit (485adde742)? I think I have fixed the problem you mentioned. I have now tested all unit tests and the 3 examples, and they all work correctly on Python 3.5.2.<|||||>Indeed, I missed this commit. Ok, this solution makes sense, let's go for it! Thanks!
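A minimal sketch of the kind of compatibility fix being discussed: on Python 3.5, several `os` functions do not accept `pathlib.Path` objects, so the path is converted to `str` at the boundary. The cache directory here is illustrative:
```python
# Sketch: convert pathlib.Path to str before calling os functions (Python 3.5 compat).
import os
from pathlib import Path

cache_dir = Path.home() / ".pytorch_pretrained_bert"   # illustrative cache location

# Works on 3.6+ but fails on 3.5.2 with "'PosixPath' object has no attribute 'rfind'":
# os.makedirs(cache_dir, exist_ok=True)

# Portable version:
os.makedirs(str(cache_dir), exist_ok=True)
```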
transformers
112
closed
Fourth release
New: - 3-4 times speed-up in fp16 thanks to NVIDIA's work on apex - SWAG (multiple-choice) model added + example fine-tuning on SWAG - bump up to PyTorch 1.0 - backward compatibility to python 3.5 - load fine-tuned model with `from_pretrained` - add examples on how to save and load fine-tuned models
12-11-2018 11:00:12
12-11-2018 11:00:12
transformers
111
closed
update: add from_state_dict for PreTrainedBertModel
For restoring the training procedure: users can now use torch.save to store their model and restore it with e.g. `model = BertForSequenceClassification.from_state_dict('bert-large-uncased', state_dict=torch.load('xx.pth'))`
12-11-2018 10:33:42
12-11-2018 10:33:42
Hi, I like the idea but I'm not a big fan of all the code duplication; I'd rather fuse the two loading functions into one. Basically we can just add a `state_dict` argument to `from_pretrained` and add a check in `from_pretrained` to handle the case.<|||||>> Hi, I like the idea but I'm not a big fan of all the code duplication; I'd rather fuse the two loading functions into one. > Basically we can just add a `state_dict` argument to `from_pretrained` and add a check in `from_pretrained` to handle the case. Hi, thanks for your reply. I just tried to make sure that my code would not introduce bugs, so I added a new function instead of changing the code in `from_pretrained`. :)
transformers
110
closed
Pretrained Tokenizer Loading Fails: 'PosixPath' object has no attribute 'rfind'
I was trying to work through the toy tokenization example from the main README, and I hit an error on the step of loading in a pre-trained BERT tokenizer. ``` ~/bert_transfer$ python3 test_tokenizer.py Traceback (most recent call last): File "test_tokenizer.py", line 10, in <module> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') File "/usr/local/lib/python3.5/dist-packages/pytorch_pretrained_bert/tokenization.py", line 117, in from_pretrained resolved_vocab_file = cached_path(vocab_file, cache_dir=cache_dir) File "/usr/local/lib/python3.5/dist-packages/pytorch_pretrained_bert/file_utils.py", line 88, in cached_path return get_from_cache(url_or_filename, cache_dir) File "/usr/local/lib/python3.5/dist-packages/pytorch_pretrained_bert/file_utils.py", line 169, in get_from_cache os.makedirs(cache_dir, exist_ok=True) File "/usr/lib/python3.5/os.py", line 226, in makedirs head, tail = path.split(name) File "/usr/lib/python3.5/posixpath.py", line 103, in split i = p.rfind(sep) + 1 AttributeError: 'PosixPath' object has no attribute 'rfind' ~/bert_transfer$ python3 --version Python 3.5.2 ``` Exact usage in script: ``` from pytorch_pretrained_bert import BertTokenizer test_sentence = "When PyTorch first launched in early 2017, it quickly became a popular choice among AI researchers, who found it ideal for rapid experimentation due to its flexible, dynamic programming environment and user-friendly interface" if __name__ == "__main__": tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') ``` I am curious if you're able to replicate this error on python 3.5.2, since the repo states support for 3.5+.
12-11-2018 00:48:11
12-11-2018 00:48:11
Oh you are right, the file caching utilities require Python 3.6. I don't intend to maintain a lot of backward compatibility in terms of Python versions (I already gave up on maintaining a Python 2 version), so I will bump up the requirements to Python 3.6. If you are limited to Python 3.5 and find a way around this, don't hesitate to share your solution with a PR though.<|||||>Ok, @hzhwcmhf fixed this issue with #113 and we will be compatible with Python 3.5+ again in the coming release (today probably). Thanks @hzhwcmhf!
transformers
109
closed
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
When I convert a TensorFlow checkpoint in a `pytorch_model.bin` and run `run_classifier.py` with `--bert_model /path/to/pytorch_model.bin` option, following error occurs in `tokenization.py`. ```shell 12/10/2018 18:11:59 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file /Users/MAC/bert/model/uncased_L-12_H-768_A-12/pytorch_model.bin Traceback (most recent call last): File "examples/run_classifier.py", line 637, in <module> main() File "examples/run_classifier.py", line 480, in main tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case) File "/Users/MAC/.pyenv/versions/anaconda3-5.3.0/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 133, in from_pretrained tokenizer = cls(resolved_vocab_file, *inputs, **kwargs) File "/Users/MAC/.pyenv/versions/anaconda3-5.3.0/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 76, in __init__ self.vocab = load_vocab(vocab_file) File "/Users/MAC/.pyenv/versions/anaconda3-5.3.0/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 51, in load_vocab token = reader.readline() File "/Users/MAC/.pyenv/versions/anaconda3-5.3.0/lib/python3.6/codecs.py", line 321, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte ``` I noticed that `--bert_model` option is not path to `pytorch_model.bin` file but path to directory containing `pytorch_model.bin` and `vocab.txt`. I close it.
12-11-2018 00:04:09
12-11-2018 00:04:09
I had the same question/confusion! Thanks for clarifying it should be the path to the directory and not the filename itself. <|||||>Great help, thanks.<|||||>Thanks
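A minimal sketch of the point made above: `--bert_model` (and `from_pretrained`) expect a directory, not the weights file itself. The directory layout and path below are illustrative, assuming a checkpoint produced by the conversion script:
```python
# Sketch: load a locally converted checkpoint by pointing at its directory.
#
#   /path/to/uncased_L-12_H-768_A-12/
#       pytorch_model.bin
#       bert_config.json
#       vocab.txt
#
from pytorch_pretrained_bert import BertTokenizer, BertModel

model_dir = "/path/to/uncased_L-12_H-768_A-12"   # directory, not the .bin file
tokenizer = BertTokenizer.from_pretrained(model_dir, do_lower_case=True)
model = BertModel.from_pretrained(model_dir)
```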
transformers
108
closed
Does max_seq_length specify the maxium number of words
I'm trying to figure out how the `--max_seq_length` parameter works in [run_classifier](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py). Based on the source, it seems like it represents the number of words? Is that correct?
12-10-2018 15:14:22
12-10-2018 15:14:22
`max_seq_length` specifies the maximum number of tokens of the input. The number of tokens is greater than or equal to the number of words in the input. For example, the following sentence: ``` The man hits the saxophone and demonstrates how to properly use the racquet. ``` is tokenized as follows: ``` the man hits the saxophone and demonstrates how to properly use the ra ##c ##quet . ``` Depending on the task, 2 to 3 additional special tokens (`[CLS]` and `[SEP]`) are added to the input to format it.<|||||>@rodgzilla thanks!<|||||>could we make it smaller? <|||||>So what if there are sentences where the maximum number of tokens is greater than max_seq_length? Does that mean extra tokens beyond max_seq_length will get cut off?<|||||>@tsungruihon yes, just use smaller sentences @echan00 no automatic cut off, but there is a warning from the tokenizer that your inputs are too long and the model will throw an error. You have to limit the size manually.<|||||>Hi All, Does that mean we cannot use BERT for classifying long documents? The documents have 5-6 paragraphs, each paragraph having 10-15 lines with about 10-12 words per line.<|||||>@SaurabhBhatia0211 You can try splitting a document into smaller chunks (e.g. paragraphs or even lines), computing an embedding for each of those chunks, and averaging those vectors to get the document representation. <|||||>@rodgzilla is this true? > HuggingFace's Trainer API, including the SFTrainer, by default pads all sequences to the maximum length within the batch, not to the max_seq_length argument. The max_seq_length argument serves as a hard limit to the sequence length, truncating any examples that are longer than that. The API was designed this way because padding to the maximum sequence length in the batch improves computational efficiency. ?
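For illustration, a minimal sketch of how a sequence could be tokenized, truncated to `max_seq_length` (counting word pieces plus the special tokens), and padded. This mirrors the idea in the example scripts but is not a copy of run_classifier.py:
```python
# Sketch: max_seq_length counts WordPiece tokens plus [CLS]/[SEP], not words.
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
max_seq_length = 128

text = "The man demonstrates how to properly use the racquet."
tokens = tokenizer.tokenize(text)                 # word pieces, some prefixed with '##'
tokens = tokens[: max_seq_length - 2]             # leave room for the two special tokens
tokens = ["[CLS]"] + tokens + ["[SEP]"]

input_ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids += [0] * (max_seq_length - len(input_ids))   # pad up to max_seq_length
print(len(tokens), "tokens ->", len(input_ids), "input ids")
```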
transformers
107
closed
Fix optimizer to work with horovod
12-10-2018 10:10:34
12-10-2018 10:10:34
Great thanks!
transformers
106
closed
Picking max_sequence_length in run_classifier.py CoLA task
Is there an upper bound for the max_sequence_length parameter when using run_classifier.py with CoLA task? When I tested with the default max_sequence_length of 128, everything worked good, but once I changed it to something else, eg 1024, it started the training and failed on the first iteration with the error shown below: ```` Traceback (most recent call last): File "run_classifier.py", line 643, in <module> main() File "run_classifier.py", line 551, in main loss = model(input_ids, segment_ids, input_mask, label_ids) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/jet/var/python/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 868, in forward _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/jet/var/python/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 609, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/jet/var/python/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 199, in forward embeddings = self.dropout(embeddings) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/dropout.py", line 53, in forward return F.dropout(input, self.p, self.training, self.inplace) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/functional.py", line 595, in dropout return _functions.dropout.Dropout.apply(input, p, training, inplace) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/_functions/dropout.py", line 40, in forward ctx.noise.bernoulli_(1 - ctx.p).div_(1 - ctx.p) RuntimeError: Creating MTGP constants failed. at /jet/tmp/build/aten/src/THC/THCTensorRandom.cu:34 ```` The command I ran is ``` python run_classifier.py \ --task_name CoLA \ --do_train \ --do_eval \ --do_lower_case \ --data_dir $GLUE_DIR/Test/ \ --bert_model bert-base-uncased \ --max_seq_length 128 \ --train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir /tmp/BERT/test1 ````
12-10-2018 09:04:47
12-10-2018 09:04:47
As mentioned in #89, the maximum value of `max_sequence_length` is 512. <|||||>@rodgzilla thanks!
transformers
105
closed
weights initialized two times
Hi, I found that you initialized all weights twice. The first time is in the BertModel class: https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/pytorch_pretrained_bert/modeling.py#L586 And the second time is in the classes for each task, such as the BertForSequenceClassification class: https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/pytorch_pretrained_bert/modeling.py#L674 I think maybe you only need the second one?
12-09-2018 07:06:52
12-09-2018 07:06:52
I think it is required in both places, because both of them can be used individually. As mentioned in the README.md file, the model can be loaded with 7 classes. In fact, if you check the `BertForMaskedLM` and `BertForNextSentencePrediction` classes, they also have the weights initialised. Please correct me if I am wrong :)<|||||>You are right @Arjunsankarlal :-)
transformers
104
closed
BERT for classification example training files
Are there any example training files for `run_classifier.py`?
12-08-2018 15:16:50
12-08-2018 15:16:50
Please read the [example section in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#fine-tuning-with-bert-running-the-examples)
transformers
103
closed
Words after tokenization replaced with #
Hello, When training the bert-base-multilingual-cased model for Question and Answering, I see that the tokens look like this : ```tokens: [CLS] what is the ins ##ured _ name ? [SEP] versi ##cherung ##ss ##che ##in erg ##o hau ##srat ##versi ##cherung hr - sv 927 ##26 ##49 ##2 ``` Any idea why words are getting replaced with #? Here is the command I am using : ```python run_squad.py --bert_model bert-base-multilingual-cased --do_train --do_predict --train_file dataset_train.json --predict_file dataset_predict.json --train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 400 --doc_stride 20 --output_dir output_dir```
12-08-2018 11:56:57
12-08-2018 11:56:57
Because it uses WordPiece tokenization, which introduces the `##` token prefix. Check: https://github.com/google-research/bert#tokenization<|||||>@ymcui okay sweet, thank you. Will use the relevant one. <|||||>@ymcui How do I change this? Or is it not possible to do so?<|||||>1. If you are training completely from scratch, then it will be possible to use your own tokenizer. 2. However, if you are fine-tuning on the existing pre-trained BERT models, I think it will not be possible to change the tokenizer, as the pre-trained BERT models are trained using the WordPiece tokenizer. <|||||>@ymcui is right. Since the purpose of the present repo is to supply pre-trained models, you are basically stuck with WordPiece tokenization. If you build a new model and train it from scratch, you can select whatever tokenization you want :-)<|||||>@ymcui @thomwolf - Yes, that is quite a problem and thanks for getting back. Evaluating building something on our own now 🗡
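For illustration, a minimal sketch of how the original words can be reconstructed from WordPiece output by merging the `##` continuation pieces; the example text is illustrative:
```python
# Sketch: '##' marks a WordPiece continuation, so words can be rebuilt by merging pieces.
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
pieces = tokenizer.tokenize("Versicherungsschein Hausratversicherung")

words, current = [], ""
for piece in pieces:
    if piece.startswith("##"):
        current += piece[2:]          # continuation of the previous word
    else:
        if current:
            words.append(current)
        current = piece               # start of a new word
if current:
    words.append(current)

print(pieces)   # word pieces, some prefixed with '##'
print(words)    # reconstructed whole words
```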
transformers
102
closed
How to modify the model config?
Well I am trying to generate embedding for a large sentence. I get this error > Traceback (most recent call last): all_encoder_layers, _ = model(input_ids, token_type_ids=None, attention_mask=input_mask) File "/Users/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/Users/venv/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 611, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/Users/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/Users/venv/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 196, in forward position_embeddings = self.position_embeddings(position_ids) File "/Users/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/Users/venv/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 110, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/Users/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 1110, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: index out of range at /Users/soumith/code/builder/wheel/pytorch-src/aten/src/TH/generic/THTensorMath.cpp:352 I find that max_position_embeddings (default size 512) is getting exceeded. Which is taken from the config that is downloaded as part of the initial step. Initially the download was done to the default location `PYTORCH_PRETRAINED_BERT_CACHE` where I was not able to find the config.json other than the model file and vocab.txt (named with random characters). I did it to a specific location in local with the `cache_dir` param, here also I was facing the same problem of finding the bert_config.json. Also I found a file in both the default cache and local cache, named with junk characters of JSON type. When I tried opening it, I could just see this _{"url": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz", "etag": "\"61343686707ed78320e9e7f406946db2-49\""}_ Any help to modify the config.json would be appreciated. Or if this is been caused for a different reason, Please let me know.
12-08-2018 08:38:37
12-08-2018 08:38:37
The problem is that the `max_position_embeddings` default size is 512, which is exceeded in the case of my input, as I mentioned. For now I have just made a hack by hard coding it directly in the modelling.py file 😅. I still need to know where to find the bert_config.json file, since changing it there would be the correct way of doing it.<|||||>The config file is located in the .tar.gz archive that is getting downloaded, cached, and then extracted on the fly as you create a `BertModel` instance with the static `from_pretrained()` constructor. You'll see a log message like ``` extracting archive file /home/USERNAME/.pytorch_pretrained_bert/bert-base-cased.tar.gz to temp dir /tmp/tmp96bkwrj0 ``` If you extract that archive yourself, you'll find the bert_config.json file. The thing, though, is that it doesn't make sense to modify this file, as it is tied to the pretrained models. If you increase `max_position_embeddings` in the config, you won't be able to use the pretrained models. Instead, you will have to train a model from scratch, which may or, more likely, may not be feasible depending on the hardware you have access to.<|||||>Yeah, as you said, while debugging I noticed that each time the .tar.gz file was extracted to a new temp cache location and the models are fetched from there. Even in that case I was not able to find the json file where it was extracted. Also, I think `max_position_embeddings` does not relate to the model training because, when I changed its value (before loading the model with torch.load) like this `config.__dict__['max_position_embeddings'] = 2048` from 512 to 2048 (in a hard-coded way), the code ran properly without any error. And the [lines](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L99-L101) in modelling.py suggest that it can be customised if required. But I don't see a way of parameterising it so that it is changed while fetching the config, because it is loaded like [this](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L500-L501). It would be great if customisations were supported for the applicable options.<|||||>It does not make sense to customize options when using pretrained models, it only makes sense when training your own model from scratch. You cannot use the pretrained models with a max_position_embeddings other than 512, because the pretrained models contain pretrained embeddings for 512 positions. The original transformer paper introduced a [positional encoding](http://nlp.seas.harvard.edu/2018/04/03/attention.html#positional-encoding) which allows extrapolation to arbitrary input lengths, but this was not used in BERT. You can override max_position_embeddings, but this won't have any effect. The model will probably run fine for shorter inputs, but you will get a `RuntimeError: cuda runtime error (59)` for an input longer than 512 word pieces, because the embedding lookup [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/pytorch_pretrained_bert/modeling.py#L194) will attempt to use an index that is too large.<|||||>Indeed, it doesn't make sense to go over 512 tokens for a pre-trained model. If you have longer text, you should try the sliding window approach detailed on the original BERT repo: https://github.com/google-research/bert/issues/66<|||||>1. What if my sentences are well within 100 tokens in length? In that case does it make sense to change max_position_embeddings? 2. One more similar question: during model evaluation, if I pass a sentence to the model and generate embeddings, will it take the sentence length as the total number of tokens or the default 512? In that scenario, if my sentence has 10 unique tokens, then what does 512 stand for in the hidden layers?
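For illustration, a minimal sketch of the sliding-window idea mentioned above for inputs longer than 512 word pieces: split the token list into overlapping chunks that each fit the pretrained model. The chunk and stride sizes are illustrative:
```python
# Sketch: split a long token list into overlapping windows that fit a 512-position model.
def sliding_windows(tokens, max_len=512, stride=128):
    window = max_len - 2                      # reserve two positions for [CLS]/[SEP]
    chunks = []
    start = 0
    while start < len(tokens):
        chunk = tokens[start:start + window]
        chunks.append(["[CLS]"] + chunk + ["[SEP]"])
        if start + window >= len(tokens):
            break
        start += window - stride              # keep `stride` tokens of overlap
    return chunks

# Each chunk is fed to the model separately; per-chunk outputs can then be
# pooled (e.g. averaged) to get a representation of the whole document.
```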
transformers
101
closed
Adding --do_lower_case for all uncased BERTs examples
I had missed those, it should make sense to use them
12-07-2018 19:41:50
12-07-2018 19:41:50
Indeed, thanks for that!
transformers
100
closed
Squad dataset has multiple answers to a question.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/examples/run_squad.py#L143 The confusing part here is that in line 146, only the first answer is considered, so I am wondering why there is a check for multiple answers before it. Also, the SQuAD dataset has multiple answers for the same question. Is this by design or am I fundamentally missing something?
12-07-2018 16:02:00
12-07-2018 16:02:00
Hi, In `train-v2.0.json`, there is only one answer per question. In `dev-v2.0.json` and the hidden `test-v2.0.json`, there are several answers for a given question. I think the code that you mentioned is designed to prevent mistakenly using `dev-v2.0.json` for training. If you are going to use your own data or other types of data that have multiple answers, you can simply comment out this part. Best<|||||>Hello @ymcui , I did exactly that, thank you for confirming. I just wanted to be sure that there are no other implications. You are right, I have converted our dataset into SQuAD form and am using that with the model. Regards, Nischal
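For illustration, a minimal sketch of reading SQuAD-format data and taking only the first answer of each question for training, as discussed above; the file path is a placeholder:
```python
# Sketch: iterate SQuAD-format JSON and keep one gold answer per training question.
import json

with open("train-v2.0.json") as f:           # placeholder path
    articles = json.load(f)["data"]

for article in articles:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            if not qa["answers"]:
                continue                      # unanswerable question (SQuAD 2.0)
            answer = qa["answers"][0]         # training uses a single answer
            answer_text = answer["text"]
            answer_start = answer["answer_start"]
```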
transformers
99
closed
run_squad.py stuck on batch size greater than 1
Thanks a lot for the code! I need help figuring out why the script is not working as long as the batch_size is set above 1. Specifically, it seems to be stuck at line 908: loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions). I am using 4 K80s. Thanks!
12-07-2018 13:44:09
12-07-2018 13:44:09
Please copy-paste the command you are using to run this example.<|||||>Here you go ``` python ./run_squad.py --bert_model bert-base-uncased \ --do_train \ --do_predict \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --train_batch_size 32 \ --output_dir /tmp/debug_squad/ \ --gradient_accumulation_steps 2 \ ```<|||||>I don't see why this wouldn't work. Maybe update the repo & module to the latest version? You should also add `--do_lower_case` to the arguments if you are using an uncased model. Maybe post a full log of your output?<|||||>Updated to the latest version but it still does not work. When I terminate the script after it is stuck for some time, I get the message '../python3.7/threading.py", line 1072, in _wait_for_tstate_lock elif lock.acquire(block, timeout)'. Perhaps it is running into some deadlock condition? I'm not sure how to obtain a full log, would you be able to explain how I can do so? Thanks!<|||||>Hi, you can try with the new release 0.4.0.<|||||>Is the problem resolved? I am having the same issue with 2 GTX 1080 Ti. It gets stuck when running on multiple GPUs. I have to comment out `torch.nn.DataParallel(model)` to make it work. <|||||>If you are using a multi-GPU setting, pytorch splits the batch dynamically between the 2 GPUs. For example, with batch_size=5, GPU 0 may get (3, max_sequence_len) and GPU 1 may get (2, max_sequence_len). This could be a CUDA splitting issue; I recommend you try a single-GPU setting to debug this. Thanks, Ankit
transformers
98
closed
Problem about convert TF model and pretraining
First of all, thank you for this great work. I used the official TensorFlow implementation to pretrain on my corpus and then saved the model. I want to convert this model to PyTorch format and use it, but I got the error: Traceback (most recent call last): File "convert_tf_checkpoint_to_pytorch.py", line 105, in <module> convert() File "convert_tf_checkpoint_to_pytorch.py", line 86, in convert pointer = getattr(pointer, l[0]) AttributeError: 'Parameter' object has no attribute 'adam_m' Could you give me some advice? Thank you very much. It would be great if you could release the pretraining code. I think it would be useful even if we cannot use a TPU, because we could fine-tune on top of Google's pretrained model.
12-07-2018 13:42:59
12-07-2018 13:42:59
Hi @zhezhaoa, I see, I will fix this in the next release. For now you should be able to fix that by installing the repo from source (git clone the repo and `pip install -e .`) and changing [line 53 of convert_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py#L53) from ```if name[-1] in ["adam_v", "adam_m"]:``` to ```if any(n in ["adam_v", "adam_m"] for n in name):```<|||||>Thank you very much! It would be great if you could provide pretraining code like the official TF implementation.<|||||>Ok, this loading issue is now fixed in master and the new 0.4.0 release.
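For illustration, a minimal sketch of the idea behind the fix: when walking the variables of a TensorFlow checkpoint, skip the Adam optimizer slots wherever they appear in the scope path, not only at the end. The checkpoint path is a placeholder and the mapping step is elided:
```python
# Sketch: list TF checkpoint variables and skip optimizer state before conversion.
import tensorflow as tf

tf_path = "/path/to/bert_model.ckpt"          # placeholder checkpoint path
for full_name, shape in tf.train.list_variables(tf_path):
    name = full_name.split("/")
    if any(n in ("adam_v", "adam_m") for n in name):
        continue                              # Adam slots are not model weights
    array = tf.train.load_variable(tf_path, full_name)
    # ... map the scope path `name` onto the corresponding PyTorch parameter ...
```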
transformers
97
closed
RuntimeError: cuda runtime error (59) : device-side assert triggered
I got this error when using bert model to get the present as a feature for training. Could anyone can help? Thanks a lot. Here is the cuda and python trace. ``` /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed. THCudaCheck FAIL file=/pytorch/aten/src/THC/generated/../generic/THCTensorMathPointwise.cu line=266 error=59 : device-side assert triggered Traceback (most recent call last): File "examples/bert_pku_seg.py", line 89, in <module> train() File "examples/bert_pku_seg.py", line 48, in train trainer.train(SAVE_DIR) File "/data/home/liuyang/dlab/dlab/process/trainer.py", line 61, in train after_batch_iter_hook=train_step_hook) File "/data/home/liuyang/dlab/dlab/process/common.py", line 49, in data_runner forward_output = model(batch_sentence) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liuyang/dlab/dlab/model/sequence_tagger.py", line 66, in forward batch_words_present, seq_length = self.embedder.embed(sentences) File "/data/home/liuyang/dlab/dlab/embedder/stack_embedder.py", line 23, in embed present, _ = embedder(batch_sentence) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liuyang/dlab/dlab/embedder/base_embedder.py", line 28, in forward return self.embed(*input) File "/data/home/liuyang/dlab/dlab/embedder/bert_embedder.py", line 141, in embed output_all_encoded_layers=False) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 607, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 195, in forward embeddings = words_embeddings + position_embeddings + token_type_embeddings RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THC/generated/../generic/THCTensorMathPointwise.cu:266 ```
12-07-2018 01:39:03
12-07-2018 01:39:03
And here is the trace when running in cpu. ``` File "/data/home/liuyang/dlab/dlab/embedder/stack_embedder.py", line 23, in embed present, _ = embedder(batch_sentence) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liuyang/dlab/dlab/embedder/base_embedder.py", line 28, in forward return self.embed(*input) File "/data/home/liuyang/dlab/dlab/embedder/bert_embedder.py", line 141, in embed output_all_encoded_layers=False) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 607, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 192, in forward position_embeddings = self.position_embeddings(position_ids) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 110, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/functional.py", line 1110, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorMath.cpp:352 ``` It seems like embedding indexing out of range, but all of the token ids were generated from `tokenizer.convert_tokens_to_ids`. I think it is cause by indexing the position_embeddings rather than the word_embedding.<|||||>I am also facing the same problem. What is the solution? ``` result = self.forward(*input, **kwargs) 490 for hook in self._forward_hooks.values(): 491 hook_result = hook(self, input, result) ~/projects/yct-experimentation-master/pytorch_yct_sidd/pytorch_pretrained_bert_yct/modeling.py in forward(self, input_ids) 956 segment_ids = torch.zeros_like(input_ids) 957 # Zero-pad up to the sequence length. 
--> 958 _, pooled_output = self.bert(input_ids, segment_ids, input_mask, output_all_encoded_layers=False) 959 pooled_output = self.dropout(pooled_output) 960 return self.classifier(pooled_output) ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 487 result = self._slow_forward(*input, **kwargs) 488 else: --> 489 result = self.forward(*input, **kwargs) 490 for hook in self._forward_hooks.values(): 491 hook_result = hook(self, input, result) ~/projects/yct-experimentation-master/pytorch_yct_sidd/pytorch_pretrained_bert_yct/modeling.py in forward(self, input_ids, token_type_ids, attention_mask, output_all_encoded_layers) 624 extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 625 --> 626 embedding_output = self.embeddings(input_ids, token_type_ids) 627 encoded_layers = self.encoder(embedding_output, 628 extended_attention_mask, ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 487 result = self._slow_forward(*input, **kwargs) 488 else: --> 489 result = self.forward(*input, **kwargs) 490 for hook in self._forward_hooks.values(): 491 hook_result = hook(self, input, result) ~/projects/yct-experimentation-master/pytorch_yct_sidd/pytorch_pretrained_bert_yct/modeling.py in forward(self, input_ids, token_type_ids) 192 193 words_embeddings = self.word_embeddings(input_ids) --> 194 position_embeddings = self.position_embeddings(position_ids) 195 token_type_embeddings = self.token_type_embeddings(token_type_ids) 196 ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 487 result = self._slow_forward(*input, **kwargs) 488 else: --> 489 result = self.forward(*input, **kwargs) 490 for hook in self._forward_hooks.values(): 491 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input) 116 return F.embedding( 117 input, self.weight, self.padding_idx, self.max_norm, --> 118 self.norm_type, self.scale_grad_by_freq, self.sparse) 119 120 def extra_repr(self): ~/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1452 # remove once script supports set_grad_enabled 1453 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1454 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1455 1456 RuntimeError: CUDA error: device-side assert triggered ``` <|||||>@siddBanPsu I got the problem because I didn't limit the max length of the sentence so that the position embedder get the position token id lager than its length.<|||||>> @siddBanPsu I got the problem because I didn't limit the max length of the sentence so that the position embedder get the position token id lager than its length. 3q,I meet the same problem too, and I solve the problem after I set the max length of input sequence, but here how the position embedder get the position token id?<|||||>@yyHaker hello,i have set the max length,but i still get the same error,could you tell me how to set the max length? 
my code is as following: parser.add_argument("--max_seq_length", default=64, type=int, help="The maximum total input sequence length after tokenization.") error: TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [117,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed<|||||>@yihenglu these are old issues related to `pytorch_pretrained_bert`, you should rather open a new issue with a clear description of the model you are using, the version of the library and the error message you have.<|||||>If you are using a tokenizer try: `tokenizer(input, truncation=True)` This will truncate the input to the max_length<|||||>> This actually solved my issue...<|||||>Hi I am using LayoutLM V2 model. I am trying to finetune the the model by using my custom dataset. I got bellow error message. Please tell me how to resolve the error. ../aten/src/ATen/native/cuda/Indexing.cu:703: indexSelectLargeIndex: block: [79,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed. Traceback (most recent call last): File "layoutlmV2/train.py", line 124, in <module> trainer.train() File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1371, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1609, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2300, in training_step loss = self.compute_loss(model, inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2332, in compute_loss outputs = model(**inputs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 1238, in forward return_dict=return_dict, File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 906, in forward inputs_embeds=inputs_embeds, File "/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 752, in _calc_text_embeddings spatial_position_embeddings = self.embeddings._calc_spatial_position_embeddings(bbox) File "/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 92, in _calc_spatial_position_embeddings h_position_embeddings = self.h_position_embeddings(bbox[:, :, 3] - bbox[:, :, 1]) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py", line 160, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2183, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: CUDA error: device-side assert triggered 0% 0/240 [00:00<?, ?it/s] you can download the code and dataset along with notebook https://drive.google.com/file/d/1VdTvn580pGgVBlN03UX5alaFqSbc8Q5_/view?usp=sharing Github issue: 
https://github.com/microsoft/unilm/issues/755 Please help
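The recurring fix in this thread is to keep every encoded sequence within the model's position-embedding limit (512 for BERT-style models), either by setting a max sequence length or by letting the tokenizer truncate. Below is a minimal sketch of the truncation approach with the current `transformers` tokenizer API; the checkpoint name and the example text are placeholders, not taken from the thread above.

```python
from transformers import AutoTokenizer

# Placeholder checkpoint: any BERT-style model with 512 position embeddings.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

long_text = "a very long document " * 1000  # far longer than 512 tokens

# truncation=True caps the sequence at max_length, so no position id (and no
# token index) can exceed the size of the embedding tables.
encoded = tokenizer(long_text, truncation=True, max_length=512, return_tensors="pt")
print(encoded["input_ids"].shape)  # torch.Size([1, 512])
```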
transformers
96
closed
BertForMultipleChoice and Swag dataset example.
Hi! This is the code that enables BERT models to be used for multiple-choice problems (such as [Swag](https://github.com/rowanz/swagaf) and [ROCStories](http://cs.rochester.edu/nlp/rocstories/)). For my implementation, I use the algorithm described in #90 and in issue [#38](https://github.com/google-research/bert/issues/38) from the TensorFlow implementation repo. The comments and the `README.md` file have been updated, but I would very much appreciate it if someone could check my changes. I am also unable to test my code on multiple GPUs, so I can't check whether it works in that setting. I will let a training run overnight to see what kind of results we get, although I won't be able to do a proper hyper-parameter search due to my limited computing power. I also have a question: I used `pandas` to load the Swag dataset, so do I need to specify it somewhere in a file to add it as a dependency for `pip`? I have never published a module on `pip`.
12-06-2018 18:34:05
12-06-2018 18:34:05
Hi Gregory, I will take some time to review and test that this week. Just a word on additional dependencies: I would like to keep the package as light as possible (currently it's aligned with the dependencies of AllenNLP), so if you can manage to avoid adding any additional dependency it would be better.<|||||>Did you get good results fine-tuning the model on SWAG? We should also indicate (reproducible) numbers in the `README.md` if we want to add this example to the repo (like the other examples).<|||||>Completely forgot to run the training, I'm running it right now and I should have the results by the end of the day. My parameters are the following: ```bash python examples/run_swag.py \ --do_train \ --do_eval \ --do_lower_case \ --data_dir $SWAG_DIR/data/ \ --bert_model bert-base-uncased \ --max_seq_length 100 \ --train_batch_size 4 --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir /tmp/swag ``` The batch size of 4 isn't ideal, but my GPU memory is already pretty much full with these parameters. <|||||>I get a 77.76% accuracy on SWAG; I would be interested in the results with a batch size of 16 like in the BERT paper. I've added my results to the readme and noted that the difference in performance was probably caused by the difference in `training_batch_size`. @thomwolf Any chance you could run it on multiple GPUs? I will commit a patch later to remove the pandas dependency.<|||||>Yes, I'll give it a try on a bigger machine. You can use gradient accumulation to get a bigger batch size on a single GPU, you know that, right? (I wrote a lengthy blurb on that [here](https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255))<|||||>I didn't know that, actually, this is super cool! I'll give it a try. Thanks for the link.<|||||>I did another fine-tuning run using a `training_batch_size` of 16 and a `gradient_accumulation_steps` of 4 and I get a much better accuracy (80.62% instead of 77.76%). I have updated the readme accordingly and if everything is okay the branch should be ready for a merge. @thomwolf Thanks for introducing me to gradient accumulation, it's quite a neat trick and I think I will use it a lot more in the future. <|||||>Oh that's great, now we are in the ballpark of the 81.6 reported in the BERT paper! Looks good to me, I'm merging!<|||||>Still not able to get 81.6: after 3 epochs it gets to 79.97. Any help?<|||||>Probably relevant #461 <|||||>Any chance someone can post the command for testing only, after training, using a model from a checkpoint? Because even when I test it on the original SWAG data, it complains about the label column even though there is none. I set --do_test but it's still not working.<|||||>Can anybody help me understand this code? I am looking for an example of using `BertForMultipleChoice` for training on the SWAG dataset. Thank you in advance<|||||> When I run run_multiple_choice.py I get this error and I am not sure why; I need help here. Error: 2020-11-30 21:53:38.693828: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 Traceback (most recent call last): File "./transformers/examples/multiple-choice/run_multiple_choice.py", line 26, in <module> import transformers File "/usr/local/lib/python3.6/dist-packages/transformers/__init__.py", line 20, in <module> from . 
import dependency_versions_check File "/usr/local/lib/python3.6/dist-packages/transformers/dependency_versions_check.py", line 21, in <module> from .file_utils import is_tokenizers_available File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 88, in <module> import datasets # noqa: F401 File "/usr/local/lib/python3.6/dist-packages/datasets/__init__.py", line 26, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 41, in <module> from .arrow_writer import ArrowWriter, TypedSequence File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_writer.py", line 26, in <module> from .features import Features, _ArrayXDExtensionType File "/usr/local/lib/python3.6/dist-packages/datasets/features.py", line 210, in <module> class _ArrayXDExtensionType(pa.PyExtensionType): AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'<|||||>I uninstalled pyarrow and installed it again. it fixed the error.
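The gradient accumulation trick used in this thread (per-step batches of 4 accumulated into an effective batch size of 16) can be reproduced in a few lines of plain PyTorch. The sketch below uses a toy model, optimizer, and data as stand-ins for the actual `run_swag.py` training loop, so treat it as an illustration of the idea rather than the example script's code.

```python
import torch
from torch import nn

# Toy stand-ins so the sketch runs on its own.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
batches = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(16)]

accumulation_steps = 4  # effective batch size = 4 (per step) * 4 = 16

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(batches):
    loss = nn.functional.mse_loss(model(inputs), targets)
    (loss / accumulation_steps).backward()  # gradients accumulate in .grad
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()        # one parameter update every 4 mini-batches
        optimizer.zero_grad()
```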
transformers
95
closed
Not updating the BERT embeddings during the fine tuning process
Is there any way of not updating the BERT embeddings during the fine-tuning process? For example, while running on SQuAD, I want to see the effect of not updating the parameters associated with the BERT embeddings. I saw that `requires_grad` is set to True for CPU and fp16, which makes me think that gradients are computed for all the parameters. I'm asking if there's a quick way to disable updates to those embeddings but still let the model update the other parameters.
12-06-2018 14:40:33
12-06-2018 14:40:33
You can do it by setting the `requires_grad` attribute of the embedding layer in `BertModel`. That will look something like this: ``` model = BertForQuestionAnswering.from_pretrained(args.bert_model, cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(args.local_rank)) model.bert.embeddings.requires_grad = False ``` I haven't tested this code but it should do what you are asking. More explanation are available on the [PyTorch forums](https://discuss.pytorch.org/t/how-the-pytorch-freeze-network-in-some-layers-only-the-rest-of-the-training/7088)<|||||>Thanks let me try this..I was thinking of going through the BERTAdam optimizer. <|||||>With `requires_grad=false`: {"exact_match": 80.65279091769158, "f1": 88.04683744174879} Without `requires_grad=false`: {"exact_match": 81.33396404919584, "f1": 88.43774959214048}<|||||>Thanks for your feedback. Have you checked that the values of the embedding matrix are indeed unchanged by the finetuning?<|||||>Nope I haven't.. I plan to do that today. @thomwolf do you think we're on the right track? Just need your 2 cents :) <|||||>Yes you can do ```python for p in model.bert.embeddings.parameters(): p.requires_grad = False ``` You can also just not send these parameters to the optimizer (when you create the optimizer) as detailed on the PyTorch forums. Both methods will work. Combining the two will gives the lowest overhead (no un-necessary computation of gradient and no un-necessary check of update during the optimizer step). The PyTorch forum is the best reference for this kind of general question.<|||||>Correct me if I'm wrong, but setting `model.bert.embeddings.requires_grad = False` does not seem to propagate. ```python bert = BertModel.from_pretrained('bert-base-uncased') bert.embeddings.requires_grad = False for name, param in bert.named_parameters(): if param.requires_grad: print(name) ``` Output: > embeddings.word_embeddings.weight > embeddings.position_embeddings.weight > embeddings.token_type_embeddings.weight > embeddings.LayerNorm.weight > embeddings.LayerNorm.bias > encoder.layer.0.attention.self.query.weight > encoder.layer.0.attention.self.query.bias > encoder.layer.0.attention.self.key.weight > encoder.layer.0.attention.self.key.bias > encoder.layer.0.attention.self.value.weight > encoder.layer.0.attention.self.value.bias > encoder.layer.0.attention.output.dense.weight > encoder.layer.0.attention.output.dense.bias > encoder.layer.0.attention.output.LayerNorm.weight > encoder.layer.0.attention.output.LayerNorm.bias > encoder.layer.0.intermediate.dense.weight > encoder.layer.0.intermediate.dense.bias > encoder.layer.0.output.dense.weight > encoder.layer.0.output.dense.bias > encoder.layer.0.output.LayerNorm.weight > encoder.layer.0.output.LayerNorm.bias > encoder.layer.1.attention.self.query.weight > encoder.layer.1.attention.self.query.bias > encoder.layer.1.attention.self.key.weight > encoder.layer.1.attention.self.key.bias > encoder.layer.1.attention.self.value.weight > encoder.layer.1.attention.self.value.bias > encoder.layer.1.attention.output.dense.weight > encoder.layer.1.attention.output.dense.bias > encoder.layer.1.attention.output.LayerNorm.weight > encoder.layer.1.attention.output.LayerNorm.bias > encoder.layer.1.intermediate.dense.weight > encoder.layer.1.intermediate.dense.bias > encoder.layer.1.output.dense.weight > encoder.layer.1.output.dense.bias > encoder.layer.1.output.LayerNorm.weight > encoder.layer.1.output.LayerNorm.bias > encoder.layer.2.attention.self.query.weight > 
encoder.layer.2.attention.self.query.bias > encoder.layer.2.attention.self.key.weight > encoder.layer.2.attention.self.key.bias > encoder.layer.2.attention.self.value.weight > encoder.layer.2.attention.self.value.bias > encoder.layer.2.attention.output.dense.weight > encoder.layer.2.attention.output.dense.bias > encoder.layer.2.attention.output.LayerNorm.weight > encoder.layer.2.attention.output.LayerNorm.bias > encoder.layer.2.intermediate.dense.weight > encoder.layer.2.intermediate.dense.bias > encoder.layer.2.output.dense.weight > encoder.layer.2.output.dense.bias > encoder.layer.2.output.LayerNorm.weight > encoder.layer.2.output.LayerNorm.bias > encoder.layer.3.attention.self.query.weight > encoder.layer.3.attention.self.query.bias > encoder.layer.3.attention.self.key.weight > encoder.layer.3.attention.self.key.bias > encoder.layer.3.attention.self.value.weight > encoder.layer.3.attention.self.value.bias > encoder.layer.3.attention.output.dense.weight > encoder.layer.3.attention.output.dense.bias > encoder.layer.3.attention.output.LayerNorm.weight > encoder.layer.3.attention.output.LayerNorm.bias > encoder.layer.3.intermediate.dense.weight > encoder.layer.3.intermediate.dense.bias > encoder.layer.3.output.dense.weight > encoder.layer.3.output.dense.bias > encoder.layer.3.output.LayerNorm.weight > encoder.layer.3.output.LayerNorm.bias > encoder.layer.4.attention.self.query.weight > encoder.layer.4.attention.self.query.bias > encoder.layer.4.attention.self.key.weight > encoder.layer.4.attention.self.key.bias > encoder.layer.4.attention.self.value.weight > encoder.layer.4.attention.self.value.bias > encoder.layer.4.attention.output.dense.weight > encoder.layer.4.attention.output.dense.bias > encoder.layer.4.attention.output.LayerNorm.weight > encoder.layer.4.attention.output.LayerNorm.bias > encoder.layer.4.intermediate.dense.weight > encoder.layer.4.intermediate.dense.bias > encoder.layer.4.output.dense.weight > encoder.layer.4.output.dense.bias > encoder.layer.4.output.LayerNorm.weight > encoder.layer.4.output.LayerNorm.bias > encoder.layer.5.attention.self.query.weight > encoder.layer.5.attention.self.query.bias > encoder.layer.5.attention.self.key.weight > encoder.layer.5.attention.self.key.bias > encoder.layer.5.attention.self.value.weight > encoder.layer.5.attention.self.value.bias > encoder.layer.5.attention.output.dense.weight > encoder.layer.5.attention.output.dense.bias > encoder.layer.5.attention.output.LayerNorm.weight > encoder.layer.5.attention.output.LayerNorm.bias > encoder.layer.5.intermediate.dense.weight > encoder.layer.5.intermediate.dense.bias > encoder.layer.5.output.dense.weight > encoder.layer.5.output.dense.bias > encoder.layer.5.output.LayerNorm.weight > encoder.layer.5.output.LayerNorm.bias > encoder.layer.6.attention.self.query.weight > encoder.layer.6.attention.self.query.bias > encoder.layer.6.attention.self.key.weight > encoder.layer.6.attention.self.key.bias > encoder.layer.6.attention.self.value.weight > encoder.layer.6.attention.self.value.bias > encoder.layer.6.attention.output.dense.weight > encoder.layer.6.attention.output.dense.bias > encoder.layer.6.attention.output.LayerNorm.weight > encoder.layer.6.attention.output.LayerNorm.bias > encoder.layer.6.intermediate.dense.weight > encoder.layer.6.intermediate.dense.bias > encoder.layer.6.output.dense.weight > encoder.layer.6.output.dense.bias > encoder.layer.6.output.LayerNorm.weight > encoder.layer.6.output.LayerNorm.bias > encoder.layer.7.attention.self.query.weight > 
encoder.layer.7.attention.self.query.bias > encoder.layer.7.attention.self.key.weight > encoder.layer.7.attention.self.key.bias > encoder.layer.7.attention.self.value.weight > encoder.layer.7.attention.self.value.bias > encoder.layer.7.attention.output.dense.weight > encoder.layer.7.attention.output.dense.bias > encoder.layer.7.attention.output.LayerNorm.weight > encoder.layer.7.attention.output.LayerNorm.bias > encoder.layer.7.intermediate.dense.weight > encoder.layer.7.intermediate.dense.bias > encoder.layer.7.output.dense.weight > encoder.layer.7.output.dense.bias > encoder.layer.7.output.LayerNorm.weight > encoder.layer.7.output.LayerNorm.bias > encoder.layer.8.attention.self.query.weight > encoder.layer.8.attention.self.query.bias > encoder.layer.8.attention.self.key.weight > encoder.layer.8.attention.self.key.bias > encoder.layer.8.attention.self.value.weight > encoder.layer.8.attention.self.value.bias > encoder.layer.8.attention.output.dense.weight > encoder.layer.8.attention.output.dense.bias > encoder.layer.8.attention.output.LayerNorm.weight > encoder.layer.8.attention.output.LayerNorm.bias > encoder.layer.8.intermediate.dense.weight > encoder.layer.8.intermediate.dense.bias > encoder.layer.8.output.dense.weight > encoder.layer.8.output.dense.bias > encoder.layer.8.output.LayerNorm.weight > encoder.layer.8.output.LayerNorm.bias > encoder.layer.9.attention.self.query.weight > encoder.layer.9.attention.self.query.bias > encoder.layer.9.attention.self.key.weight > encoder.layer.9.attention.self.key.bias > encoder.layer.9.attention.self.value.weight > encoder.layer.9.attention.self.value.bias > encoder.layer.9.attention.output.dense.weight > encoder.layer.9.attention.output.dense.bias > encoder.layer.9.attention.output.LayerNorm.weight > encoder.layer.9.attention.output.LayerNorm.bias > encoder.layer.9.intermediate.dense.weight > encoder.layer.9.intermediate.dense.bias > encoder.layer.9.output.dense.weight > encoder.layer.9.output.dense.bias > encoder.layer.9.output.LayerNorm.weight > encoder.layer.9.output.LayerNorm.bias > encoder.layer.10.attention.self.query.weight > encoder.layer.10.attention.self.query.bias > encoder.layer.10.attention.self.key.weight > encoder.layer.10.attention.self.key.bias > encoder.layer.10.attention.self.value.weight > encoder.layer.10.attention.self.value.bias > encoder.layer.10.attention.output.dense.weight > encoder.layer.10.attention.output.dense.bias > encoder.layer.10.attention.output.LayerNorm.weight > encoder.layer.10.attention.output.LayerNorm.bias > encoder.layer.10.intermediate.dense.weight > encoder.layer.10.intermediate.dense.bias > encoder.layer.10.output.dense.weight > encoder.layer.10.output.dense.bias > encoder.layer.10.output.LayerNorm.weight > encoder.layer.10.output.LayerNorm.bias > encoder.layer.11.attention.self.query.weight > encoder.layer.11.attention.self.query.bias > encoder.layer.11.attention.self.key.weight > encoder.layer.11.attention.self.key.bias > encoder.layer.11.attention.self.value.weight > encoder.layer.11.attention.self.value.bias > encoder.layer.11.attention.output.dense.weight > encoder.layer.11.attention.output.dense.bias > encoder.layer.11.attention.output.LayerNorm.weight > encoder.layer.11.attention.output.LayerNorm.bias > encoder.layer.11.intermediate.dense.weight > encoder.layer.11.intermediate.dense.bias > encoder.layer.11.output.dense.weight > encoder.layer.11.output.dense.bias > encoder.layer.11.output.LayerNorm.weight > encoder.layer.11.output.LayerNorm.bias > pooler.dense.weight > pooler.dense.bias Instead 
using the following does give the expected output. ```python bert = BertModel.from_pretrained('bert-base-uncased') for name, param in bert.named_parameters(): if name.startswith('embeddings'): param.requires_grad = False ```` <|||||>> Nope I haven't. I plan to do that today. @thomwolf do you think we're on the right track? Just need your 2 cents :) How to check the values of the embedding matrix change or not?<|||||>> Correct me if I'm wrong, but setting `model.bert.embeddings.requires_grad = False` does not seem to propagate. > > ```python > bert = BertModel.from_pretrained('bert-base-uncased') > bert.embeddings.requires_grad = False > for name, param in bert.named_parameters(): > if param.requires_grad: > print(name) > ``` > > Output: > > > embeddings.word_embeddings.weight > > embeddings.position_embeddings.weight > > embeddings.token_type_embeddings.weight > > embeddings.LayerNorm.weight > > embeddings.LayerNorm.bias > > encoder.layer.0.attention.self.query.weight > > encoder.layer.0.attention.self.query.bias > > encoder.layer.0.attention.self.key.weight > > encoder.layer.0.attention.self.key.bias > > encoder.layer.0.attention.self.value.weight > > encoder.layer.0.attention.self.value.bias > > encoder.layer.0.attention.output.dense.weight > > encoder.layer.0.attention.output.dense.bias > > encoder.layer.0.attention.output.LayerNorm.weight > > encoder.layer.0.attention.output.LayerNorm.bias > > encoder.layer.0.intermediate.dense.weight > > encoder.layer.0.intermediate.dense.bias > > encoder.layer.0.output.dense.weight > > encoder.layer.0.output.dense.bias > > encoder.layer.0.output.LayerNorm.weight > > encoder.layer.0.output.LayerNorm.bias > > encoder.layer.1.attention.self.query.weight > > encoder.layer.1.attention.self.query.bias > > encoder.layer.1.attention.self.key.weight > > encoder.layer.1.attention.self.key.bias > > encoder.layer.1.attention.self.value.weight > > encoder.layer.1.attention.self.value.bias > > encoder.layer.1.attention.output.dense.weight > > encoder.layer.1.attention.output.dense.bias > > encoder.layer.1.attention.output.LayerNorm.weight > > encoder.layer.1.attention.output.LayerNorm.bias > > encoder.layer.1.intermediate.dense.weight > > encoder.layer.1.intermediate.dense.bias > > encoder.layer.1.output.dense.weight > > encoder.layer.1.output.dense.bias > > encoder.layer.1.output.LayerNorm.weight > > encoder.layer.1.output.LayerNorm.bias > > encoder.layer.2.attention.self.query.weight > > encoder.layer.2.attention.self.query.bias > > encoder.layer.2.attention.self.key.weight > > encoder.layer.2.attention.self.key.bias > > encoder.layer.2.attention.self.value.weight > > encoder.layer.2.attention.self.value.bias > > encoder.layer.2.attention.output.dense.weight > > encoder.layer.2.attention.output.dense.bias > > encoder.layer.2.attention.output.LayerNorm.weight > > encoder.layer.2.attention.output.LayerNorm.bias > > encoder.layer.2.intermediate.dense.weight > > encoder.layer.2.intermediate.dense.bias > > encoder.layer.2.output.dense.weight > > encoder.layer.2.output.dense.bias > > encoder.layer.2.output.LayerNorm.weight > > encoder.layer.2.output.LayerNorm.bias > > encoder.layer.3.attention.self.query.weight > > encoder.layer.3.attention.self.query.bias > > encoder.layer.3.attention.self.key.weight > > encoder.layer.3.attention.self.key.bias > > encoder.layer.3.attention.self.value.weight > > encoder.layer.3.attention.self.value.bias > > encoder.layer.3.attention.output.dense.weight > > encoder.layer.3.attention.output.dense.bias > > 
encoder.layer.3.attention.output.LayerNorm.weight > > encoder.layer.3.attention.output.LayerNorm.bias > > encoder.layer.3.intermediate.dense.weight > > encoder.layer.3.intermediate.dense.bias > > encoder.layer.3.output.dense.weight > > encoder.layer.3.output.dense.bias > > encoder.layer.3.output.LayerNorm.weight > > encoder.layer.3.output.LayerNorm.bias > > encoder.layer.4.attention.self.query.weight > > encoder.layer.4.attention.self.query.bias > > encoder.layer.4.attention.self.key.weight > > encoder.layer.4.attention.self.key.bias > > encoder.layer.4.attention.self.value.weight > > encoder.layer.4.attention.self.value.bias > > encoder.layer.4.attention.output.dense.weight > > encoder.layer.4.attention.output.dense.bias > > encoder.layer.4.attention.output.LayerNorm.weight > > encoder.layer.4.attention.output.LayerNorm.bias > > encoder.layer.4.intermediate.dense.weight > > encoder.layer.4.intermediate.dense.bias > > encoder.layer.4.output.dense.weight > > encoder.layer.4.output.dense.bias > > encoder.layer.4.output.LayerNorm.weight > > encoder.layer.4.output.LayerNorm.bias > > encoder.layer.5.attention.self.query.weight > > encoder.layer.5.attention.self.query.bias > > encoder.layer.5.attention.self.key.weight > > encoder.layer.5.attention.self.key.bias > > encoder.layer.5.attention.self.value.weight > > encoder.layer.5.attention.self.value.bias > > encoder.layer.5.attention.output.dense.weight > > encoder.layer.5.attention.output.dense.bias > > encoder.layer.5.attention.output.LayerNorm.weight > > encoder.layer.5.attention.output.LayerNorm.bias > > encoder.layer.5.intermediate.dense.weight > > encoder.layer.5.intermediate.dense.bias > > encoder.layer.5.output.dense.weight > > encoder.layer.5.output.dense.bias > > encoder.layer.5.output.LayerNorm.weight > > encoder.layer.5.output.LayerNorm.bias > > encoder.layer.6.attention.self.query.weight > > encoder.layer.6.attention.self.query.bias > > encoder.layer.6.attention.self.key.weight > > encoder.layer.6.attention.self.key.bias > > encoder.layer.6.attention.self.value.weight > > encoder.layer.6.attention.self.value.bias > > encoder.layer.6.attention.output.dense.weight > > encoder.layer.6.attention.output.dense.bias > > encoder.layer.6.attention.output.LayerNorm.weight > > encoder.layer.6.attention.output.LayerNorm.bias > > encoder.layer.6.intermediate.dense.weight > > encoder.layer.6.intermediate.dense.bias > > encoder.layer.6.output.dense.weight > > encoder.layer.6.output.dense.bias > > encoder.layer.6.output.LayerNorm.weight > > encoder.layer.6.output.LayerNorm.bias > > encoder.layer.7.attention.self.query.weight > > encoder.layer.7.attention.self.query.bias > > encoder.layer.7.attention.self.key.weight > > encoder.layer.7.attention.self.key.bias > > encoder.layer.7.attention.self.value.weight > > encoder.layer.7.attention.self.value.bias > > encoder.layer.7.attention.output.dense.weight > > encoder.layer.7.attention.output.dense.bias > > encoder.layer.7.attention.output.LayerNorm.weight > > encoder.layer.7.attention.output.LayerNorm.bias > > encoder.layer.7.intermediate.dense.weight > > encoder.layer.7.intermediate.dense.bias > > encoder.layer.7.output.dense.weight > > encoder.layer.7.output.dense.bias > > encoder.layer.7.output.LayerNorm.weight > > encoder.layer.7.output.LayerNorm.bias > > encoder.layer.8.attention.self.query.weight > > encoder.layer.8.attention.self.query.bias > > encoder.layer.8.attention.self.key.weight > > encoder.layer.8.attention.self.key.bias > > encoder.layer.8.attention.self.value.weight > > 
encoder.layer.8.attention.self.value.bias > > encoder.layer.8.attention.output.dense.weight > > encoder.layer.8.attention.output.dense.bias > > encoder.layer.8.attention.output.LayerNorm.weight > > encoder.layer.8.attention.output.LayerNorm.bias > > encoder.layer.8.intermediate.dense.weight > > encoder.layer.8.intermediate.dense.bias > > encoder.layer.8.output.dense.weight > > encoder.layer.8.output.dense.bias > > encoder.layer.8.output.LayerNorm.weight > > encoder.layer.8.output.LayerNorm.bias > > encoder.layer.9.attention.self.query.weight > > encoder.layer.9.attention.self.query.bias > > encoder.layer.9.attention.self.key.weight > > encoder.layer.9.attention.self.key.bias > > encoder.layer.9.attention.self.value.weight > > encoder.layer.9.attention.self.value.bias > > encoder.layer.9.attention.output.dense.weight > > encoder.layer.9.attention.output.dense.bias > > encoder.layer.9.attention.output.LayerNorm.weight > > encoder.layer.9.attention.output.LayerNorm.bias > > encoder.layer.9.intermediate.dense.weight > > encoder.layer.9.intermediate.dense.bias > > encoder.layer.9.output.dense.weight > > encoder.layer.9.output.dense.bias > > encoder.layer.9.output.LayerNorm.weight > > encoder.layer.9.output.LayerNorm.bias > > encoder.layer.10.attention.self.query.weight > > encoder.layer.10.attention.self.query.bias > > encoder.layer.10.attention.self.key.weight > > encoder.layer.10.attention.self.key.bias > > encoder.layer.10.attention.self.value.weight > > encoder.layer.10.attention.self.value.bias > > encoder.layer.10.attention.output.dense.weight > > encoder.layer.10.attention.output.dense.bias > > encoder.layer.10.attention.output.LayerNorm.weight > > encoder.layer.10.attention.output.LayerNorm.bias > > encoder.layer.10.intermediate.dense.weight > > encoder.layer.10.intermediate.dense.bias > > encoder.layer.10.output.dense.weight > > encoder.layer.10.output.dense.bias > > encoder.layer.10.output.LayerNorm.weight > > encoder.layer.10.output.LayerNorm.bias > > encoder.layer.11.attention.self.query.weight > > encoder.layer.11.attention.self.query.bias > > encoder.layer.11.attention.self.key.weight > > encoder.layer.11.attention.self.key.bias > > encoder.layer.11.attention.self.value.weight > > encoder.layer.11.attention.self.value.bias > > encoder.layer.11.attention.output.dense.weight > > encoder.layer.11.attention.output.dense.bias > > encoder.layer.11.attention.output.LayerNorm.weight > > encoder.layer.11.attention.output.LayerNorm.bias > > encoder.layer.11.intermediate.dense.weight > > encoder.layer.11.intermediate.dense.bias > > encoder.layer.11.output.dense.weight > > encoder.layer.11.output.dense.bias > > encoder.layer.11.output.LayerNorm.weight > > encoder.layer.11.output.LayerNorm.bias > > pooler.dense.weight > > pooler.dense.bias > > Instead using the following does give the expected output. > > ```python > bert = BertModel.from_pretrained('bert-base-uncased') > for name, param in bert.named_parameters(): > if name.startswith('embeddings'): > param.requires_grad = False > ``` Hi How to tell the optimizer that freezing the embedding?<|||||>> > Nope I haven't. I plan to do that today. @thomwolf do you think we're on the right track? Just need your 2 cents :) > > How to check the values of the embedding matrix change or not? Hi! 
You can use the following code in order to check if any layer has been modified (it should work for any pytorch code if I am not wrong, not just BERT): ```python3 import copy from transformers import BertModel bert = BertModel.from_pretrained('bert-base-uncased') layer = bert.embeddings frozen_parameters = {} # Copy tensors for name, p in layer.named_parameters(): frozen_parameters[name] = copy.deepcopy(p.data) # Freeze in order to be able to compare later # Do stuff ... # Check if the value of the tensors have been updated for name, p in layer.named_parameters(): updated = (frozen_parameters[name] != p.data).any().cpu().detach().numpy() print(f"Layer '{name}' has been updated? {'yes' if updated else 'no'}") ``` It is very similar to the code I use in order to check if a layer has been updated (remember that it won't be updated if `grad_fn` is `None`), but I have not tested this exactly code. I hope it helps!
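Combining the two options discussed above (setting `requires_grad = False` on the embedding parameters and then leaving those parameters out of the optimizer) might look like the sketch below. The optimizer and learning rate are placeholders and not the settings from `run_squad.py`.

```python
import torch
from transformers import BertModel

bert = BertModel.from_pretrained("bert-base-uncased")

# 1) Freeze the embedding parameters so no gradients are computed for them.
for name, param in bert.named_parameters():
    if name.startswith("embeddings"):
        param.requires_grad = False

# 2) Build the optimizer over the remaining (trainable) parameters only.
trainable_params = [p for p in bert.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable_params, lr=2e-5)

print(sum(p.numel() for p in trainable_params), "trainable parameters")
```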
transformers
94
closed
Fixing the commentary of the `SquadExample` class.
Fixing the commentary of `SquadExample`, which had been copy-pasted from `InputExample`.
12-06-2018 12:16:55
12-06-2018 12:16:55
transformers
93
closed
Zoeliao/dev
RT
12-06-2018 02:15:01
12-06-2018 02:15:01
transformers
92
closed
Bert uncased and Bert large giving much lower results than Bert cased base
Is there a reason why the BERT uncased model and the BERT large model give lower results than the cased base model on downstream tasks?
12-05-2018 19:13:24
12-05-2018 19:13:24
Any specific example that we could investigate?<|||||>I've implemented a version of SQuAD 2.0 on top of the current SQuAD code that is similar to the way Google implemented theirs in the official BERT repo. The base cased model works fine, but I noticed that uncased models tend to give worse results, even the large model.<|||||>@kh522 would love to try it out. Are you planning to share your code?<|||||>Sorry, but not quite yet. I was wondering if anyone had an intuition behind the error. If I recall correctly, the SQuAD file lowercases the inputs for the tokenizer by default. Shouldn't this mean that the pretrained uncased model actually does better than the cased version?<|||||>Hi @kh522, were you careful not to lowercase the input in the case of the uncased models? By default the tokenizer will lowercase the input [see here in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#tokenizer-berttokenizer)<|||||>Ah, I see. That would be a problem. I assume that the difference in accent markers will lead to a lower result. That being said, is there a difference between lowercasing before and after the WordPiece tokenization?<|||||>I've tried running it with the --do_lower_case flag set to False, and the results are still not good. Is there another possible idea?<|||||>Try 10 different seeds maybe? A bigger batch size can help too. More generally, you should try to explore the space of hyper-parameters for fine-tuning; there is often a high variance in the fine-tuning of BERT, so you will need to compute means/variances over several runs to get meaningful numbers.<|||||>In run_squad.py, the seed is set to 42, therefore the results reported in the repo should be reproducible, as there would not be any other randomness. <|||||>> Hi @kh522, were you careful not to lowercase the input in the case of the uncased models? By default the tokenizer will lowercase the input [see here in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#tokenizer-berttokenizer) @thomwolf Just want to be sure (as the link does not land me on anything specific). If we are pre-training/fine-tuning an uncased model, we still don't need to lowercase the input text as the tokenizer takes care of it; is this the correct understanding? In case someone lowercases the text, what problem can it cause?<|||||>@abmitra84 Yes, the tokenizer of the uncased model takes care of it. **Lowercasing the text yourself beforehand doesn't affect it at all, it's just not necessary since the tokenizer takes care of it**. The code below should clarify it. ``` # Install the latest Hugging Face libraries (datasets & transformers) !pip install datasets git+https://github.com/huggingface/transformers/ from transformers import AutoModelForMaskedLM, AutoTokenizer model = "bert-base-uncased" tokenizer = AutoTokenizer.from_pretrained(model, use_fast=True) model = AutoModelForMaskedLM.from_pretrained(model) sent = "SARS-CoV-2 is a type of the Coronavirus" sent_lower = sent.lower() print("[1] ", sent) print("[2] ",sent_lower) print("[3] ",tokenizer.tokenize(sent)) print("[4] ",tokenizer.tokenize(sent_lower)) ``` Output: > [1] SARS-CoV-2 is a type of the Coronavirus > [2] sars-cov-2 is a type of the coronavirus > [3] ['sar', '##s', '-', 'co', '##v', '-', '2', 'is', 'a', 'type', 'of', 'the', 'corona', '##virus'] > [4] ['sar', '##s', '-', 'co', '##v', '-', '2', 'is', 'a', 'type', 'of', 'the', 'corona', '##virus']
transformers
91
closed
run_classifier.py improvements
Hi! This PR contains multiple improvements to the `run_classifier.py` file. The changes are: - removing trailing whitespace ([PEP 8](https://www.python.org/dev/peps/pep-0008/)), - simplifying the data processing code a bit, in particular the tensor formatting, - fixing issue #83 by adapting the value of the `num_labels` argument of `BertForSequenceClassification.from_pretrained` to the dataset being used.
12-05-2018 17:22:31
12-05-2018 17:22:31
Neat!
transformers
90
closed
Fine-tuning on a multiple-choice dataset?
Is it possible to fine-tune on multiple-choice problems, which usually have one passage, a question, and four options (A, B, C, D)?
12-05-2018 14:01:41
12-05-2018 14:01:41
Yes it is, the code is not written yet but I'm planning to work on it. The idea is to format the input data the same way as the authors of [Improving Language Understanding with Unsupervised Learning](https://blog.openai.com/language-unsupervised/) do: ![Multiple choice GPT](https://i.imgur.com/z0Eanvy.png) You run an inference on `(context, choice)` for each choice, you pass the `[CLS]` token representation through a linear layer with 1 output and then compute a softmax over the outputs of all choices. I will try to create a PR with this code very soon. <|||||>Thanks for the reply. Actually, I have the same plan. But I am not sure whether it will work. Anyway, I will have a try.<|||||>If it worked in the OpenAI paper, I don't really see why it wouldn't work with this architecture.<|||||>@Qzsl123 The code for the multiple-choice task is available in PR #96 if you want to test it.<|||||>@rodgzilla yeah, I am trying to run it. Thanks for the wonderful job!<|||||>> Yes it is, the code is not written yet but I'm planning to work on it. The idea is to format the input data the same way as the authors of [Improving Language Understanding with Unsupervised Learning](https://blog.openai.com/language-unsupervised/) do: > > ![Multiple choice GPT](https://camo.githubusercontent.com/e5d95abc42ca2acb493a710383c949eb01c10bfb/68747470733a2f2f692e696d6775722e636f6d2f7a3045616e76792e706e67) > > You run an inference on `(context, choice)` for each choice, you pass the `[CLS]` token representation through a linear layer with 1 output and then compute a softmax over the outputs of all choices. > > I will try to create a PR with this code very soon. Hi, the multiple-choice problem usually has one passage, a question and four options (A, B, C, D). In your model, does context mean passage & question? <|||||>Any update on this issue?
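The scoring scheme described in this thread can be sketched as a small head on top of BERT's pooled output: each (context, choice) pair is encoded separately, projected to a single logit, and the logits are compared across choices with a softmax/cross-entropy. This is only an illustration of the idea with the current `transformers` API, not the `BertForMultipleChoice` implementation that was eventually merged.

```python
import torch
from torch import nn
from transformers import BertModel

class ToyMultipleChoiceHead(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.scorer = nn.Linear(self.bert.config.hidden_size, 1)  # one logit per choice

    def forward(self, input_ids, attention_mask):
        # input_ids, attention_mask: (batch, num_choices, seq_len)
        batch, num_choices, seq_len = input_ids.shape
        pooled = self.bert(
            input_ids=input_ids.view(-1, seq_len),
            attention_mask=attention_mask.view(-1, seq_len),
        ).pooler_output                                    # (batch * num_choices, hidden)
        logits = self.scorer(pooled).view(batch, num_choices)
        return logits  # feed to nn.CrossEntropyLoss, i.e. softmax over choices
```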
transformers
89
closed
bert-base-multilingual-cased - Text bigger than 512
Hello, I am trying to extract features from German text using bert-base-multilingual-cased. However, my text is longer than 512 words. Is there any way to use the pretrained BERT for text longer than 512 words?
12-05-2018 10:11:21
12-05-2018 10:11:21
Hello, I do not think that it is possible out of the box. The paper states the following: > We use learned positional embeddings with supported sequence lengths up to 512 tokens. The positional embeddings are therefore limited to 512 tokens. You may be able to add positional embeddings for positions greater than 512 and learn them on your specific dataset, but I don't know how effective that would be.<|||||>Hi @agemagician, you cannot really use pretrained BERT for text longer than 512 tokens per se, but you can use the sliding window approach. Check this issue of the original BERT repo for more details: https://github.com/google-research/bert/issues/66
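The sliding window approach mentioned above splits a long text into overlapping chunks that each fit within the 512-token limit and encodes them separately. A minimal sketch with the current `transformers` tokenizer API is below; the German sample text is a placeholder, and the window/stride sizes are arbitrary choices.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

long_text = "Dies ist ein sehr langer deutscher Beispieltext. " * 400

# return_overflowing_tokens splits the text into windows of at most 512 tokens,
# each overlapping the previous window by `stride` tokens.
windows = tokenizer(
    long_text,
    max_length=512,
    truncation=True,
    stride=128,
    return_overflowing_tokens=True,
)
print(len(windows["input_ids"]), "overlapping windows of at most 512 tokens each")
```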
transformers
88
closed
Error when calculating loss and running backward
I'm using the sentence classification example. I used my own dataset for emotionclassification (4 classes). The hyper-parameters are as follows: <pre> args.max_seq_length = 100 args.do_train = True args.do_eval = True args.do_lower_case = True args.train_batch_size = 32 args.eval_batch_size = 8 args.learning_rate = 2e-5 args.num_train_epochs = 3 args.warmup_proportion = 0.1 args.no_cuda = False args.local_rank = -1 args.gpu_id = 1 args.seed = 412 args.gradient_accumulation_steps = 1 args.optimize_on_cpu = False args.fp16 = False args.loss_scale = 128 </pre> I prepared my dataset accordingly and properly: <pre> 12/04/2018 21:23:02 - INFO - __main__ - *** Example *** 12/04/2018 21:23:02 - INFO - __main__ - guid: train-1 12/04/2018 21:23:02 - INFO - __main__ - tokens: [CLS] but i don ' t [ sep ] u just did [ sep ] i don ##t want to talk to u [SEP] 12/04/2018 21:23:02 - INFO - __main__ - input_ids: 101 2021 1045 2123 1005 1056 1031 19802 1033 1057 2074 2106 1031 19802 1033 1045 2123 2102 2215 2000 2831 2000 1057 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12/04/2018 21:23:02 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12/04/2018 21:23:02 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12/04/2018 21:23:02 - INFO - __main__ - label: angry (id = 3) </pre> When I run the following code, a runtime error occurred: <pre> for step, batch in enumerate(tqdm(train_dataloader, desc="Iteration")): &nbsp &nbsp batch = tuple(t.to(device) for t in batch) &nbsp &nbsp input_ids, input_mask, segment_ids, label_ids = batch &nbsp &nbsp loss = model(input_ids, segment_ids, input_mask, label_ids) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-20-1977b86302ed> in <module>() 17 try: ---> 18 loss.backward() 19 except RuntimeError: /raid5/peixiang/anaconda3/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph) 92 """ ---> 93 torch.autograd.backward(self, gradient, retain_graph, create_graph) 94 /raid5/peixiang/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 89 tensors, grad_tensors, retain_graph, create_graph, ---> 90 allow_unreachable=True) # allow_unreachable flag 91 RuntimeError: cublas runtime error : the GPU program failed to execute at /opt/conda/conda-bld/pytorch_1532581333611/work/aten/src/THC/THCBlas.cu:411 </pre> What might be the cause? The dataset? I run the MRPC example without any issue.
12-04-2018 13:30:58
12-04-2018 13:30:58
I probably know the bug. The final output layer is for binary classification but I use it for 4-class classification. I thought BERT could automatically decide between sigmoid and softmax. I will replace it with my own classifier tomorrow and see how it goes.<|||||>The mismatched output size between BERT and our dataset was the bug. Also, remember to set `num_labels` to your output size: <pre> output_size = 4 model.classifier = nn.Linear(768, output_size) model.num_labels = output_size </pre>
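The same fix can be applied when the model is created by passing `num_labels`, so the classification head is built with the right output size from the start. A small sketch with the current `transformers` API; the checkpoint name and the 4-class setup are placeholders.

```python
from transformers import BertForSequenceClassification

# Build the classification head with 4 outputs instead of the default 2.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=4,
)
print(model.classifier.out_features)  # 4
```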
transformers
87
closed
Readme file links
Adding links to the example files in `README.md`.
12-04-2018 12:46:36
12-04-2018 12:46:36
Thanks Grégory!