Columns: repo | number | state | title | body | created_at | closed_at | comments
transformers
387
closed
run_squad.py cannot predict only
The existing code cannot load a fine-tuned model properly when running prediction only. https://github.com/huggingface/pytorch-pretrained-BERT/blob/f3e5404880902a1bdfed2b1d47d10a6c672dc430/examples/run_squad.py#L1011-L1025
03-17-2019 09:01:40
03-17-2019 09:01:40
It would be better to imitate the original BERT repo and add a separate argument `args.vocab_file`; during prediction, the `bert_model` argument would then be the directory containing the fine-tuned model. <|||||>Makes sense indeed. Would you like to submit a PR on that?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
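For illustration, a minimal sketch of the suggestion above -- a separate `--vocab_file` argument plus a `--bert_model` directory holding the fine-tuned weights. The argument names and paths are assumptions for this sketch, not the actual change merged into run_squad.py.

```python
import argparse

from pytorch_pretrained_bert import BertTokenizer, BertForQuestionAnswering

parser = argparse.ArgumentParser()
parser.add_argument("--bert_model", required=True,
                    help="Directory with the fine-tuned pytorch_model.bin and bert_config.json")
parser.add_argument("--vocab_file", required=True,
                    help="Vocabulary file, passed separately as in the original BERT repo")
parser.add_argument("--do_lower_case", action="store_true")
args = parser.parse_args()

# Build the tokenizer from the explicit vocab file instead of a model name.
tokenizer = BertTokenizer(args.vocab_file, do_lower_case=args.do_lower_case)

# Load the fine-tuned weights from the directory given by --bert_model.
model = BertForQuestionAnswering.from_pretrained(args.bert_model)
model.eval()
```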
transformers
386
closed
Shared tokenizer interface
Up to this point, `tokenize()` and `encode()` mean different things in different places. In GPT-land, `tokenize` doesn't get us all the way to token IDs. Ideally, the tokenizers would share a common interface so that they can be plugged in and out of places just like the models. I don't know if you want breaking changes, so I just created `encode()` and `decode()` as aliases on tokenizers that did not adhere to that spec already. There is more cleanup to do. Since it seems that non-BERT models were added later on, the BERT files should probably be renamed into `tokenizer_bert`, etc., but I left that in place to maintain compatibility. BERT is still missing `decode`. Please advise and I can clean it up.
03-17-2019 05:51:57
03-17-2019 05:51:57
Hi Catalin, thanks for that! Yes, backward compatibility is an important thing to keep in mind here. I think the changes look nice. We should also: - document them in the readme, and - add associated tests in the relevant test files.<|||||>Just before this gets merged - I've noticed that the GPT(1) tokenizer ignores whether or not a string contains special tokens, and hence doesn't encode them properly. We've got around this by splitting `text` on whitespace inside the `tokenize` method and iterating, checking if a word is a special token or not, like so:
```
def tokenize(self, text):
    split_tokens = []
    for t in text.split(' '):
        if t in self.special_tokens:
            split_tokens.extend([t])
        else:
            # current tokenization stuff
```
Would be nice if this could be included :)<|||||>@andrewPoulton we can include this, but only for the whitespace tokenizer fallback of GPT-1. The original tokenizer (SpaCy) would split tokens like `<\w>` in pieces, which is the reason it was not included originally. Overall I must say I'm not a huge fan of the not-split-specific-tokens feature (the `never_split` option). We've added it due to popular request, but it is very dependent on the underlying behavior of the tokenizer and the character content of special tokens (does it contain spaces, dashes...), and from the issues it looks like a common source of bugs and unintended behavior (see #410 for a latest example).<|||||>Hi @CatalinVoss, from the state of the PR I understand you are still working on it. Maybe ping me when you think it's ready for merging into master and I'll have a look again?<|||||>Hey @thomwolf, yeah, I just didn't get around to it yet, but then I needed the BERT decoding piece yesterday so I merged it in. It's imperfect. If we renamed everything to be consistent with words, tokens, token IDs, etc. we would have to change the method names, per my comment above. Do you want to do that? Otherwise perhaps better to do that in a separate PR and target some sort of v0.7 branch?<|||||>OK, we still have to add the docstrings, tests and details in the readme for these methods. I haven't found time this week. I will see if I can find time next week. <|||||>@CatalinVoss I also was wondering about this while writing some extra functionality for the GPT2 tokenizer to encode new strings for finetuning tasks, but ended up writing a `special_token_to_id` functionality for getting IDs of special tokens -- the pattern of usage of the GPT1 tokenizer for finetuning tasks seems to be to add the special token IDs after encoding the rest of the string to process.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hey @thomwolf, what was the decision on this? I can revisit if you want. We still wanted this in our fork… thanks much!<|||||>I never really had time to add tests and documentation to the PR but it's a good idea. Let's add this feature in the new release.<|||||>Looks like this was taken care of with your refactor in `tokenization_utils.py`. Very nice!!
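For illustration, a sketch of the shared interface this PR is aiming at: `encode()` and `decode()` expressed in terms of the per-model primitives. The class and method bodies here are illustrative, not the library's actual implementation.

```python
class CommonTokenizerInterface:
    """Hypothetical base class: every tokenizer exposes the same small set of methods."""

    def tokenize(self, text):
        """Split a string into sub-word token strings (model-specific)."""
        raise NotImplementedError

    def convert_tokens_to_ids(self, tokens):
        raise NotImplementedError

    def convert_ids_to_tokens(self, ids):
        raise NotImplementedError

    def encode(self, text):
        # Alias added for tokenizers that didn't go all the way to token IDs.
        return self.convert_tokens_to_ids(self.tokenize(text))

    def decode(self, ids):
        # Naive join; real tokenizers also undo their sub-word markers (##, </w>, ...).
        return " ".join(self.convert_ids_to_tokens(ids))
```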
transformers
385
closed
pre-training a BERT from scratch
I am wondering whether I can train a new BERT from scratch with this pytorch BERT.
03-17-2019 00:45:26
03-17-2019 00:45:26
We can't now. The code is still incomplete. Has it become possible recently? I really want to help but I am not familiar with TensorFlow.<|||||>A related issue is #376. However, pytorch-pretrained-BERT was mostly designed to provide easy and fast access to pretrained models. If you want to train a BERT model from scratch you will need a more robust code base for training and data-processing than the simple examples that are provided in this repo. I would probably advise to move to a more integrated codebase like the nice [XLM repo](https://github.com/facebookresearch/XLM) of @glample and @aconneau.<|||||>I've been able to use the codebase for this, and didn't see many issues, however I might be overlooking something. If you construct and initialize a new model instead of loading from pretrained, you can use the `simple_lm_finetuning` script to train on new data. Thomas, did you have any specific other issues in mind? <|||||>NVidia recently [released](https://medium.com/future-vision/bert-meets-gpus-403d3fbed848?fbclid=IwAR0bFskUVVKDRyYF-9cQGgRXeq7dTvteGHi10HaTG5zI7_eE8oW-BfrxYQw) TF and PyTorch code to pretrain BERT from scratch. I wrapped it in a script to launch on multiple machines on AWS [here](https://github.com/cybertronai/Megatron-LM/blob/master/launch_pretrain_bert.py). Currently I'm still figuring out why the 64-GPU AWS throughput is 2x worse than what they are getting locally.<|||||>Thanks @yaroslavvb!<|||||>Thanks! @yaroslavvb<|||||>@yaroslavvb [this article](https://medium.com/the-mission/why-building-your-own-deep-learning-computer-is-10x-cheaper-than-aws-b1c91b55ce8c) explains why cloud computing can have inconsistent throughput. I think it's a great read, and I've been working on setting up my own rig. I see in [the script](https://github.com/cybertronai/Megatron-LM/blob/master/launch_pretrain_bert.py#L49) that you're using 8 GPUs. How long is the pretraining taking with that? I'm not sure whether to go with gcloud TPUs or AWS. The BERT readme said that a single TPU will take up to 2 weeks to finish pretraining.<|||||>@yaroslavvb hi, did you train BERT successfully? I trained it with https://github.com/NVIDIA/Megatron-LM/scripts/pretrain_bert_tfrecords_distributed.sh on 2 machines with 16 GPUs, but it stopped after ' > number of parameters: 336226108' and I got nothing else after that; the GPU-Util is 0%.<|||||>@MarvinLong yes, I was able to launch it on multiple machines and observe the model training, and it's about 600ms per step. I did not try training it to completion as the scaling efficiency on p3dn instances on AWS is only about 50% because of an NCCL bug currently. I'm wondering if your machines can't communicate with each other on the right ports. @jrc2139 I have not observed inconsistent throughput; I've used this [codebase](https://github.com/cybertronai/imagenet18) to train ImageNet in 19 minutes on 64 GPUs on AWS p3 instances.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> I've been able to use the codebase for this, and didn't see many issues, however I might be overlooking something. If you construct and initialize a new model instead of loading from pretrained, you can use the `simple_lm_finetuning` script to train on new data. > > Thomas, did you have any specific other issues in mind? 
I'm trying to train on my own custom data and I'm a bit confused about how to "construct and initialize a new model" -- i.e., when not working with pretrained models. Any help appreciated.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@yaroslavvb Hi, I can launch Megatron-LM to pretrain BERT, but my MLM loss stays around 6.8. How about you? Can you pretrain BERT successfully?<|||||>> @yaroslavvb Hi, I can launch Megatron-LM to pretrain BERT, but my MLM loss stays around 6.8. How about you? Can you pretrain BERT successfully? I was able to pre-train using this repo [https://github.com/google-research/bert]. However, even with one million steps, the MLM accuracy was 64.69% and its loss was 2.4. I am eager to know if someone else has pre-trained and got MLM accuracy higher than this.<|||||>> > @yaroslavvb Hi, I can launch Megatron-LM to pretrain BERT, but my MLM loss stays around 6.8. How about you? Can you pretrain BERT successfully? > > I was able to pre-train using this repo [https://github.com/google-research/bert]. However, even with one million steps, the MLM accuracy was 64.69% and its loss was 2.4. I am eager to know if someone else has pre-trained and got MLM accuracy higher than this. According to the pretraining log from gluon-nlp [here](https://github.com/dmlc/web-data/blob/master/gluonnlp/logs/bert/bert_base_pretrain.log), your MLM accuracy seems right, though with a higher loss. I think you can try to check it with fine-tuning. <|||||>@ibrahimishag I want to know if you pretrained your BERT with BooksCorpus. I cannot find a copy of that. For my pretraining, my BERT loss is decreasing very slowly after removing clip-grad-norm. There must be something wrong on my side.<|||||>@JF-D I pre-trained on another domain-specific corpus.<|||||>Can someone please specify why Thomas mentions/refers to the XLM repo from Facebook? Is there any fault in huggingface? I thought I would just use the Hugging Face repo without using the pretrained parameters they generously provided for us. I am struggling with Facebook's "SpanBERT" repo, and it seems hard to even run it due to a distributed launch issue. I hope it is OK to use Hugging Face's one to reproduce the paper results.<|||||>Is it possible to train from scratch using the run_language_modeling.py code? Does Hugging Face support training from scratch? I looked at this example https://huggingface.co/blog/how-to-train but this thread is hinting that training from scratch is not currently supported.<|||||>Any update on training from scratch BERT-like models with huggingface? <|||||>Yes this has been supported for close to a year now ;)<|||||>@julien-c Thanks. I really appreciate the prompt response. Is there any tutorial/example specifically for BERT (/ALBERT) pretraining?<|||||>Pretraining from scratch is a very rigid demand for users.<|||||>> @julien-c Thanks. I really appreciate the prompt response. > > Is there any tutorial/example specifically for BERT (/ALBERT) pretraining ? 
Waiting for an example.<|||||>This is all there is to pretraining:
```
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from pathlib import Path
from transformers import BertTokenizer
from tokenizers.processors import BertProcessing
from transformers import RobertaConfig
from transformers import RobertaForMaskedLM
from transformers import LineByLineTextDataset
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
import torch

tokenizer = BertTokenizer('./data/vocab.txt')
tokens = tokenizer.encode("b140 m33 c230")
print('token ids: {}'.format(tokens))

config = RobertaConfig(
    vocab_size=1458,
    max_position_embeddings=130,
    hidden_size=384,
    intermediate_size=1536,
    num_attention_heads=4,
    num_hidden_layers=4,
    type_vocab_size=1,
)

# FROM SCRATCH
model = RobertaForMaskedLM(config=config)

# CONTINUE TRAINING -- i.e., just load your saved model using "from_pretrained"
# model = RobertaForMaskedLM.from_pretrained('./trained_model')

print(model.num_parameters())

# We should save this dataset since it's a bit slow to build each time
dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="./data/my_data.txt",
    block_size=128,
)

data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

training_args = TrainingArguments(
    output_dir="./out/my_run",
    overwrite_output_dir=True,
    num_train_epochs=100,
    per_device_train_batch_size=128,
    save_steps=100,
    save_total_limit=2,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset,
    prediction_loss_only=True,
)

trainer.train()
trainer.save_model("./trained_model")
```
Note that this is a small model, with a specialized, fixed vocabulary, so I'm using the old BERT tokenizer I had working from a previous project. For "real" languages you'd use one of the RobertaTokenizer options. I'm just getting back to this project after being away for a while, and I'm noticing I'm getting a warning about switching to the Datasets Library. I'll do that at some point, but it's working for now so I won't mess with it. Also, I'm curious if anyone can tell me how to set the maximum length of inputs, so that longer inputs truncate? UPDATE: Duh, sorry, looks like `tokenizer.encode()` takes `max_length` and `truncation` parameters. Simple.<|||||>One question; I'm noticing that creating the dataset...
```
dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="./data/my_data.txt",
    block_size=128,
)
```
...is taking a long time. Is it possible to save that as a file, to avoid the wait when I (re)run training?<|||||>Hi, are there any specifications of how to generate a dataset for "pretraining from scratch" with raw texts?<|||||>> One question; I'm noticing that creating the dataset...
> 
> ```
> dataset = LineByLineTextDataset(
>     tokenizer=tokenizer,
>     file_path="./data/my_data.txt",
>     block_size=128,
> )
> ```
> 
> ...is taking a long time. Is it possible to save that as a file, to avoid the wait when I (re)run training?

Same question here.<|||||>Detailed tutorial: https://mlcom.github.io/Create-Language-Model/
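For the recurring question just above about caching the slow `LineByLineTextDataset`, a sketch (the paths and the pickling approach are assumptions, not an official API): the dataset only stores tokenized examples, so it can be dumped with `torch.save` and reloaded.

```python
import os
import torch
from transformers import BertTokenizer, LineByLineTextDataset

CACHE_PATH = "./data/cached_lm_dataset.pt"  # illustrative cache location
tokenizer = BertTokenizer("./data/vocab.txt")

if os.path.exists(CACHE_PATH):
    dataset = torch.load(CACHE_PATH)           # reload the pre-built examples
else:
    dataset = LineByLineTextDataset(
        tokenizer=tokenizer,
        file_path="./data/my_data.txt",
        block_size=128,
    )
    torch.save(dataset, CACHE_PATH)            # pickle the whole dataset object
```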
transformers
384
closed
Incrementally Train BERT with minimum QnA records - to get improved results
This question is posted on StackExchange too, but points to the BERT group: https://datascience.stackexchange.com/questions/47406/incrementally-train-bert-with-minimum-qna-records The question is: after training on my data with some new questions and answers, new checkpoints are generated. With the new checkpoints, when asked the same question, the answer is still not correct. Why is the training not helping to make the answer right? Though the linked questions point to the TensorFlow version, the same was tried in the PyTorch version too and the results are the same. Some experts on BERT, transformers, or neural networks can probably better pinpoint the issue. Details: We are using Google BERT for question answering. We have fine-tuned BERT with the SQuAD QnA release train data set (https://github.com/google-research/bert , https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json). It generated new checkpoints, and BERT is giving good answers for most of the questions we ask on our text documents. However, there are some questions which it answers wrong, so we are trying to further fine-tune with our question and known answer on our text document. We further trained based on the last generated checkpoint and got a new checkpoint. With the new checkpoint, when we ask the same question, the answer did not get corrected! Previously BERT was giving the wrong answer with 99% confidence, and now it is giving the same wrong answer with 95% confidence. Can someone with the same or a similar experience please suggest something? The following questions in the BERT GitHub issues have been unanswered for quite some time: BERT accuracy reduced after providing custom training; the answer is also not correct: https://github.com/google-research/bert/issues/492 Unable to incrementally train BERT with custom training: https://github.com/google-research/bert/issues/482 Little training has no impact: https://github.com/google-research/bert/issues/481 Custom Domain Training: https://github.com/google-research/bert/issues/498
03-16-2019 10:54:15
03-16-2019 10:54:15
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Wondering why no one is replying. On Sat, 25 May 2019, 4:09 PM stale[bot] <notifications@github.com> wrote: > Closed #384 <https://github.com/huggingface/pytorch-pretrained-BERT/issues/384>.
transformers
383
closed
pull from original
03-16-2019 03:04:18
03-16-2019 03:04:18
transformers
382
closed
fp16 overflow in GPT-2
When trying to train in mixed precision, overflow is bound to occur after casting the model weights to fp16, since multiplication by 1e10 is used to mask the attention weights. I noticed BERT multiplies by 1e4 (within fp16 range) instead; with that change the overflow problem doesn't occur and it's now training happily :) I'm happy to make the various changes and open a PR if wanted?
03-15-2019 18:16:07
03-15-2019 18:16:07
Hi @andrewPoulton, yes indeed we could update that for GPT-2, would be happy to get a PR. Can you check the generations are identical for a few seeds (it should be)?<|||||>Yeah, sure - what generations do you mean?<|||||>Fixed with #495
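A toy illustration of the overflow being discussed (not the GPT-2 code itself): a 1e10 masking constant leaves the fp16 range after casting, while the 1e4 constant BERT uses stays finite.

```python
import torch

scores = torch.randn(1, 4, 4)             # toy attention scores (batch, seq, seq)
mask = torch.tensor([1., 1., 1., 0.])     # 1 = real token, 0 = padding

# Masking with 1e10 overflows once cast to fp16 (max ~65504), giving -inf scores.
masked_overflow = (scores - 1e10 * (1.0 - mask)).half()
# Masking with 1e4 (BERT-style) stays representable in fp16; softmax is still ~0 there.
masked_safe = (scores - 1e4 * (1.0 - mask)).half()

print(masked_overflow[0, 0])   # -inf in the masked position
print(masked_safe[0, 0])       # about -1e4 in the masked position
```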
transformers
381
closed
Added missing imports.
03-15-2019 11:50:18
03-15-2019 11:50:18
Thanks @tseretelitornike!
transformers
380
closed
typo in annotation
modify `heruistic` to `heuristic` in line 660, `charcter` to `character` in line 661.
03-14-2019 09:32:56
03-14-2019 09:32:56
transformers
379
closed
typo
modify `mull` to `null` in line 474 annotation.
03-14-2019 09:03:52
03-14-2019 09:03:52
transformers
378
closed
Add absolute imports to GPT, GPT-2, Transfo-XL and and fix empty nbest_predictions.json
Fix #377 Fix #374
03-14-2019 08:57:56
03-14-2019 08:57:56
transformers
377
closed
Empty nbest_predictions.json for run_squad.py
This is due to extra indentation on line 623 in run_squad.py. The line should be outside of the "if else" block.
03-13-2019 21:15:05
03-13-2019 21:15:05
Good catch! Do you want to submit a PR? Otherwise, I'll fix it in the next release.<|||||>Hi @thomwolf, the issue still persists: there were two extra indentations and you removed only one, moving the line out of the inner if-else, but one more indentation should be removed to bring [L620](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L620) out of the outer if-else. Doing this produces a non-empty `nbest_predictions.json` file.
transformers
376
closed
run_lm_finetuning generates short training cases
In the original Tensorflow BERT repo, training cases for the Next Sentence task are generated by [concatenating multiple sentences](https://github.com/google-research/bert/blob/master/create_pretraining_data.py#L219) up to the maximum sequence length. In other words the "sentences" used are actually longer chunks of text split at sentence boundaries, which may include more than one sentence. The LM finetuning example script doesn't do this, and just uses two single sentences as a training example, which means that most training examples are significantly shorter than max_seq_length. Would you like me to submit a patch to bring our implementation in line with theirs, or was leaving it out intentional?
03-13-2019 16:05:30
03-13-2019 16:05:30
Sorry, I just realized this was mentioned in the [original PR](https://github.com/huggingface/pytorch-pretrained-BERT/pull/124).<|||||>Indeed. Happy to welcome a PR if you want to improve this example!<|||||>Working on it now! One question, though: It seems likely that I'll have to make significant changes. The reason for this is there is a significant random element in the concatenation of sentences, and so the exact number of training examples is difficult to know in advance. The concept of an index into the list of training examples also stops making sense because of this. This makes a simple implementation based on a Dataset object difficult. I can see two possible approaches to resolve this: 1) Before training begins, the script does a pass over the dataset and pregenerates all training cases as InputExample objects. These could also be regenerated each epoch to increase diversity. The training cases could be packed into a Dataset object and sampled with RandomSampler, so we could still have meaningful progress bars. This is similar to Google's original implementation, where the data is pregenerated and stored in example files. 2) The random sampling could occur on the fly at train time. This would avoid the time and storage needed for pregenerating training cases, but would become harder to measure the 'length' of an epoch in advance. I think either of these could be different enough that it might be better to implement it as a separate script (though it would still use a lot of the helper functions from the existing script). What do you think?<|||||>Hi @Rocketknight1, yes both solutions make sense and it seems better to have independent scripts indeed. Maybe you can start by drafting an independent script focusing on the pregenerated case and then see if the current `run_lm_finetuning` script can be updated for the on-the-fly case?<|||||>I think starting with pregenerated makes sense. However, maybe instead of replacing the old script, maybe we can keep that one as is? The original authors mentioned that concatenating sentences into training examples didn't make sense for their use-case. Possibly their dataset was something like chatbot conversations instead of long documents? Either way, they (and probably others) have use for the simple "one sentence per example" training system so I don't want to delete it entirely!<|||||>Sounds good to me!<|||||>@thomwolf I created PR #392 which includes the new functionality.
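For illustration, a rough sketch of the concatenation idea under discussion: greedily pack consecutive sentences into chunks of at most `max_tokens` word pieces. It ignores the [CLS]/[SEP] overhead and the random split point for the next-sentence task, which the real pregeneration script has to handle.

```python
def pack_sentences(sentences, tokenizer, max_tokens=512):
    """Greedily group consecutive sentences into chunks of at most max_tokens word pieces."""
    chunks, current, current_len = [], [], 0
    for sentence in sentences:
        n_tokens = len(tokenizer.tokenize(sentence))
        if current and current_len + n_tokens > max_tokens:
            chunks.append(current)
            current, current_len = [], 0
        current.append(sentence)
        current_len += n_tokens
    if current:
        chunks.append(current)
    return chunks
```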
transformers
375
closed
How to input the fine-tuned model?
I run the finetuning as instructed in the example "LM Fine-tuning":

python run_lm_finetuning.py \
  --bert_model bert-base-uncased \
  --output_dir models \
  ...

As a result the fine-tuned model is now in models/pytorch_model.bin. But how do I use it to classify? The example doesn't mention that, and I don't find any parameter to feed the fine-tuned model in. I can run classification with only the pretrained model like this:

export GLUE_DIR=~/git/GLUE/glue_data/
python run_classifier.py \
  --task_name SST-2 \
  --do_train \
  --do_eval \
  --do_lower_case \
  --data_dir ~/git/x/data/input-sst2/ \
  --bert_model bert-base-uncased \
  --max_seq_length 128 \
  --train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --output_dir out/

The instructions on google-bert say "Once you have trained your classifier you can use it in inference mode by using the --do_predict=true command." If I try that, it gives: "run_classifier.py: error: unrecognized arguments: --do_predict=true"
03-13-2019 15:16:41
03-13-2019 15:16:41
Maybe you can take a look at the "cache_dir" argument. For the run_classifier.py file, it is located at line 498.<|||||>I tried with --cache_dir, giving the fine-tuning's output directory as cache_dir. I added these 2 files to the directory: bert_config.json and vocab.txt from the original bert-base-uncased (the finetune output folder has the fine-tuned model pytorch_model.bin file, which I am not sure is used at all). It gave exactly the same accuracy (to 16 digits) as running train/eval with run_classifier.py directly with bert-base-uncased. It would seem that with --cache_dir, it saves the original bert-base-uncased to that given --cache_dir. I am not sure if there is another difference. (I am running a 3-label classifier, for which I used SST-2 from GLUE as a basis, saved the data in the same format, and added a 3rd label to the code in run_classifier.py.)

python run_classifier.py \
  --task_name SST-2 \
  --do_train \
  --do_eval \
  --do_lower_case \
  --data_dir ~/git/x/data/input-sst2/ \
  --bert_model bert-base-uncased \
  --cache_dir out_finetune_140/ \
  --max_seq_length 140 \
  --train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --output_dir out_testcache/

<|||||>I can get the classifier running on the fine-tuned model by replacing bert-base-uncased with the output folder of the fine-tuning: --bert_model out_finetune_140/ (and adding bert_config.json and vocab.txt to that folder). But as a result, eval_accuracy went down from 0.918 to 0.916. (I wonder, is it correct to use vocab.txt and bert_config.json from the original bert-base-uncased, or would the fine-tuned model need updated ones?)

step1:
python run_lm_finetuning.py \
  --bert_model bert-base-uncased \
  --do_lower_case \
  --do_train \
  --train_file ~/git/xdata/lm-file.txt \
  --output_dir out_finetune_140/ \
  --num_train_epochs 2.0 \
  --learning_rate 3e-5 \
  --train_batch_size 16 \
  --max_seq_length 140

step2:
python run_classifier.py \
  --task_name SST-2 \
  --do_train \
  --do_eval \
  --do_lower_case \
  --data_dir ~/git/x/data/input-sst2/ \
  --bert_model out_finetune_140/ \
  --max_seq_length 140 \
  --train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --output_dir out_finetune+class_140/

<|||||>Hi, I think I found a workaround for this issue. You can first load the original model, and then insert this line into your python file (for example, after lines 607 and 610 in run_classifier.py): model.load_state_dict(torch.load("output_dir/pytorch_model.bin")). Then the model will be your customized fine-tuned model, and there is no need to change anything else (for example, the config file or vocab file).<|||||>I would suggest that you add separate logic to load your fine-tuned model and perform prediction. Your code will be very similar to the eval, but you won't need (actually won't have access to) labels during prediction and hence no need for the accuracy code in eval etc. Simply collect your predictions in a list and write them to a file called "pred_results.txt". I added some new flags ("do_pred" and "model_path"), modified the eval logic a little bit to ignore labels, and wrote the outputs to a file. Things are working for me. <|||||>Hi LeenaShekhar, would you mind showing the code you wrote to perform predictions with a trained model? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
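Condensed from the workaround described above (instantiate the classifier, then overwrite its weights with the fine-tuned checkpoint); the paths and `num_labels=3` mirror the 3-label setup in this thread and are only illustrative. The checkpoint must have been produced by the same architecture for `load_state_dict` to succeed.

```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

# Build the classifier with the right label count, then load the fine-tuned weights.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
state_dict = torch.load("out_finetune_140/pytorch_model.bin", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```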
transformers
374
closed
handle ImportError exception when used from projects outside
A relative import that starts with `.` does not work when a file is used from an outside project. I added safe code to handle the ImportError exception in this case, so I can use the source file without having to make local changes to it.
03-13-2019 12:56:00
03-13-2019 12:56:00
Just doing `from pytorch_pretrained_bert import BertTokenizer, BertModel` in your project doesn't work? That's what they do in [AllenNLP](https://github.com/allenai/allennlp/blob/3f0953d19de3676ea82e642659fc96d90690e34d/allennlp/modules/token_embedders/bert_token_embedder.py#L14) or [flair](https://github.com/zalandoresearch/flair/blob/797c958d0e8c256531f2cea37508e7becb2026cb/flair/embeddings.py#L14).<|||||>It does work; however, I don't need the higher-level classes for my model, so I am subclassing OpenAIGPTPreTrainedModel and Block, and they are not included in the __init__.py. Including them in the __init__.py also solves the ImportError: from .modeling_openai import (OpenAIGPTConfig, OpenAIGPTModel, OpenAIGPTLMHeadModel, OpenAIGPTDoubleHeadsModel, OpenAIGPTPreTrainedModel, Block, load_tf_weights_in_openai_gpt) <|||||>I see. I think we can fix this the same way I did in the `bert` case, by adding `from __future__ import absolute_import`. If you don't mind I'll do a quick PR on that.
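The kind of guard this PR describes, sketched in isolation (the imported names mirror the classes mentioned in the thread; whether the flat fallback resolves depends on how the file is placed in the outside project):

```python
try:
    # Works when the module is imported as part of the package.
    from .modeling_openai import OpenAIGPTPreTrainedModel, Block
except ImportError:
    # Fallback when the file is used from an outside project without the package context.
    from modeling_openai import OpenAIGPTPreTrainedModel, Block
```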
transformers
373
closed
performance degraded when using paddings between queries and contexts.
I just want to ask this here and see whether other people have encountered the same situation. I am making modifications to the run_squad.py example. For the original training features, the input ids are [cls]qqqqq[sep]cccccc000000. The attention mask is just something like 111111100000, where the first k inputs are masked with 1 and the rest with 0. I tried to generate input ids that look like [cls]qqqq[sep]0000[sep]cccccccc00000, so that I can have fixed lengths for the query and the context with proper padding. Also, I changed the attention mask accordingly, namely to something like 11110001111100000. However, when I trained the model on these new features, the score degraded from 76 to 44 for the BertForQuestionAnswering model. I am wondering if this kind of masking has any catastrophic effects? Has anyone experienced a similar situation?
03-13-2019 08:14:02
03-13-2019 08:14:02
Similar problem. My token ids are something like "[cls]qqqq[sep]0000cccccccc[sep]00000". Have you solved it? Has anyone else met a similar problem?<|||||>> Similar problem. My token ids are something like "[cls]qqqq[sep]0000cccccccc[sep]00000". Have you solved it? Has anyone else met a similar problem? I feel like this is caused by the way BERT is pretrained. BERT is pretrained on contiguous texts. Padding zeros in between breaks that continuity, so essentially we would need to re-train the whole model. Since I was trying to separate query and context, the way I tackled this is just masking all query tokens to create the context representation, and the same for the query. <|||||>> > Similar problem. My token ids are something like "[cls]qqqq[sep]0000cccccccc[sep]00000". Have you solved it? Has anyone else met a similar problem? > > I feel like this is caused by the way BERT is pretrained. BERT is pretrained on contiguous texts. Padding zeros in between breaks that continuity, so essentially we would need to re-train the whole model. > > Since I was trying to separate query and context, the way I tackled this is just masking all query tokens to create the context representation, and the same for the query. It should be equivalent to inputting the query and context separately if we mask the query or the context part. Anyway, I will try it. Thanks.
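A sketch of the masking workaround described above, assuming run_squad.py-style segment ids (0 for query tokens, 1 for context tokens); it only builds the two attention masks and does not retrain anything.

```python
import torch

# Toy example: segment ids mark query (0) vs. context (1); input_mask marks real tokens.
segment_ids = torch.tensor([[0, 0, 0, 1, 1, 1, 1, 0]])
input_mask = torch.tensor([[1, 1, 1, 1, 1, 1, 1, 0]])

# Attention mask that keeps only context tokens, and one that keeps only query tokens.
context_only_mask = input_mask * (segment_ids == 1).long()
query_only_mask = input_mask * (segment_ids == 0).long()

print(context_only_mask)  # tensor([[0, 0, 0, 1, 1, 1, 1, 0]])
print(query_only_mask)    # tensor([[1, 1, 1, 0, 0, 0, 0, 0]])
```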
transformers
372
closed
a single sentence classification task, should the max length of sentence limited to half of 512, that is to say 256
Hi, if I have a single-sentence classification task, should the max length of the sentence be limited to half of 512, that is to say 256?
03-13-2019 01:47:10
03-13-2019 01:47:10
Why should it be limited to half of 512?<|||||>> Why should it be limited to half of 512? Because during training we have sentence embeddings 0 and 1, but in a single-sentence classification task we only use embedding 0 -- does this have a bad influence?<|||||>You can just set the whole sequence to sentence 0. Create a DataProcessor class for your task and set the whole input sequence to `text_a`, for example:
```
class MyProcessor(DataProcessor):
    # some other methods here

    def _create_examples(self, lines, set_type):
        """Creates examples for the training and dev sets."""
        examples = []
        for (i, line) in enumerate(lines):
            guid = "%s-%s" % (set_type, i)
            text_a = line[1]
            label = line[0]
            examples.append(
                InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
        return examples
```
Notice `text_b=None`.<|||||>What should I do if I have not only a sentence but a whole text? I don't clearly understand how to extend `BertForSequenceClassification` with my own dataset for training/evaluating. I have a dataset consisting of text/label pairs, where the text can have multiple sentences. <|||||>Just send in the whole text as one "sentence"; the limit on the sequence length that can be sent at once to BERT is 512 tokens.<|||||>Ok, thanks. One more question related to classification. BERT tokenizes my sentences pretty strangely: > 04/27/2019 16:08:32 - INFO - __main__ - tokens: [CLS] @ bra ##yy ##yy ##ant Так акт ##иви ##ровала ##сь новая карта , ст ##ара ##я и была не ##ак ##тив ##на . [SEP] Why are more than half of the words separated with `#`? I mean, these words are in Russian and many of them are split into several parts with `#`, though each is one word. Should this be fixed during training?<|||||>That's the WordPiece tokenization; it's a way to match subwords when an out-of-vocabulary word is encountered. It's explained in the BERT paper with references. It's as it should be. <|||||>Ok, thank you so much.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
371
closed
Simplify code, delete redundancy line
Delete redundant line 597, `if args.train`, which has the same function as line 547, in order to simplify the code.
03-13-2019 01:45:46
03-13-2019 01:45:46
wouldn't this cause some kind of indentation error? (I don't have time to test the change sorry)
transformers
370
closed
What is Synthetic Self-Training?
The current best performing model on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) is BERT + N-Gram Masking + Synthetic Self-Training (ensemble): ![image](https://user-images.githubusercontent.com/2398765/54234467-24466380-454a-11e9-8674-d9e7004da027.png) What is Synthetic Self-Training?
03-12-2019 20:40:50
03-12-2019 20:40:50
Check Jacob Devlin slides starting from slide 26 [here](https://nlp.stanford.edu/seminar/details/jdevlin.pdf?fbclid=IwAR2TBFCJOeZ9cGhxB-z5cJJ17vHN4W25oWsjI8NqJoTEmlYIYEKG7oh4tlY)<|||||>@thomwolf thanks, the slides were helpful. Do you know if there is a recording of the talk publicly available somewhere?<|||||>I don't know!<|||||>It was a very crowded room and these talks are generally not recorded, sorry…<|||||>Is the synthetic self training module in this code?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Same question, will the synthetic self training module be in this code?<|||||>Presently not in the library and there are no short-term plans to add synthetic self-training to the library.
transformers
369
closed
BertForQuestionAnswering: How to split output between query hidden state and context hidden state
I've made several attempts, but all seem to fail. Do you have a good way to do this? Right now, passing what I thought to be just the context hidden state to the final output layer in run_squad.py drops my scores (F1) by 10 points.
03-12-2019 18:46:25
03-12-2019 18:46:25
If you want only the context, you can find the index from the segment vector by finding the first 1 in the vector and splitting the query_context on that index. Then the context will be everything from that index onward, and everything before it will be the question.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
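For illustration, a sketch of that suggestion with assumed tensor names: locate the first 1 in each segment-id vector and slice the BERT sequence output there (padding handling is omitted).

```python
import torch

def split_question_context(sequence_output, segment_ids):
    """sequence_output: (batch, seq_len, hidden); segment_ids: 0 = question, 1 = context."""
    pairs = []
    for states, segs in zip(sequence_output, segment_ids):
        first_context = int(torch.nonzero(segs == 1)[0])  # index of the first context token
        pairs.append((states[:first_context], states[first_context:]))
    return pairs

# Toy usage
hidden = torch.randn(2, 8, 16)
segs = torch.tensor([[0, 0, 0, 1, 1, 1, 1, 1],
                     [0, 0, 1, 1, 1, 1, 1, 1]])
question_states, context_states = split_question_context(hidden, segs)[0]
```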
transformers
368
closed
When i fine tune the BERT on my serve, it always says Segmentation fault?
I have made some modifications to BertForSequenceClassification to apply it to a multilabel prediction task, but when I run my code on my server, it always reports a segmentation fault when it reaches "loss = model(input_ids, segment_ids, input_mask, label_ids)", and even if I don't use GPU or fp16, the same fault occurs. However, if I run the code on my PC, it works well, just very slowly. What's wrong?
03-12-2019 09:24:10
03-12-2019 09:24:10
I had the same issue. I'm getting a seg fault on an AWS deep learning AMI with a Tesla V100 GPU instance. I get the error with or without fp16.<|||||>I also have the same problem. Did you guys figure out any solution? I am able to load the data, however at the first epoch (epoch 0) I see the segmentation fault error. <|||||>Can you try running the model: - with a very small batch size to check if it's an OOM error - with `CUDA_LAUNCH_BLOCKING=1` to see the exact line causing the error?<|||||>Setting the env variable CUDA_LAUNCH_BLOCKING=1 doesn't give me any additional error messages. I still see the same error. Tried it with batch size 1 and I get the same issue.<|||||>I got the code to run on my school server. I updated gcc using `conda install -c psi4 gcc-5`. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I am also experiencing this issue. Anyone else figure it out?
transformers
367
closed
how does the run_squad.py deal with non-answerable questions
Hi, I want to do a similar reading comprehension task with non-answerable questions, but I couldn't figure out from the code how you deal with them. Did you add an additional token for this? Or do you only output no-answer when the start logit = end logit = -1? Thanks!
03-12-2019 07:38:40
03-12-2019 07:38:40
@Liangtaiwan and @abeljim were the contributors of the `run_squad.py` example. Maybe they can help.<|||||>Hi @elephantomkk, non-answerable questions are handled by using the [CLS] token as the ground truth. As a result, the start logit = end logit = -1. You can find it in this repo's code or in the official BERT code. The method is also mentioned in Jacob Devlin's slides. https://nlp.stanford.edu/seminar/details/jdevlin.pdf?fbclid=IwAR2TBFCJOeZ9cGhxB-z5cJJ17vHN4W25oWsjI8NqJoTEmlYIYEKG7oh4tlY<|||||>Thanks for the reply! I tested that and found the performance on the non-answerable questions is not so good compared with the answerable ones :(
transformers
366
closed
Vocabulary file not available for SQuAD predictions
There appears to be a bug in the way the vocabulary file is handled. For example, if we execute [`run_squad.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py) with `--do_train`, and set the `--output_dir` to `/tmp/debug_squad/`, we successfully build a model and the resulting model files (`bert_config.json` and `pytorch_model.bin`) are saved in the appropriate directory. Then we execute [`run_squad.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py) with `--do_predict`, and this time set `--bert_model` to `/tmp/debug_squad`, which according to [`modelling.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py) is all that is required (see the [`from_pretrained`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L532) method). However, this raises a bug as the tokenizer cannot load a vocabulary file. If we inspect [`tokenizer.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization.py#L138) we can see what happens: if you are using a pre-trained BERT model, it will look for the vocab file at a specific URL. If you input a directory as your `bert_model`, it assumes you have a file `vocab.txt` (`VOCAB_NAME`) in that same directory. It also appears to check the cache which may or may not be present. We were able to fix this by simply downloading the appropriate vocab file for our base BERT model (e.g., `'bert-base-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt"`), renaming it to `vocab.txt`, and placing it in `/tmp/debug_squad`, however it feels as though this should be better handled by the train/predict pipeline.
03-11-2019 21:21:06
03-11-2019 21:21:06
Indeed, this example could be improved. I would be happy to welcome a PR on that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
365
closed
BERT accuracy reduced after providing custom training. The answer is also not correct
I have trained Google BERT with custom training data. I included the exact question and answer, along with the context from the input document, in the training file and trained BERT. With the newly generated checkpoints (ckpt) I am still getting the same wrong answer as obtained before training. However, the probability returned in nbest_predictions.json is reduced this time.
03-11-2019 13:36:58
03-11-2019 13:36:58
Can you give a simple self-contained script to reproduce your issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
364
closed
Potential redundancy in run_classifier.py example script
https://github.com/huggingface/pytorch-pretrained-BERT/blob/7cc35c31040d8bdfcadc274c087d6a73c2036210/examples/run_classifier.py#L641-L642 Here we are calling the model twice. I understand that the model returns different things depending on the presence of `label_ids`, but this could actually be quite expensive. I think we can change the `if label is not None` branch in the model code below to return two items instead, but I'm not sure if it will break things elsewhere. https://github.com/huggingface/pytorch-pretrained-BERT/blob/7cc35c31040d8bdfcadc274c087d6a73c2036210/pytorch_pretrained_bert/modeling.py#L969-L979
03-11-2019 04:44:38
03-11-2019 04:44:38
I agree with you, this part of the API could be improved. The BERT model is now used in several third-party libraries like AllenNLP and FLAIR, so we have to be careful not to make any breaking change to this model. We could maybe add a flag to get the full output.<|||||>Mind if I submit a PR adding a flag that allows getting the full output in the case of `if labels is not None`?<|||||>I'm OK to welcome a PR on that, but there is one issue with flags in the `forward()` call and multi-GPU that you may or may not be aware of and that we need to think about: all the inputs to the `forward()` call are split across GPUs, so having a non-batched input like a flag breaks DataParallel for multi-GPU. So maybe we need to add a general flag on the models which can be set. Maybe you can try to draft a PR and we'll check that it behaves well on the examples. Maybe you can also have a deep look at the way inputs are split in DataParallel and check whether a solution with a flag in the arguments of the `forward()` call works.<|||||>Hmm, okay, I was not aware of that. I think adding it as a property on the model itself is arguably an unnecessary over-complication for a simple matter. Returning a tuple value is much cleaner, and the user can always discard the information that they don't care about. Although I do understand the concern about backward compatibility. So I won't do this PR and will let you guys make the decision between the two options. Personally I lean more towards the API-breaking option.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
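For illustration, the "return both" variant sketched as a thin wrapper instead of a change to the library class, so existing callers keep working; the wrapper name is made up for this sketch.

```python
import torch.nn as nn
from torch.nn import CrossEntropyLoss

class ClassifierWithFullOutput(nn.Module):
    """Wraps a BertForSequenceClassification so one forward pass yields loss and logits."""

    def __init__(self, bert_classifier, num_labels):
        super().__init__()
        self.bert_classifier = bert_classifier
        self.num_labels = num_labels

    def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None):
        # Call the wrapped model without labels so it returns the logits.
        logits = self.bert_classifier(input_ids, token_type_ids, attention_mask)
        if labels is None:
            return logits
        loss = CrossEntropyLoss()(logits.view(-1, self.num_labels), labels.view(-1))
        return loss, logits
```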
transformers
363
closed
Separator token for custom QA input (multi paragraph, longer than 512)
Hello! I'm trying to extract features for a QA task where the document is composed of multiple disparate paragraphs. So my input is: question ||| document where document is {para1 SEP para2 SEP para3 SEP}, so overall, it's something like: question ||| para1 SEP para2 SEP para3 SEP My question is: Is it okay to use the default BERT [SEP] token for the paragraph separation token as above? Or should I use something like the NULL token instead, or simply remove the paragraph separation token completely? Secondly, my input is longer than 512, so I'm thinking of doing sliding windows like: question ||| doc[:512] question ||| doc[256:768] and so on, finally merging the overlaps by averaging. Would this be correct? Thanks!
03-10-2019 02:56:29
03-10-2019 02:56:29
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
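For illustration, the sliding-window chunking described in the question above, as a small helper; the 512/256 figures come from the post, and the overlap-averaging of the resulting features is left out.

```python
def sliding_windows(token_ids, window=512, stride=256):
    """Yield (start_offset, chunk) pairs covering token_ids with overlapping windows."""
    start = 0
    while start < len(token_ids):
        yield start, token_ids[start:start + window]
        if start + window >= len(token_ids):
            break
        start += stride

# Toy usage: a 700-token document yields windows starting at offsets 0 and 256.
chunks = list(sliding_windows(list(range(700))))
```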
transformers
362
closed
Make the hyperlink of NVIDIA Apex clickable
In the case of the ImportError in modeling.py [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/7cc35c31040d8bdfcadc274c087d6a73c2036210/pytorch_pretrained_bert/modeling.py#L219), make the hyperlink to NVIDIA Apex redirect properly by spacing the '.'
03-09-2019 14:39:38
03-09-2019 14:39:38
Thanks!
transformers
361
closed
Correct line number in README for classes
Correct the linked line number in README for classes
03-09-2019 00:28:56
03-09-2019 00:28:56
Thanks @junjieqian!
transformers
360
closed
Ranking predictions with BertForQuestionAnswering
I am using `BertForQuestionAnswering` and I am trying to make predictions for the same question asked on different paragraphs. It outputs an `OrderedDict` of tuples with format `(paragraphID, answer)`. How can I rank those predictions to get the most probable answer across all paragraphs? Thanks for the great repo!
03-08-2019 17:26:03
03-08-2019 17:26:03
I am also interested in this, it looks like we would have to append the prediction probability to the `all_predictions` JSON output.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
359
closed
Update run_gpt2.py
03-08-2019 16:59:32
03-08-2019 16:59:32
Thanks Elon
transformers
358
closed
add 'padding_idx=0' for BertEmbeddings
03-07-2019 12:03:08
03-07-2019 12:03:08
Thanks @cdjhz
transformers
357
closed
Use Dropout Layer in OpenAIGPTMultipleChoiceHead
closes #354
03-07-2019 09:15:06
03-07-2019 09:15:06
Seems good to me, thanks for that. Let me just check why we don't have Circle-CI tests on the PR anymore and I'll merge it.
transformers
356
closed
How to add input mask to GPT?
I use `attention_mask` when I do `bert.forward(input, attention_mask)`. But in GPT, when I try to pass a batch of inputs to `OpenAIGPTModel` to extract a batch of features and the lengths of the sentences in a batch are different, I have no idea how to do it. Or maybe it doesn't need the mask to be given? If so, is zero the padding index? For a quick review, this is the code I use for BERT to extract embeddings.
```python
all_encoder_layers, pooled_output = self.bert(inputs[:, :seq_max_len],
                                              token_type_ids=None,
                                              attention_mask=att_mask.to(device))
embeds = torch.cat(all_encoder_layers[-self.bert_n_layers:], -1)
```
03-06-2019 21:41:43
03-06-2019 21:41:43
GPT is a causal model, so each token only attends to the left context and masking is not really needed. Just mask the output according to your lengths (and make sure that each input sample starts at the very first left token).
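For illustration, what "mask the output according to your lengths" can look like downstream of `OpenAIGPTModel` (toy tensors stand in for the model's hidden states; the pooling choice is an assumption):

```python
import torch

# Stand-in for OpenAIGPTModel output: (batch, seq_len, hidden), right-padded sequences.
hidden = torch.randn(2, 5, 8)
lengths = torch.tensor([3, 5])          # true lengths of the two sequences

# (batch, seq_len) mask: 1 for real tokens, 0 for padding positions.
mask = (torch.arange(hidden.size(1))[None, :] < lengths[:, None]).float()

hidden = hidden * mask.unsqueeze(-1)    # zero out the padded positions
pooled = hidden.sum(dim=1) / lengths[:, None].float()  # mean over real tokens only
```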
transformers
355
closed
[Question] Best choice for Sentence Compression model?
I'm trying to develop a model that will do "word-level extractive summarization", i.e. it will delete unimportant words or tokens and summarize a document. This is also known as "sentence compression" in the NLP community. I'm thinking of using the BertForTokenClassification module. Will it work with a large dataset, or must it all fit in my VRAM at once? In my case, I also have access to a human-made abstractive summary of each document. I was wondering if I could do contextual sentence compression by having a model do sequence-to-sequence conversion between the abstractive summary and the original document, and extract all words with the highest attention scores. Does anyone know if this is a good idea or not?
03-06-2019 19:54:37
03-06-2019 19:54:37
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
354
closed
Dropout Layer in OpenAIGPTMultipleChoiceHead not used
[OpenAIGPTMultipleChoiceHead](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling_openai.py#L363) defines an additional dropout layer, which is not used in `forward`.
03-06-2019 15:56:16
03-06-2019 15:56:16
transformers
353
closed
can't load the model
>>> model = BertModel.from_pretrained('bert-large-cased')
Model name 'bert-large-cased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased.tar.gz' was a path or url but couldn't find any file associated to this path or url.
03-06-2019 14:24:23
03-06-2019 14:24:23
Strange error. Can you try:
```python
import pytorch_pretrained_bert as ppb
assert 'bert-large-cased' in ppb.modeling.PRETRAINED_MODEL_ARCHIVE_MAP
```
Do you have an open internet connection on the server that runs the script?<|||||>@thomwolf Is there a way to point to a model on disk? This question seems related enough to daisy-chain with this issue. :-)<|||||>I noticed that this error happens when you exceed the disk space in the temporary directory while downloading BERT.<|||||>I ran into the same problem. When I used the Chinese pre-training model, it was sometimes good and sometimes bad.<|||||>@thomwolf I've been having the same error, and I received an AssertionError when I try assert 'bert-based-uncased' in bert.modeling.PRETRAINED_MODEL_ARCHIVE_MAP. I've tried using both conda install and pip install to get the package, but in both cases I am not able to load any models.<|||||>Hi @DuncanCam-Stein, which version of Python do you have? Can you try to install from source?<|||||>@thomwolf @countback I finally fixed the problem by downloading the TF checkpoints directly from [here](https://github.com/google-research/bert), and then using the '[convert_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py)' function to create a `pytorch_model.bin` file. I then used the path to the pytorch_model.bin and bert_config.json file in BertModel.from_pretrained('path/to/bin/and/json') instead of 'bert-base-uncased'. 👍 Helpful info was found [here](https://devhub.io/repos/huggingface-pytorch-pretrained-BERT).<|||||>The network connection check has been relaxed in the now merged #500. Serialization of the model has also been simplified a lot with #489. These improvements will be included in the next PyPI release (probably next week). In the meantime you can install from `master` and already use the serialization best practices described in the README [here](https://github.com/huggingface/pytorch-pretrained-BERT#serialization-best-practices).<|||||>As @martiansideofthemoon said, I met this error because I didn't have enough space on disk. Check if you can download the file with: `wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz`<|||||>@martiansideofthemoon What does it mean if we can download it via wget but not when we use from_pretrained? Is it a disk space problem? <|||||>@Hannabrahman If you can download it via wget, it means you have enough disk space, so the issue is from somewhere else.<|||||>@Colanim Thanks. I figured out it was a disk space issue in the cache directory. <|||||>@Hannabrahman > @Colanim Thanks. I figured out it was a disk space issue in the cache directory. How did you solve this issue?<|||||>@raj5287 Free some disk space on the cache directory or specify another cache directory. <|||||>@Colanim I have enough disk space since I downloaded the file using `wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz`, but I am not sure how to specify another cache directory or use the downloaded file (I am new to PyTorch and Ubuntu :| ).<|||||>> @thomwolf @countback > I finally fixed the problem by downloading the TF checkpoints directly from [here](https://github.com/google-research/bert), and then using the '[convert_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py)' function to create a `pytorch_model.bin` file. 
> I then used the path to the pytorch_model.bin and bert_config.json file in BertModel.from_pretrained('path/to/bin/and/json') instead of 'bert-base-uncased'.
> +1
> Helpful info was found [here](https://devhub.io/repos/huggingface-pytorch-pretrained-BERT).

@DuncanCam-Stein I have downloaded and placed _pytorch_model.bin_ and _bert_config.json_ in the _bert_tagger_ folder, but when I do `tokenizer = BertModel.from_pretrained('home/user/Download/bert_pos_tagger/bert_tagger/')` I am still getting the error: `Model name 'home/user/Downloads/bert_pos_tagger/bert_tagger/' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'home/user/Downloads/bert_pos_tagger/bert_tagger/' was a path or url but couldn't find any file associated to this path or url.`<|||||>Try to delete the cache file and rerun the command.<|||||>I noticed that the error appears when I execute my script in debug mode (in Visual Studio Code). I fixed it by executing the script from the terminal with `python myscriptname.py` once. Afterwards debug mode works fine. Btw, I got the same problem with the tokenizer and this also fixed it.<|||||>> > > model = BertModel.from_pretrained('bert-large-cased') > > > > Model name 'bert-large-cased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased.tar.gz' was a path or url but couldn't find any file associated to this path or url.

Hello, I meet this problem when running the torch BERT code: OSError: Can't load weights for 'bert-base-uncased'. Make sure that: - 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'bert-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. If I can download the bert-base-uncased weights, where should I put the file? Hoping for your reply~<|||||>@DTW1004 Check your network connection. This happens to me when I'm behind a proxy and SSL/proxy isn't configured appropriately.<|||||>Bro, I've been having the same error. Then I tried to debug the specific code BertTokenizer.from_pretrained(MODEL_NAME), stepping into the original code, and I found that I could step through every line of the transformer in debug mode. When I stepped out of the original code, the tokenizer tool could be used. What's more, the code ran normally the next time I ran it. <|||||>I met the issue and I found the reason is that my server connection was offline.<|||||>Running into the same issue on AWS Lambda. Neither relative nor absolute paths will allow the model to load from pre-trained. <|||||>Here's what I am doing:
```shell
!wget -q https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased.tar.gz
!tar xf bert-base-multilingual-cased.tar.gz
```
Now, if I do:
```python
encoder = TFBertModel.from_pretrained("bert-base-multilingual-cased")
```
I still get:
```shell
OSError: Can't load config for 'bert-base-multilingual-cased'.
Make sure that:
- 'bert-base-multilingual-cased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-multilingual-cased' is the correct path to a directory containing a config.json file
```<|||||>Here's what I am doing:
```python
from transformers import pipeline

def corret_sentence(sentence, unmasker):
    res = unmasker(sentence)
    return res

if __name__ == '__main__':
    sentence = "关小彀"
    new_sentence = ""
    unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-2_H-512')
    for idx, ch in enumerate(sentence):
        new_sentence = sentence[:idx] + "[MASK]" + sentence[idx+1:]
        print(corret_sentence(new_sentence, unmasker))
```
I get: ValueError: Could not load model uer/chinese_roberta_L-2_H-512 with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForMaskedLM'>, <class 'transformers.models.bert.modeling_bert.BertForMaskedLM'>).<|||||>> Here's what I am doing:
> 
> ```python
> from transformers import pipeline
> 
> def corret_sentence(sentence, unmasker):
>     res = unmasker(sentence)
>     return res
> 
> if __name__ == '__main__':
>     sentence = "关小彀"
>     new_sentence = ""
>     unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-2_H-512')
>     for idx, ch in enumerate(sentence):
>         new_sentence = sentence[:idx] + "[MASK]" + sentence[idx+1:]
>         print(corret_sentence(new_sentence, unmasker))
> ```
> 
> I get: ValueError: Could not load model uer/chinese_roberta_L-2_H-512 with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForMaskedLM'>, <class 'transformers.models.bert.modeling_bert.BertForMaskedLM'>).

How can I solve this? Did you solve this problem? I am also having the same one.<|||||>> Free some disk space

How can I free some disk space? Which shell command should I use?<|||||>> @thomwolf @countback I finally fixed the problem by downloading the TF checkpoints directly from [here](https://github.com/google-research/bert), and then using the '[convert_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py)' function to create a `pytorch_model.bin` file.

Can you please specify which model exactly you downloaded and how you ran the function? Thanks
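For reference, a minimal sketch of the local-directory loading that several of the workarounds above boil down to; the path is a placeholder, and the directory is assumed to contain the converted pytorch_model.bin, bert_config.json and vocab.txt.

```python
from pytorch_pretrained_bert import BertModel, BertTokenizer

LOCAL_DIR = "/path/to/converted_bert/"   # placeholder for your own directory

model = BertModel.from_pretrained(LOCAL_DIR)          # reads pytorch_model.bin + bert_config.json
tokenizer = BertTokenizer.from_pretrained(LOCAL_DIR)  # reads vocab.txt from the same directory
```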
transformers
352
closed
How to incrementally fine-tune a trained model
I am using BERT for question answering. After fine-tuning on the SQuAD dataset, I want to further train on new questions from my own domain. Please suggest how I can take the newly generated pytorch_model.bin file and continue training it on my own data to get my own pytorch_squad_plus_my_model.bin?
03-06-2019 12:11:20
03-06-2019 12:11:20
I think my answer here can help you: https://github.com/huggingface/pytorch-pretrained-BERT/issues/332<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hey @shuvadibp, did you figure out a way of doing it? I would like to talk to you about the same..
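For reference, a minimal sketch of the incremental setup discussed in #332 and above; the paths are hypothetical, and it assumes a pytorch-pretrained-bert release whose `from_pretrained` accepts a directory containing `pytorch_model.bin`, `bert_config.json` and `vocab.txt`:
```
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForQuestionAnswering

checkpoint_dir = "/path/to/squad_finetuned_model"   # output of the first SQuAD run

tokenizer = BertTokenizer.from_pretrained(checkpoint_dir)
model = BertForQuestionAnswering.from_pretrained(checkpoint_dir)
model.train()

# ... run the usual run_squad.py training loop on the new domain data, then save:
torch.save(model.state_dict(), "/path/to/new_output/pytorch_model.bin")
```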
transformers
351
closed
Little training has no impact
I entered a small amount of training data in trainxx.json (a few questions and answers) and ran the training, and a new pytorch_model.bin file was generated ( = uncased + SQuAD training + my few questions). However, when the same question was put in devxx.json, the answer was not the one given in training. Why does the training have no positive impact?
03-06-2019 10:09:43
03-06-2019 10:09:43
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
350
closed
Bert Uncased Large giving very low results with SQUAD v1.1 dataset
**Configuration:** - do_lower_case=True - max_answer_length=30 - max_answer_length=30 - n_best_size=20 - verbose_logging=False - bert_model="bert-large-uncased" - max_seq_length=384 - doc_stride=128 - max_query_length=192 - local_rank=-1 - train_batch_size=12 - predict_batch_size=12 - num_train_epochs=2.0 - gradient_accumulation_steps=1 - fp16=True - warmup_proportion=0.1 - learning_rate=3e-5 **Results:** 1. Getting very bad results with the given configuration. 2. Kindly let me know if there is an issue with the configuration. 3. Getting many repeated terms for different questions under the same context [dev1.1_squad_best_results.txt](https://github.com/huggingface/pytorch-pretrained-BERT/files/2935422/dev1.1_squad_best_results.txt) Sample Dev1.1 Squad(First 9 Questions from development dataset): "56be4db0acb8001400a502ec": "Levi's Stadium in the San Francisco Bay Area at Santa Clara,", "56be4db0acb8001400a502ed": "Levi's Stadium in the San Francisco Bay Area at Santa Clara,", "56be4db0acb8001400a502ee": "Levi's", "56be4db0acb8001400a502ef": "Levi's", "56be4db0acb8001400a502f0": "Levi's Stadium in the San Francisco Bay Area at Santa Clara,", "56be8e613aeaaa14008c90d1": "7", "56be8e613aeaaa14008c90d2": "Levi's", "56be8e613aeaaa14008c90d3": "7, 2016, at Levi's", "56bea9923aeaaa14008c91b9": "7", Looking forward for the help. Thanks
03-06-2019 09:30:23
03-06-2019 09:30:23
transformers
349
closed
Unable to train (fine-tuning) BERT with small training set
I am trying to train BERT with 1 context and 1 answer in the train.json, and I am getting the error below. _lr_this_step = args.learning_rate * warmup_linear(global_step/t_total, args.warmup_proportion) ZeroDivisionError: division by zero_ After training with 1 context and 5 answers, the error is avoided, but I do not see any change in the answers obtained from BERT. Please help with this and let me know if anyone has tried this kind of fine-tuning.
03-06-2019 02:44:27
03-06-2019 02:44:27
Probably an issue with `t_total` and the number of training optimization steps similarly to #329. Could you check the number of total training step sent to the optimizer? Which example script are you using?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
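To make the `t_total` remark concrete, here is an illustration with made-up numbers of how the step count computed in the example scripts collapses to zero when the training set is smaller than one batch:
```
len_train_examples = 1            # e.g. a single context/answer pair
train_batch_size = 12
gradient_accumulation_steps = 1
num_train_epochs = 2

t_total = int(len_train_examples / train_batch_size / gradient_accumulation_steps) \
    * num_train_epochs
print(t_total)  # 0 -> warmup_linear(global_step / t_total, ...) divides by zero
```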
transformers
348
closed
output data
03-05-2019 19:50:04
03-05-2019 19:50:04
Wrong upstream I guess. Closing.
transformers
347
closed
Processor for SST-2 task
Added a processor for SST-2 to the `run_classifier` script.
03-05-2019 19:39:29
03-05-2019 19:39:29
Thanks @jplehmann!
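For readers looking for the shape of such a processor, a sketch in the style of the existing `run_classifier.py` processors is shown below; it reuses the script's `DataProcessor`, `InputExample` and `_read_tsv` helpers, and the merged implementation may differ in details (SST-2's train.tsv has a header row and two columns: sentence and label "0"/"1"):
```
import os

class Sst2Processor(DataProcessor):
    def get_train_examples(self, data_dir):
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_labels(self):
        return ["0", "1"]

    def _create_examples(self, lines, set_type):
        examples = []
        for (i, line) in enumerate(lines):
            if i == 0:  # skip the header row
                continue
            guid = "%s-%s" % (set_type, i)
            examples.append(
                InputExample(guid=guid, text_a=line[0], text_b=None, label=line[1]))
        return examples
```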
transformers
346
closed
MRPC Score Lower than Expected
I expect to see MRPC scores between 84-88% as advertised. What I am seeing with different seeds is 79-84% consistently. (I thought perhaps the weight initialization was the issue but seems not to be the case #339.) I am running with the provided command and fp16, using a GCE instance with a Tesla T4. > time python run_classifier.py --task_name MRPC --do_train --do_eval --do_lower_case --data_dir $GLUE_DIR/MRPC/ --bert_model bert-base-uncased --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/mrpc_output18/ --seed 18 --fp16 Example outputs: ``` mrpc_output15/eval_results.txt eval_accuracy = 0.803921568627451 eval_loss = 0.44585930088571474 global_step = 345 loss = 0.42245775305706523 mrpc_output16/eval_results.txt eval_accuracy = 0.8455882352941176 eval_loss = 0.38226841594658645 global_step = 345 loss = 0.25925399116847825 mrpc_output17/eval_results.txt eval_accuracy = 0.7916666666666666 eval_loss = 0.4917685123635273 global_step = 345 loss = 0.24811905570652174 mrpc_output3/eval_results.txt eval_accuracy = 0.8431372549019608 eval_loss = 0.42019053533965467 global_step = 345 loss = 0.2503709876019022 mrpc_output42/eval_results.txt eval_accuracy = 0.8406862745098039 eval_loss = 0.44909875124108556 global_step = 345 loss = 0.21309310249660326 mrpc_output44/eval_results.txt eval_accuracy = 0.8406862745098039 eval_loss = 0.45059946084431574 global_step = 345 loss = 0.10150747223068839 mrpc_output45/eval_results.txt eval_accuracy = 0.8063725490196079 eval_loss = 0.42597512491777834 global_step = 345 loss = 0.29104428498641305 mrpc_output18/eval_results.txt eval_accuracy = 0.8161764705882353 eval_loss = 0.4096583215629353 global_step = 345 ``` Other output: ``` 03/05/2019 18:23:31 - INFO - pytorch_pretrained_bert.modeling - Model config { "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "type_vocab_size": 2, "vocab_size": 30522 } ``` Any ideas on why this is the case? Happy to provide more output.
03-05-2019 18:25:44
03-05-2019 18:25:44
Argh, just realized I was on `0.3.0` which is what pip installed due to some dependencies. Upgrading to `0.6.1` and now I'm getting expected scores: ``` eval_accuracy = 0.8529411764705882 eval_loss = 0.39120761538837473 global_step = 345 loss = 0.17308216924252717 eval_accuracy = 0.8431372549019608 eval_loss = 0.49456917187746835 global_step = 345 loss = 0.12193756103515625 eval_accuracy = 0.875 eval_loss = 0.4023934503396352 global_step = 345 loss = 0.14832657523777174 eval_accuracy = 0.8553921568627451 eval_loss = 0.44353585865567713 global_step = 345 loss = 0.17814078952955162 ```
transformers
345
closed
Not able to import RandomSampler, Getting error "ImportError: cannot import name 'RandomSampler'"?
Not able to import RandomSampler; I am getting the error "ImportError: cannot import name 'RandomSampler'". Did I get the wrong torch version?
03-05-2019 09:24:19
03-05-2019 09:24:19
How do you fix this issue?<|||||>> How do you fix this issue? Try updating your torch version; I found it didn't work in torch 0.4.0, try "torch >= 0.4.1"
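A quick sanity check for anyone hitting this; the import location below is the one the example scripts use, and on very old torch releases the class lives in `torch.utils.data.sampler` instead:
```
import torch
print(torch.__version__)

from torch.utils.data import TensorDataset, DataLoader, RandomSampler

dataset = TensorDataset(torch.arange(10).unsqueeze(1))
loader = DataLoader(dataset, sampler=RandomSampler(dataset), batch_size=4)
for batch in loader:
    print(batch)
```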
transformers
344
closed
BertEmbedding not initialized with `padding_idx=0`
https://github.com/huggingface/pytorch-pretrained-BERT/blob/2152bfeae82439600dc5b5deab057a3c4331c62d/pytorch_pretrained_bert/modeling.py#L696 The BERT embeddings are not initialized with `padding_idx=0`, which may result in non-zero embeddings for zero paddings in some early versions of PyTorch.
03-05-2019 06:23:48
03-05-2019 06:23:48
Could be, do you want to submit a PR to update this?<|||||>Closed by #358, thanks @cdjhz!
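A minimal illustration of the suggested change (a sketch, not the exact patch merged in #358):
```
import torch.nn as nn

vocab_size, hidden_size = 30522, 768
word_embeddings = nn.Embedding(vocab_size, hidden_size, padding_idx=0)

# The row for token id 0 ([PAD]) starts at zero and is not updated by gradients.
print(word_embeddings.weight[0].abs().sum())  # tensor(0., grad_fn=...)
```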
transformers
343
closed
Tokenizer defaults lowercase even when bert_model is cased
https://github.com/huggingface/pytorch-pretrained-BERT/blob/2152bfeae82439600dc5b5deab057a3c4331c62d/pytorch_pretrained_bert/tokenization.py#L77 A clearer behavior would be to check whether or not 'uncased' is in bert_model, and set the default of do_lower_case accordingly.
03-05-2019 04:11:14
03-05-2019 04:11:14
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
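Until the default is derived from the model name, the workaround is to pass the flag explicitly when loading a cased checkpoint:
```
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased", do_lower_case=False)
print(tokenizer.tokenize("Hello World"))  # casing is preserved
```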
transformers
342
closed
Usage example needs [CLS] and [SEP] added post-tokenization
Since probably #176, the usage example results in the special tokens getting normalized in a bad way and the assertion clearly fails. ``` ['[', 'cl', '##s', ']', 'who', 'was', 'jim', 'henson', '[MASK]', '[', 'sep', ']', 'jim', 'henson', 'was', 'a', 'puppet', '##eer', '[', 'sep', ']'] ``` I believe something like this is intended: ``` text1 = "Who was Jim Henson ?" text2 = "Jim Henson was a puppeteer" tokenized_text = ['[CLS]'] + tokenizer.tokenize(text1) + ['[SEP]'] + tokenizer.tokenize(text2) + ['[SEP]'] ``` I really appreciate this project!
03-05-2019 00:00:37
03-05-2019 00:00:37
Argh, I just realized that due to dependency conflicts, pip had installed an old version `0.3.0`. Was fixed here: https://github.com/huggingface/pytorch-pretrained-BERT/issues/303
transformers
341
closed
catch exception if pathlib is not installed
03-04-2019 22:31:41
03-04-2019 22:31:41
Thanks!
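For context, the guard this PR adds looks roughly like the sketch below; the exact fallback in `file_utils.py` may differ:
```
import os

try:
    from pathlib import Path
    PYTORCH_PRETRAINED_BERT_CACHE = Path(os.getenv(
        'PYTORCH_PRETRAINED_BERT_CACHE',
        Path.home() / '.pytorch_pretrained_bert'))
except (AttributeError, ImportError):
    # Python 2 without the pathlib backport: fall back to plain string paths.
    PYTORCH_PRETRAINED_BERT_CACHE = os.getenv(
        'PYTORCH_PRETRAINED_BERT_CACHE',
        os.path.join(os.path.expanduser("~"), '.pytorch_pretrained_bert'))
```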
transformers
340
closed
optimizer.zero_grad() in run_openai_gpt.py?
In `run_openai_gpt.py`, should there be a call to `optimizer.zero_grad()` after updating parameters so that we zero out the gradients between minibatches? https://github.com/huggingface/pytorch-pretrained-BERT/blob/2152bfeae82439600dc5b5deab057a3c4331c62d/examples/run_openai_gpt.py#L212
03-03-2019 23:51:47
03-03-2019 23:51:47
Oh that's a mistake indeed, thanks for pointing out. Fixed on master.
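For readers landing here, the fix amounts to the standard PyTorch pattern; the toy model and data below are purely illustrative:
```
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(3):
    x, y = torch.randn(8, 4), torch.randn(8, 1)
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()  # reset accumulated gradients before the next minibatch
```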
transformers
339
closed
Why are the weights not initialized?
03/03/2019 14:13:01 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForMultiLabelSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias'] 03/03/2019 14:13:01 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForMultiLabelSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']
03-03-2019 06:14:33
03-03-2019 06:14:33
I was wondering this myself. It looks like there's some configuration mismatch -- some parameters found which aren't used, and a few expected that aren't found. I'm not sure if this is expected, since the top-level task-specific classifier is correctly NOT pre-trained... or if it's something more. (question about lower performance moved into new issue) <|||||>I dug up some related issues which confirms my guess above -- this kind of message is expected since the models are not yet find-tuned to the task. https://github.com/huggingface/pytorch-pretrained-BERT/issues/161 https://github.com/huggingface/pytorch-pretrained-BERT/issues/180 <|||||>Yes this is the expected behavior. I don't want to make the warning messages says this is "all good" because in some case, depending on the model you are loading in, this could be an unwanted behavior (not loading all the weights).<|||||>Hello @thomwolf : I continued pre-training with bert-base-uncased without fine tuning on round about 22K sequences and the precision @ K for MaskedLM task did not change at all. Is the result legitimate or do I rather have a problem loading the weights? I received the same warning message/ INFO. The data set is from the automotive domain. At what point can I expect the weights to change? Thank you very much for experience values. <|||||>@viva2202, I did the same here using directly the "run_language_modeling.py" script, but with 11k sequences (I continued pretraining using training data only), and then fine-tuned it using BertForSequenceClassification. Got 1.75% increase in accuracy compared to not continuing pretraining.
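In code, the situation the warning describes is simply the following (num_labels is an example value):
```
from pytorch_pretrained_bert import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# The 'bert.*' parameters come from the pretrained archive; the task head is fresh.
print([n for n, _ in model.named_parameters() if n.startswith("classifier")])
# ['classifier.weight', 'classifier.bias']  -> exactly the names listed in the warning
```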
transformers
338
closed
Fix top k generation for k != 0
Seems like the shapes didn't line up for the comparison. Logits are `(batch_size, values)`. The minima had shape `(batch_size,)` and couldn't be directly compared.
03-03-2019 05:57:09
03-03-2019 05:57:09
Thanks @CatalinVoss!
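For reference, the broadcasting problem can be sketched as below: the k-th largest value per row has to be kept as a (batch_size, 1) column so it compares against the full (batch_size, vocab) logits. This is illustrative and not necessarily the exact merged code:
```
import torch

def top_k_logits(logits, k):
    if k == 0:
        return logits
    values, _ = torch.topk(logits, k)           # (batch_size, k)
    min_values = values[:, -1].unsqueeze(1)     # (batch_size, 1) instead of (batch_size,)
    return torch.where(logits < min_values,
                       torch.full_like(logits, -1e10),
                       logits)

print(top_k_logits(torch.randn(2, 10), k=3))
```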
transformers
337
closed
Allow tokenization of sequences > 512 for caching
For many applications requiring randomized data access, it's easier to cache the tokenized representations than the words. So why not turn this into a warning?
03-03-2019 00:30:43
03-03-2019 00:30:43
OK, done. Thanks!<|||||>Nice, thanks @CatalinVoss (and @rodgzilla)!
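The intended workflow after this change, roughly sketched: tokenize long documents once for caching, and truncate to the model's 512-token limit only when building model inputs:
```
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

long_text = "some very long document " * 200
tokens = tokenizer.tokenize(long_text)                     # fine to cache, even if > 512
input_ids = tokenizer.convert_tokens_to_ids(tokens[:512])  # truncate before feeding BERT
```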
transformers
336
closed
F1 and EM scores output for run_squad.py
Hi, I was doing prediction after fine-tuning the bert-base model and I was wondering whether the f1 and em scores will show automatically since I only saw the following two log outputs 03/02/2019 22:20:05 - INFO - __main__ - Writing predictions to: /tmp/debug_squad/predictions.json 03/02/2019 22:20:05 - INFO - __main__ - Writing nbest to: /tmp/debug_squad/nbest_predictions.json Where am I able to get those scores? Thanks for any help!
03-02-2019 22:42:27
03-02-2019 22:42:27
Use the Squad python scripts available on their website<|||||>Is that run_squad.py? I used that one but didn’t see output scores, having the output predictions files though. Thanks! abeljim <notifications@github.com>于2019εΉ΄3月3ζ—₯ 周ζ—₯上午3:03ε†™ι“οΌš > Use the Squad python scripts available on their website > > β€” > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/pytorch-pretrained-BERT/issues/336#issuecomment-469011833>, > or mute the thread > <https://github.com/notifications/unsubscribe-auth/At7y4r3yC9C41y_BrZptqWiUtrmLwnuMks5vS6vtgaJpZM4baqZe> > . > <|||||>https://rajpurkar.github.io/SQuAD-explorer/ get the eval script for the correct version<|||||>Thanks so much, really helps! abeljim <notifications@github.com>于2019εΉ΄3月3ζ—₯ 周ζ—₯上午3:08ε†™ι“οΌš > https://rajpurkar.github.io/SQuAD-explorer/ get the eval script for the > correct version > > β€” > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/pytorch-pretrained-BERT/issues/336#issuecomment-469012224>, > or mute the thread > <https://github.com/notifications/unsubscribe-auth/At7y4u9BkK-km5KUXJuBfzM3loMnr7gwks5vS60wgaJpZM4baqZe> > . > <|||||>@abeljim Hi Abel, I got a very strange problem in running the prediction only for run_squad.py and wonder if you have any idea about why this happens. I first ran the following codes to both do train and predict on the files: `python run_squad_final.py --bert_model bert-base-uncased --do_train --do_predict --do_lower_case --train_file train-v2.0.json --predict_file dev-v2.0.json --train_batch_size 6 --learning_rate 3e-5 --num_train_epochs 1.0 --max_seq_length 384 --doc_stride 128 --fp16 --version_2_with_negative --null_score_diff_threshold -1 --output_dir ./temp/ /` The output predictions.json file looks normal, but when I tried to delete the "--do_train" part and only do the prediction on the same file, it gives very different and strange outputs, many of the answers are repetitive as below and the scores are like only 0.1: (The output in predictions are like:) > "68cf05f67fd29c6f129fe2fb9": "mands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their", > "f5fead9187d56af2bdbfcb921": "mands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their", > "f9183ead5bb93aaa12ea37245": "mands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their", > "d583847c96cbbbfaaa99dfcad": "Rollo, agreed to swear fealty", > "e06fbbdb50af7ab3faefde618": "Rollo, agreed to swear fealty", > "cbb48eaacbbbfcccefb7aab7f": "Rollo, agreed to swear fealty", Do you know what caused the problem?<|||||>Yeah sorry I forgot to respond. The way the it runs if the train flag is off that it will load a pretrained version of bert and run the prediction on that. A way to get this to work is to modify the file to load a saved trained version instead. I could add this functionality but I'm busy with school for the next two weeks. Modify lines 1011 to 1025 for a quick fix
transformers
335
closed
Feature Request: GPT2 fine tuning
03-01-2019 17:05:14
03-01-2019 17:05:14
Yes, feel free to open a PR if you want. It's just a regular PyTorch model so all the standard ways of training a PyTorch model work.<|||||>Is it possible to fine-tune GPT2 on downstream tasks currently?<|||||>Same question
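As a starting point, a single language-modelling training step with the GPT-2 head in this library looks roughly like the sketch below; the forward signature may differ slightly between releases:
```
import torch
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

input_ids = torch.tensor([tokenizer.encode("Some text from my fine-tuning corpus.")])

model.train()
loss = model(input_ids, lm_labels=input_ids)  # the LM head returns the loss when labels are given
loss.backward()
optimizer.step()
optimizer.zero_grad()
```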
transformers
334
closed
pip install [--editable] . ---> Error
Hi, when using "pip install [--editable] . ", after cloned the git. I'm getting this error: Exception: Traceback (most recent call last): File "/venv/lib/python3.5/site-packages/pip/_vendor/packaging/requirements.py", line 93, in __init__ req = REQUIREMENT.parseString(requirement_string) File "/venv/lib/python3.5/site-packages/pip/_vendor/pyparsing.py", line 1814, in parseString raise exc File "/venv/lib/python3.5/site-packages/pip/_vendor/pyparsing.py", line 1804, in parseString loc, tokens = self._parse( instring, 0 ) File "/venv/lib/python3.5/site-packages/pip/_vendor/pyparsing.py", line 1548, in _parseNoCache loc,tokens = self.parseImpl( instring, preloc, doActions ) File "/venv/lib/python3.5/site-packages/pip/_vendor/pyparsing.py", line 3722, in parseImpl loc, exprtokens = e._parse( instring, loc, doActions ) File "/venv/lib/python3.5/site-packages/pip/_vendor/pyparsing.py", line 1552, in _parseNoCache loc,tokens = self.parseImpl( instring, preloc, doActions ) File "/venv/lib/python3.5/site-packages/pip/_vendor/pyparsing.py", line 3502, in parseImpl raise ParseException(instring, loc, self.errmsg, self) pip._vendor.pyparsing.ParseException: Expected stringEnd (at char 11), (line:1, col:12) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/venv/lib/python3.5/site-packages/pip/_internal/cli/base_command.py", line 179, in main status = self.run(options, args) File "/venv/lib/python3.5/site-packages/pip/_internal/commands/install.py", line 289, in run self.name, wheel_cache File "/venv/lib/python3.5/site-packages/pip/_internal/cli/base_command.py", line 269, in populate_requirement_set wheel_cache=wheel_cache File "/venv/lib/python3.5/site-packages/pip/_internal/req/constructors.py", line 280, in install_req_from_line extras = Requirement("placeholder" + extras_as_string.lower()).extras File "/venv/lib/python3.5/site-packages/pip/_vendor/packaging/requirements.py", line 97, in __init__ requirement_string[e.loc : e.loc + 8], e.msg pip._vendor.packaging.requirements.InvalidRequirement: Parse error at "'[--edita'": Expected stringEnd Did someone saw anything like that? any idea?
03-01-2019 08:45:48
03-01-2019 08:45:48
Did you run python install --editable .<|||||>There's a way to install cloned repositories with pip, but the easiest way is to use plain python for this: After cloning and changing into the pytorch-pretrained-BERT directory, run `python setup.py develop`.<|||||>Yes, please follow the installation instructions on the readme [here](https://github.com/huggingface/pytorch-pretrained-BERT#installation)<|||||>@thomwolf I have exactly the same problem after following readme installation (mentioned). I am using pytorch. python -m pytest -sv ./transformers/tests/ have two failed tests. transformers/tests/modeling_bert_test.py::BertModelTest::test_bert_model PASSED transformers/tests/modeling_bert_test.py::BertModelTest::test_bert_model_as_decoder FAILED transformers/tests/modeling_bert_test.py::BertModelTest::test_config PASSED transformers/tests/modeling_bert_test.py::BertModelTest::test_determinism PASSED transformers/tests/modeling_bert_test.py::BertModelTest::test_for_masked_lm PASSED transformers/tests/modeling_bert_test.py::BertModelTest::test_for_masked_lm_decoder FAILED transformers/tests/modeling_bert_test.py::BertModelTest::test_for_multiple_choice PASSED ======================================================= 2 failed, 403 passed, 227 skipped, 36 warnings in 49.14s ====================================================== @bheinzerling, python setup.py develop can go through ok. But the test result is the same as above: two are two failed tests. <|||||>Anybody know why "pip install [--editable] ." failed here? It is some missing python package needed for this?<|||||>Please open a command line and enter `pip install git+https://github.com/huggingface/transformers.git` for installing Transformers library from source. However, **Transformers v-2.2.0 has been just released yesterday** and you can install it from PyPi with `pip install transformers` Try to install this latest version and launch the tests suite and keep us updated on the result! > Anybody know why "pip install [--editable] ." failed here? It is some missing python package needed for this?<|||||>@TheEdoardo93 This is indeed the latest version installed( installed a few hours before) Name: transformers Version: 2.2.0 Summary: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch Home-page: https://github.com/huggingface/transformers Author: Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors Author-email: thomas@huggingface.co License: Apache Location: /home/pcl/venvpytorch/lib/python3.6/site-packages Requires: sacremoses, numpy, requests, boto3, regex, tqdm, sentencepiece Required-by: <|||||>@TheEdoardo93 After uninstall and reinstall with pip install git+https://github.com/huggingface/transformers.git. 
Still the same results as before (two are failed) ======================================================= 2 failed, 403 passed, 227 skipped, 36 warnings in 49.13s ======= Name: transformers Version: 2.2.0 Summary: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch Home-page: https://github.com/huggingface/transformers Author: Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors Author-email: thomas@huggingface.co License: Apache Location: /home/pcl/venvpytorch/opensource/transformers Requires: numpy, boto3, requests, tqdm, regex, sentencepiece, sacremoses Required-by: <|||||>When I've executed `python -m pytest -sv ./transformers/tests/`, I've obtained the following result: `595 passed, 37 skipped, 36 warnings in 427.58s (0:07:07)`. When I've executed `python -m pytest -sv ./examples/`, I've obtained the following result: `15 passed, 7 warnings in 77.09s (0:01:17)`. > @TheEdoardo93 > After uninstall and reinstall with pip install git+https://github.com/huggingface/transformers.git. > Still the same results as before (two are failed) > > ======================================================= 2 failed, 403 passed, 227 skipped, 36 warnings in 49.13s ======= > > Name: transformers > Version: 2.2.0 > Summary: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch > Home-page: https://github.com/huggingface/transformers > Author: Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors > Author-email: [thomas@huggingface.co](mailto:thomas@huggingface.co) > License: Apache > Location: /home/pcl/venvpytorch/opensource/transformers > Requires: numpy, boto3, requests, tqdm, regex, sentencepiece, sacremoses > Required-by:<|||||>I did not install TensorFlow which is the reason for skips. I need reasons for failure. I guess I will install TensorFlow and see how it goes. <|||||>In the `README.md` file, Transformers' authors says to install TensorFlow 2.0 and PyTorch 1.0.0+ **before** installing Transformers library. > I did not install TensorFlow which is the reason for skips. I need reasons for failure. I guess I will install TensorFlow and see how it goes.<|||||>"First you need to install one of, or both, TensorFlow 2.0 and PyTorch." I don't think that is the reason for failure. <|||||>Hi, I believe these two tests fail with an error similar to: ``` RuntimeError: expected device cpu and dtype Long but got device cpu and dtype Bool ``` If I'm not mistaken you're running with torch 1.2 and we're testing with torch 1.3. This is a bug as we aim to support torch from 1.0.1+. Thank you for raising the issue, you can fix it by installing torch 1.3+ while we work on fixing this.<|||||>Thanks! Yeah, I found it too by verbose mode. I suddenly remember some tensorflow code have similar problem before. In my case,it is some const, I just changed it from int to float. Indeed I am using torch1.2. Will see whether it works here or not. Any idea why the pip -e option is not working? On Wed, Nov 27, 2019 at 22:49 Lysandre Debut <notifications@github.com> wrote:r > Hi, I believe these two tests fail with an error similar to: > > RuntimeError: expected device cpu and dtype Long but got device cpu and dtype Bool > > If I'm not mistaken you're running with torch 1.2 and we're testing with > torch 1.3. 
This is a bug as we aim to support torch from 1.0.1+. Thank you > for raising the issue, you can fix it by installing torch 1.3+ while we > work on fixing this. > > β€” > You are receiving this because you commented. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/334?email_source=notifications&email_token=AA6O5IG4IUK6Z3ESWAIYOXLQV2CJDA5CNFSM4G3CE3DKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFJXR4I#issuecomment-559118577>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AA6O5IFKBX3QB5AVMTXA5P3QV2CJDANCNFSM4G3CE3DA> > . > <|||||>The `pip install -e .` is probably working, it's just that some tests are failing due to code not tests on Torch 1.2.0. The install should have worked fine, and you should be fine with using every component in the library with torch 1.2.0 except the decoder architectures on which we are working now. Updating to torch 1.3.0 means it will work with decoder architectures too.<|||||>1.3 torch must work with cuda10.1? I have 10.0 for tensorflow which is still having problem with 10.1. Thanks for the info. Really appreciate ur fast response! On Wed, Nov 27, 2019 at 23:23 Lysandre Debut <notifications@github.com> wrote: > The pip install -e . is probably working, it's just that some tests are > failing due to code not tests on Torch 1.2.0. > > The install should have worked fine, and you should be fine with using > every component in the library with torch 1.2.0 except the decoder > architectures on which we are working now. Updating to torch 1.3.0 means it > will work with decoder architectures too. > > β€” > You are receiving this because you commented. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/334?email_source=notifications&email_token=AA6O5ICNJ4IRK65JEA6X2DTQV2GIBA5CNFSM4G3CE3DKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFJ3AOQ#issuecomment-559132730>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AA6O5IDZATDEY7PA5YMYF6TQV2GIBANCNFSM4G3CE3DA> > . > <|||||>In the official PyTorch documentation, in the [installation](https://pytorch.org/get-started/locally/) section, you can see that you can install PyTorch 1.3 with CUDA 9.2 or CUDA 10.1, so PyTorch 1.3 + CUDA 10.1 works! > 1.3 torch must work with cuda10.1? I have 10.0 for tensorflow which is still having problem with 10.1. Thanks for the info. Really appreciate ur fast response! > […](#) > On Wed, Nov 27, 2019 at 23:23 Lysandre Debut ***@***.***> wrote: The pip install -e . is probably working, it's just that some tests are failing due to code not tests on Torch 1.2.0. The install should have worked fine, and you should be fine with using every component in the library with torch 1.2.0 except the decoder architectures on which we are working now. Updating to torch 1.3.0 means it will work with decoder architectures too. β€” You are receiving this because you commented. Reply to this email directly, view it on GitHub <#334?email_source=notifications&email_token=AA6O5ICNJ4IRK65JEA6X2DTQV2GIBA5CNFSM4G3CE3DKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFJ3AOQ#issuecomment-559132730>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AA6O5IDZATDEY7PA5YMYF6TQV2GIBANCNFSM4G3CE3DA> .<|||||>What is the difference between the following? - `pip install [--editable] .` - `pip install -e .` - `python setup.py develop` The first works doesn't work for me, yet is in the readme. The other two do. 
If this is system-dependent, shouldn't this be added to the readme?<|||||>@internetcoffeephone, using square brackets in a command line interface is a [common way](https://en.wikipedia.org/wiki/Command-line_interface#Command_description_syntax) to refer to optional parameters. The first command means that you can either use `pip install .` or `pip install --editable .`<|||||>@LysandreJik That makes sense, thanks for your answer! Still, I'd argue against putting it in the readme like that. Firstly because it doesn't produce a sensible error message - secondly because anyone who wants an editable installation will know about that optional parameter already. As for the difference between the above commands, I found [this](https://stackoverflow.com/a/30306403) page: > Try to avoid calling setup.py directly, it will not properly tell pip that you've installed your package. >With pip install -e: >>For local projects, the β€œSomeProject.egg-info” directory is created relative to the project path. This is one advantage over just using setup.py develop, which creates the β€œegg-info” directly relative the current working directory.<|||||>I removed `[--editable]` from the instructions because I found them confusing (before stumbling upon this issue).
transformers
333
closed
Add lm and next sentence accuracy for run_lm_finetuning example
03-01-2019 08:34:15
03-01-2019 08:34:15
Yes, feel free to submit a PR for that.
transformers
332
closed
Train with custom data on bert question answering
Hi all, I have trained BERT question answering on the SQuAD v1 dataset. As I was using Colab, which was slow, I used 5000 examples from SQuAD and trained the model, which took 2 hrs and gave an accuracy of 51%. My questions are: 1) I saved the pytorch_model.bin file after training. Can I use this new bin file and train again on the next 5000 examples from SQuAD? Should I replace the old pytorch_model.bin file in the uncased folder with this new one, and what steps do I need to follow? 2) I have custom data and want to train on custom question/answer pairs. Do I need to append the new data to the SQuAD dataset, or use the new file on its own as the training data? How can I leverage the SQuAD-trained model to further train on custom data? 3) Can anybody help me with a script to convert my data to SQuAD format? Detailed steps for leveraging the SQuAD-trained model and training on custom data on top of it are appreciated.
02-28-2019 08:42:09
02-28-2019 08:42:09
1. You can put the `pytorch_model.bin` file that was output from your finetuning on squad in some other folder and set that folder as the bert_model='path/to/this/folder'. The folder needs to have the files `bert_config.json` and `vocab.txt` from the first pretrained model you used though. 2. I think you can first train on squad, then use the model to further train on your custom QA dataset, using that model (i.e. set bert_model as explained in 1.) 3. You can read the squad training data with: ``` import json input_file = 'train-v1.1.json' with open(input_file, "r", encoding='utf-8') as reader: input_data = json.load(reader)["data"] ``` The input data, under the top level "data" tag, holds "paragraphs" tags, which in turn holds texts in "context" tags, and questions and answers in "qas" tags. You can check the structure of the texts/questions/answers like this. ``` from pprint import pprint pprint(input_data[0]) {'paragraphs': [{'context': 'Architecturally, the school has a Catholic ' "character. Atop the Main Building's gold dome is " 'a golden statue of the Virgin Mary. Immediately ' 'in front of the Main Building and facing it, is a ' 'copper statue of Christ with arms upraised with ' 'the legend "Venite Ad Me Omnes". Next to the Main ' 'Building is the Basilica of the Sacred Heart. ' 'Immediately behind the basilica is the Grotto, a ' 'Marian place of prayer and reflection. It is a ' 'replica of the grotto at Lourdes, France where ' 'the Virgin Mary reputedly appeared to Saint ' 'Bernadette Soubirous in 1858. At the end of the ' 'main drive (and in a direct line that connects ' 'through 3 statues and the Gold Dome), is a ' 'simple, modern stone statue of Mary.', 'qas': [{'answers': [{'answer_start': 515, 'text': 'Saint Bernadette Soubirous'}], 'id': '5733be284776f41900661182', 'question': 'To whom did the Virgin Mary allegedly ' 'appear in 1858 in Lourdes France?'}, {'answers': [{'answer_start': 188, 'text': 'a copper statue of Christ'}], 'id': '5733be284776f4190066117f', 'question': 'What is in front of the Notre Dame Main ' 'Building?'}, {'answers': [{'answer_start': 279, 'text': 'the Main Building'}], 'id': '5733be284776f41900661180', 'question': 'The Basilica of the Sacred heart at ' 'Notre Dame is beside to which ' 'structure?'}, {'answers': [{'answer_start': 381, 'text': 'a Marian place of prayer and ' 'reflection'}], 'id': '5733be284776f41900661181', 'question': 'What is the Grotto at Notre Dame?'}, {'answers': [{'answer_start': 92, 'text': 'a golden statue of the Virgin ' 'Mary'}], 'id': '5733be284776f4190066117e', 'question': 'What sits on top of the Main Building ' 'at Notre Dame?'}]}, {'context': "As at most other universities, Notre Dame's .... (many more context and qas tags are printed here) ``` The conversion from your custom data to this format depends on the current format of your data. But if you can create a python dict looking like this with your data, you can make a json file from it and use it as training data in the run_squad.py script.<|||||>@navdeep1604 or @maxlund or @thomwolf : Was a custom training done and tested? We faced few issues like: - After training, previous correct questions started getting wrong. - All questions are started answering same answer - All questions started answering something wrong Would anyone like to share observations, if same or different problems faced. 
And curious to know what actions or tricks were made to fix these issues.<|||||>This might help you setup a QA system with custom data, it's built on top of this repo: https://github.com/cdqa-suite/cdQA<|||||>Hi @SandeepBhutani , I faced similar issue, since my custom training data (240 QA pairs) was very less. <|||||>Hi, for anyone who has made a custom QA dataset, how did you go about get the start position and end position for the answers or did you already have them easily accessible? I have a large dataset set of questions with corresponding context given by people; however, I don't have the specific answers as there can be many acceptable answers. My goal is to determine whether the context contains an answer to the question (similar to squad 2.0). Preliminary results after fine tuning on Squad 2.0 weren't super great so I wanted to add more examples. Any recs on how I could label my data in the correct format for say a bert or would I need to crowd source labels from a vendor?<|||||>Hi @cformosa, The package for QA system mentioned above also has an annotation tool that can help you with that task: https://github.com/cdqa-suite/cdQA-annotator<|||||>Thanks for the link @andrelmfarias . I was looking over it and it seems extremely useful for sure. Seems like it will take a long time to generate a large corpus of training data but nevertheless its seems quite helpful. Thanks!<|||||>Hi @andrelmfarias, thank you for sharing this great resource! The cdQA-suite seems to cater to a specific kind of question answering, as described in your [Medium](https://towardsdatascience.com/how-to-create-your-own-question-answering-system-easily-with-python-2ef8abc8eb5) article. To summarise, it looks for the answer to a question from a collection of documents -- all these documents most likely contain different kinds of information regarding a particular topic. For example, a system built using cdQA could contain 10 different documents regarding 10 different historical periods, and you could ask it a question about any of these time periods, and it would search for the relevant answer within these 10 documents. However, if the system you want to build is such: you have 10 court orders, and you want to ask the system the same set of questions for each court order. For example: 1. When was the order filed? 2. Who filed the order? 3. Who was the order filed against? 4. Was there a settlement? In this case, I wouldn't want the system to search through every document, but instead look for answers within the document itself. Exactly like SQuaD 2.0. My assessment is that I wouldn't be able to build such a system using [cdQA](https://github.com/cdqa-suite/cdQA) but I could use the [cdQA annotator](https://github.com/cdqa-suite/cdQA-annotator) to build my dataset. Is that a sound assessment? Also, I'm curious to hear your thoughts on how feasible it would be to expect good results when the `context` is rather long (anywhere between 2-10 pages). Thank you :)<|||||>Hi @rsomani95 , As your questions are particularly related to `cdQA` I opened an issue with your questions in our repository to avoid spamming here: https://github.com/cdqa-suite/cdQA/issues/275 I just answered them there.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
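To complement point 3 earlier in this thread, a hedged example of writing custom QA pairs into the SQuAD v1.1 layout; the field names follow the structure printed above, and answer_start is a character offset into the context:
```
import json

context = "The warranty period for this product is 24 months."
answer_text = "24 months"

custom_data = {
    "version": "1.1",
    "data": [{
        "title": "my_document",
        "paragraphs": [{
            "context": context,
            "qas": [{
                "id": "custom-0001",
                "question": "How long is the warranty period?",
                "answers": [{"text": answer_text,
                             "answer_start": context.index(answer_text)}],
            }],
        }],
    }],
}

with open("custom_train.json", "w", encoding="utf-8") as f:
    json.dump(custom_data, f, ensure_ascii=False)
```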
transformers
331
closed
Can BERT do the next-word-prediction task, given that it is bidirectional?
How can we adapt BERT to do the next-word-prediction task? Thank you very much!
02-28-2019 08:36:02
02-28-2019 08:36:02
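One common approximation, since BERT was not trained as a left-to-right language model, is to append a [MASK] token and let the masked-LM head fill it in (a sketch, not an exact next-word predictor):
```
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

tokens = ["[CLS]"] + tokenizer.tokenize("The capital of France is") + ["[MASK]", "[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    predictions = model(input_ids)            # (1, seq_len, vocab_size)

mask_index = tokens.index("[MASK]")
predicted_id = predictions[0, mask_index].argmax().item()
print(tokenizer.convert_ids_to_tokens([predicted_id]))
```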
transformers
330
closed
Can we fine-tune our model on a Chinese corpus?
Is this pre-trained BERT good for NER or classification on Chinese corpus? Thanks.
02-28-2019 06:43:35
02-28-2019 06:43:35
You should probably use the `bert-base-chinese` model to start from. Please refer to the original bert tensorflow implementation from Google. There are a lot of discussion about chinese models in the issues of this repo.
transformers
329
closed
run_lm_finetuning - ZeroDivisionError
Trying to get the run_lm_finetuning example working on the GPU machine below, but I am getting a ZeroDivisionError. Any idea what could be causing this error? ![image](https://user-images.githubusercontent.com/47925301/53542425-6c966800-3aec-11e9-9681-87432bcd6aee.png) python /home/ec2-user/SageMaker/bert_pytorch/pytorch-pretrained-BERT/examples/run_lm_finetuning.py --bert_model bert-base-uncased --do_train --train_file /home/ec2-user/SageMaker/bert_pytorch/pytorch-pretrained-BERT/samples/sample_text.txt --output_dir /home/ec2-user/SageMaker/bert_pytorch/pytorch-pretrained-BERT/models --num_train_epochs 5.0 --learning_rate 3e-5 --train_batch_size 32 --max_seq_length 32 ![image](https://user-images.githubusercontent.com/47925301/53542336-01e52c80-3aec-11e9-9966-beecb8755d01.png)
02-28-2019 05:05:55
02-28-2019 05:05:55
Seems like an error with `t_total`. `t_total` is the number of training optimization steps of the optimizer defined [here](num_train_optimization_steps) in the `run_lm_finetuning` example. Can you make sure it's not zero?<|||||>Your `batch_size` of 32 is too big for such a small `train_file`, i.e. sample_text.txt. Try setting `batch_size` to 16 or send in a larger train file with more text. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
328
closed
PyTorch Huggingface BERT-NLP for Named Entity Recognition
I have been using your PyTorch implementation of Google's [BERT][1] by [HuggingFace][2] for the MADE 1.0 dataset for quite some time now. Up until last time (11-Feb), I had been using the library and getting an **F-Score** of **0.81** for my Named Entity Recognition task by Fine Tuning the model. But this week when I ran the exact same code which had compiled and run earlier, it threw an error when executing this statement: input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in tokenized_texts], maxlen=MAX_LEN, dtype="long", truncating="post", padding="post") > ValueError: Token indices sequence length is longer than the specified > maximum sequence length for this BERT model (632 > 512). Running this > sequence through BERT will result in indexing errors The full code is available in this [colab notebook][3]. To get around this error I modified the above statement to the one below by taking the first 512 tokens of any sequence and made the necessary changes to add the index of [SEP] to the end of the truncated/padded sequence as required by BERT. input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt[:512]) for txt in tokenized_texts], maxlen=MAX_LEN, dtype="long", truncating="post", padding="post") The result shouldn't have changed because I am only considering the first 512 tokens in the sequence and later truncating to 75 as my (MAX_LEN=75) but my **F-Score** has dropped to **0.40** and my **precision** to **0.27** while the **Recall** remains the same **(0.85)**. I am unable to share the dataset as I have signed a confidentiality clause but I can assure all the preprocessing as required by BERT has been done and all extended tokens like (Johanson -> Johan ##son) have been tagged with X and replaced later after the prediction as said in the [BERT Paper][4]. Has anyone else faced a similar issue or can elaborate on what might be the issue or what changes the PyTorch (Huggingface) has done on their end recently? [1]: https://github.com/google-research/bert#fine-tuning-with-bert [2]: https://github.com/huggingface/pytorch-pretrained-BERT [3]: https://colab.research.google.com/drive/1JxWdw1BjXZCFC2a8IwtZxvvq4rFGcxas [4]: https://arxiv.org/abs/1810.04805
02-27-2019 20:57:13
02-27-2019 20:57:13
I've found a fix to get around this. Running the same code with pytorch-pretrained-bert==0.4.0 solves the issue and the performance is restored to normal. There's something messing with the model performance in BERT Tokenizer or BERTForTokenClassification in the new update which is affecting the model performance. Hoping that HuggingFace clears this up soon. :) Thanks.<|||||>> There's something messing with the model performance in BERT Tokenizer or BERTForTokenClassification in the new update which is affecting the model performance. > Hoping that HuggingFace clears this up soon. :) Sounds like the issue should remain open?<|||||>Oh. I didn't know I closed the issue. Let me reopen it now. Thanks. On Tue, 5 Mar, 2019, 10:57 AM John Lehmann, <notifications@github.com> wrote: > There's something messing with the model performance in BERT Tokenizer or > BERTForTokenClassification in the new update which is affecting the model > performance. > Hoping that HuggingFace clears this up soon. :) > > Sounds like the issue should remain open? > > β€” > You are receiving this because you modified the open/close state. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/pytorch-pretrained-BERT/issues/328#issuecomment-469814216>, > or mute the thread > <https://github.com/notifications/unsubscribe-auth/AcM_oJw9xYJ3ppFG_egEJTrMQq22ERKhks5vTr4wgaJpZM4bVcDQ> > . > <|||||>Sorry about that. Didn't realise I closed the issue. Reopened it now. :)<|||||>Seems strange that the tokenization changed. So you were only having sequence with less than 512 tokens before and now some sequences are longer? Without having access to your dataset I can't really help you but if you can compare the tokenized sequences in your dataset with pytorch-pretrained-bert==0.4.0 versus sequences tokenized with the current pytorch-pretrained-bert==0.6.1 to identify a sequence which is tokenized differently it could help find the root of the issue. Then maybe you can just post some part of a sequence or example which is tokenized differently without breaching your confidentiality clause?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I had the same issue when trying to use it with Flair for text classification. Can I know the root cause of this issue? Does this mean that my text part in the dataset is too long? <|||||>Yes, BERT only accepts inputs smaller or equal to 512 tokens.<|||||>> Seems strange that the tokenization changed. > > So you were only having sequence with less than 512 tokens before and now some sequences are longer? > > Without having access to your dataset I can't really help you but if you can compare the tokenized sequences in your dataset with pytorch-pretrained-bert==0.4.0 versus sequences tokenized with the current pytorch-pretrained-bert==0.6.1 to identify a sequence which is tokenized differently it could help find the root of the issue. > > Then maybe you can just post some part of a sequence or example which is tokenized differently without breaching your confidentiality clause? I think I found a little bug in tokenization.py that may be related to this issue. I was facing a similar problem that using the newest version leads to a huge accuracy drop (from 88% to 22%) in a very common multi-class news title classification task. 
Using pytorch-pretrained-bert==0.4.0 was actually a workaround so I did a comparison of the tokenization logs of these two versions. the main problem was that many tokens have different ids during training and evaluation. Compared to 0.4.0, the newest version has an additional function that saves the vocabulary to the output_dir/vocab.txt after training and then loads this generated vocab.txt instead during evaluation. In my case, this generated vocab.txt differs from the original one because in https://github.com/huggingface/pytorch-pretrained-BERT/blob/3763f8944dc3fef8afb0c525a2ced8a04889c14f/pytorch_pretrained_bert/tokenization.py#L65 the tokenizer deletes all the trailing spaces. This actually strips different tokens, say a normal space and a non-break space into an identical empty token "". After changing this line to "token = token.rstrip("Β₯n") ", I was able to reproduce the expected accuracy using the newest version <|||||>@Ezekiel25c17 I'm a bit surprised that training spaces would be important in the vocabulary so I would like to investigate this deeper. Can you give me the reference of the following elements you were using in your tests: - the python version, - versions of pytorch-pretrained-bert - the pretrained model, - the vocabulary (probably same as the model I guess), - the example script. So I can reproduce the behavior<|||||>@thomwolf yes sure, - Python 3.6.5 - pytorch_pretrained_bert=0.6.2 - pretrained model - [download link](http://nlp.ist.i.kyoto-u.ac.jp/DLcounter/lime.cgi?down=http://nlp.ist.i.kyoto-u.ac.jp/nl-resource/JapaneseBertPretrainedModel/Japanese_L-12_H-768_A-12_E-30_BPE.zip&name=Japanese_L-12_H-768_A-12_E-30_BPE.zip) - vocab.txt and pytorch_model.bin are contained - trained using Japanese Wikipedia - example script: run_classifier.py with a little modification to suit for a multi-class classification - also, you may need to comment out this line in tokenization.py because Japanese contains many Chinese characters https://github.com/huggingface/pytorch-pretrained-BERT/blob/3763f8944dc3fef8afb0c525a2ced8a04889c14f/pytorch_pretrained_bert/tokenization.py#L235 Maybe the point can be explained using the following example: - let's say we have a bert_model/vocab.txt contains only four tokens: 'a', 'b ', 'c', 'b' - then after loading it during training, vocab_train = {'a':0, 'c':2, 'b':3} - the saved output_dir/vocab.txt will be something like: 'a', 'c', 'b' - finally when loading output_dir/vocab.txt during evaluation, vocab_eval = {'a':0, 'c':1, 'b':2} <|||||>@Ezekiel25c17 Shuffled indices would make sense for the accuracy to drop. @thomwolf I had longer sequences before too but in pytorch-pretrained-bert==0.4.0 the statement `input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in tokenized_texts], maxlen=MAX_LEN, dtype=”long”, truncating=”post”, padding=”post”)` did not have a very strict implementation but in 0.6.1 it threw a Value Error which I overcame by truncating the sequences to 512 before feeding it to the "tokenizer.convert_tokens_to_ids(txt)" function. Either way, I was using only the first 75 tokens of the sentence (MAX_LEN=75). So it didn't matter to me. When I was re-running the same code this was the only statement that threw an error which was why I thought there must have been a change in this functionality in the update.<|||||>The issue is still there (current master or 1.0.0. release). Looks like 'BertForTokenClassification' is broken since 0.4.0 . 
With current version any trained model produces very low scores (dozens of percentage points lower than 0.4.0).<|||||>Sorry for misleading comment. BertForTokenClassification is fine, I just did not use the proper padding label (do not use 'O' label for padding, use a separate label, e.g. '[PAD]').<|||||>@IINemo if you are using an attention mask, then wouldn't the label for the padding not matter at all? <|||||>Hi, If you use β€œO” in versions of pytorch pretrained bert >= 0.5.0, the problem happens because loss on padded tokens is ignored, then any wrong output of the model on padded tokens will not be penalized and the model will learn wrong signal for labels β€œO”. The full fixed version of the code that does sequence tagging with BERT and newest version of pytorch pretrained bert is here: https://github.com/IINemo/bert_sequence_tagger There is a class SequenceTaggerBert that works with tokenized sequences (e.g., nltk tokenizer) and does all the necessary preprocessing under the hood. Best On Wed, Sep 11, 2019 at 9:50 AM Akash Saravanan <notifications@github.com> wrote: > @IINemo <https://github.com/IINemo> if you are using an attention mask, > then wouldn't the label for the padding not matter at all? > > β€” > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/pytorch-transformers/issues/328?email_source=notifications&email_token=AFAVG3P4WSZIAUGWKBZJDXTQJCIJRA5CNFSM4G2VYDIKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD6NOZYQ#issuecomment-530246882>, > or mute the thread > <https://github.com/notifications/unsubscribe-auth/AFAVG3NTSU4CCAZYWSACMPLQJCIJRANCNFSM4G2VYDIA> > . > <|||||>> Yes, BERT only accepts inputs smaller or equal to 512 tokens. Hi , I wanted to trained BERT for text more than 512 tokens ,I can not truncate text to 512 as there will be loss of information in that case.Could you please help how can I handle this or any other suggestion to build customized NER for my usecase using BERT. Thanks
transformers
327
closed
Issue#324: warmup linear fixes
Fixes for [Issue#324](https://github.com/huggingface/pytorch-pretrained-BERT/issues/324). - Using the same schedule functions in BertAdam and OpenAIAdam, fixing `warmup_linear` of OpenAIAdam - fix for negative learning rate after t_total for `warmup_linear` - some more docstrings - warning when t_total is exceeded with `warmup_linear`, implemented inside `.step()` of the optimizer (maybe not that nice). Warning is printed on every batch update.
02-27-2019 18:10:13
02-27-2019 18:10:13
Great, thanks @lukovnikov!
transformers
326
closed
run_classifier with evaluation job only
Thanks for such an awesome project. However, I have encountered a problem. After training the model, I just want to run eval on another dataset with the trained model, so I only enable do_eval. However, it gives me this error: Traceback (most recent call last): File "run_classifier_torch.py", line 687, in <module> main() File "run_classifier_torch.py", line 677, in main 'loss': tr_loss/nb_tr_steps} UnboundLocalError: local variable 'tr_loss' referenced before assignment It seems that loss and tr_loss are only defined during training, so skipping the training step triggers this error. Solution: use eval_loss instead.
02-27-2019 04:24:42
02-27-2019 04:24:42
Eval-only worked in the last version; some code needs to be moved out of the train branch...<|||||>> Eval-only worked in the last version; some code needs to be moved out of the train branch... Yup, I agree. I shall use the eval loss and do it <|||||>Seems fixed in master, right? Feel free to re-open the issue if it's not the case.
transformers
325
closed
add BertTokenizer flag to skip basic tokenization
When tokenization is done before text hits this package (e.g., when tokenization is specified as part of the dataset) there exists a use case for skipping the `BasicTokenizer` step, going right to `WordpieceTokenizer`. When one still wants to use the `BertTokenizer.from_pretrained` helper function, they have been able to do this (without claiming this is necessarily the best way) by ``` text = "[CLS] `` Truly , pizza is delicious , '' said Mx. Caily." bert_tokenizer = BertTokenizer.from_pretrained('bert-base-cased') tokenized_text = bert_tokenizer.wordpiece_tokenizer.tokenize(text) ``` With this PR, we instead use ``` text = "[CLS] `` Truly , pizza is delicious , '' said Mx. Caily." bert_tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_basic_tokenize=False) tokenized_text = bert_tokenizer.tokenize(text) ``` a flag for which I add documentation in the docstring and README, hopefully making it clear that this is possible.
02-27-2019 04:20:13
02-27-2019 04:20:13
Thanks for the PR (and documenting this), I added a note<|||||>Ok this is great, thanks @john-hewitt, thanks!
transformers
324
closed
warmup_linear for BertAdam and OpenAIAdam
1. OpenAIAdam version of `warmup_linear` does not linearly increase lr, instead it looks like this: ![figure_1](https://user-images.githubusercontent.com/1732910/53427158-b26a0800-39e8-11e9-9f4e-5df1b9f64566.png) This is different from BertAdam version of `warmup_linear`. Should they not be the same (Bert version)? 2. if `t_total` is specified incorrectly (too small), learning rate becomes negative after t_total for both versions. Probably it would be better to set lr to 0 to avoid situations like [Issue#297](https://github.com/huggingface/pytorch-pretrained-BERT/issues/297). Also, with a too small `t_total`, there is a drop in lr right after `warmup` is reached: ![figure_1](https://user-images.githubusercontent.com/1732910/53427736-c5c9a300-39e9-11e9-8f16-8160ac325490.png) Let me know if I should PR a fix for both.
02-26-2019 16:14:45
02-26-2019 16:14:45
I just ran into this problem while running BERT on large samples from `run_squad.py`. I think a fix would be welcome because this is a really disturbing and hard to catch issue. It would probably be enough to move the optimizer creation + computing of `num_train_optimization_steps` inside the train loop.<|||||>Happy to welcome a PR on this indeed. I'm not super fan of silently hide a wrong `t_total` by setting `lr` to zero so maybe sending a warning `logger. warning` at the same time would be nice too.<|||||>made a PR: https://github.com/huggingface/pytorch-pretrained-BERT/pull/327<|||||>Fixed in master now, thanks @lukovnikov!
transformers
323
closed
What should be the label of sub-word units in Token Classification with Bert
Hi, I'm trying to use BERT for a token-level tagging problem such as NER in German. This is what I've done so far for input preparation: ``` from pytorch_pretrained_bert.tokenization import BertTokenizer, WordpieceTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased", do_lower_case=False) sentences= ["Bis 2013 steigen die Mittel aus dem EU-Budget auf rund 120 Millionen Euro ."] labels = [["O","O","O","O","O","O","O","B-ORGpart","O","O","O","O","B-OTH","O"]] tokens = tokenizer.tokenize(sentences[0]) ``` When I check the tokens I see that there are now 18 tokens instead of 14 (as expected) because of the sub-word units. ``` >>> tokens ['Bis', '2013', 'st', '##eig', '##en', 'die', 'Mittel', 'aus', 'dem', 'EU', '##-', '##B', '##ud', '##get', 'auf', 'rund', '120', 'Millionen', 'Euro', '.'] ``` My question is that how should I modify the labels array. Should I label each sub-word unit with the label of the original word or should I do something else ? As second question, which one of the examples in the resository can be used as an example code for this purpose ? `run_classifier.py` ? `run_squad.py `? **UPDATE** OK, according to the paper it should be handled as follows (From Section 4.3 of BERT paper): > To make this compatible with WordPiece tokenization, we feed each CoNLL-tokenized > input word into our WordPiece tokenizer and use the hidden state corresponding to the first > sub-token as input to the classifier. Where no prediction is made for X. Since > the WordPiece tokenization boundaries are a known part of the input, this is done for both > training and test. Then, for the above example , the correct input output pair is : ``` ['Bis', '2013', 'st', '##eig', '##en', 'die', 'Mittel', 'aus', 'dem', 'EU', '##-', '##B', '##ud', '##get', 'auf', 'rund', '120', 'Millionen', 'Euro', '.'] ['O', 'O', 'O', 'X', 'X', 'O', 'O', 'O', 'O', 'B-ORGpart', 'X', 'X', 'X', 'X', 'O', 'O', 'O', 'O', 'B-OTH', 'O'] ``` Then my question is evolved to " How the sub-tokens could be masked during training & testing ?"
02-26-2019 16:06:34
02-26-2019 16:06:34
I have a similar problem. I labeled the tokens as "X" and then got an error relating to NUM_LABELS. BERT appears to have thought the X was a third label, and I only specified there to be two labels.<|||||>You do not need to introduce an additional tag. This is explained here: https://github.com/huggingface/pytorch-pretrained-BERT/issues/64#issuecomment-443703063<|||||>Yes, I've left #64 open to discuss all these questions. Feel free to read the discussion there and ask questions if needed. Closing this issue.<|||||>@ereday AFAIK To answer your question "How the sub-tokens could be masked during training & testing" There is no need of masking. The sub-word token_ids (except for the first) are not fed to the BERT model. Please tell me if i am wrong.
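To make the masking point concrete, here is a small illustrative sketch (not code from this repository): keep the real label on the first sub-token of each word and give the remaining word pieces a padding id that the loss ignores, so no extra "X" tag is needed.
```python
# Illustrative sketch: align word-level NER labels with WordPiece tokens.
# The first sub-token keeps the real label; the rest get an id ignored by the loss.
from torch.nn import CrossEntropyLoss

IGNORE_ID = -100  # placeholder id, passed to CrossEntropyLoss(ignore_index=...)

def align_labels(words, labels, tokenizer, label_map):
    tokens, label_ids = [], []
    for word, label in zip(words, labels):
        pieces = tokenizer.tokenize(word)
        tokens.extend(pieces)
        label_ids.extend([label_map[label]] + [IGNORE_ID] * (len(pieces) - 1))
    return tokens, label_ids

loss_fct = CrossEntropyLoss(ignore_index=IGNORE_ID)
```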
transformers
322
closed
Single sentence corpus in run_lm_finetuning?
Hi, I am trying to pre-train using `BertForPreTraining` in `run_lm_finetuning.py`. My target corpus consists of a large number of tweets and I am unsure how the model will handle that, since they are mostly single sentences. Will it affect the IsNextSentence task? Should my .txt input file consist of one tweet per line, with each tweet separated by an empty line?
02-26-2019 12:53:33
02-26-2019 12:53:33
https://github.com/huggingface/pytorch-pretrained-BERT/issues/272 I had the same issue and but apparently this cant be done in BERT<|||||>Yes, can't be done currently. Feel free to submit a PR to extend the `run_lm_finetuning` example @vebits!
transformers
321
closed
how to load classification model and predict?
I use my output dir as `bert_model`, but the model cannot be found.
02-26-2019 12:18:20
02-26-2019 12:18:20
Should work. Without more information I can't really help you.
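For anyone landing here, a rough sketch of one way to reload a fine-tuned classifier saved as `pytorch_model.bin` and run a prediction (the path, the input sentence and `num_labels=2` are placeholders, not taken from this thread):
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification

# Rebuild the architecture, then load the fine-tuned weights saved during training.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.load_state_dict(torch.load("output_dir/pytorch_model.bin", map_location="cpu"))
model.eval()

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
tokens = ["[CLS]"] + tokenizer.tokenize("some input sentence") + ["[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    logits = model(input_ids)   # no labels passed, so the model returns the logits
print(logits.argmax(dim=-1))
```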
transformers
320
closed
what is the batch size we can use for SQUAD task?
I am running the SQuAD example. I have a Tesla M60 GPU, which has about 8GB of memory. For the bert-large-uncased model, I can only use a batch size of 2, even with --fp16. Is this normal?
02-26-2019 08:56:20
02-26-2019 08:56:20
solved
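One common workaround when memory caps the batch size (not necessarily what solved it here) is gradient accumulation, which the `run_squad.py` example exposes as `--gradient_accumulation_steps`: keep only a small micro-batch in memory per step but update the weights less often. A generic sketch with placeholder names for the model, optimizer and dataloader:
```python
# Generic gradient-accumulation sketch (model/optimizer/dataloader are placeholders).
accumulation_steps = 8                      # effective batch = micro_batch_size * accumulation_steps
optimizer.zero_grad()
for step, batch in enumerate(train_dataloader):
    loss = model(*batch)                    # assumes the model returns the loss when labels are in the batch
    (loss / accumulation_steps).backward()  # scale so the accumulated gradient matches a big batch
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```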
transformers
319
closed
run_classifier.py: TypeError: __init__() got an unexpected keyword argument 'cache_dir'
python3 run_classifier.py --task_name MRPC --do_train --do_eval --do_lower_case --data_dir $GLUE_DIR/MRPC/ --bert_model bert-base-uncased --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/mrpc_outputunexpected 02/25/2019 15:50:51 - INFO - __main__ - device: cuda n_gpu: 10, distributed training: False, 16-bits training: False 02/25/2019 15:50:51 - INFO - pytorch_pretrained_bert.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt not found in cache, downloading to /tmp/tmp30rk0ety 02/25/2019 15:50:52 - INFO - pytorch_pretrained_bert.file_utils - copying /tmp/tmp30rk0ety to cache at ..pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 02/25/2019 15:50:52 - INFO - pytorch_pretrained_bert.file_utils - creating metadata file for /tilde/vchordia/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 02/25/2019 15:50:52 - INFO - pytorch_pretrained_bert.file_utils - removing temp file /tmp/tmp30rk0ety 02/25/2019 15:50:52 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at ./.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 02/25/2019 15:50:52 - INFO - __main__ - LOOKING AT ./glue_data/MRPC/train.tsv 02/25/2019 15:51:39 - INFO - pytorch_pretrained_bert.file_utils - creating metadata file for ./.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba 02/25/2019 15:51:39 - INFO - pytorch_pretrained_bert.file_utils - removing temp file /tmp/tmpacv7p93x 02/25/2019 15:51:39 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at ./.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba 02/25/2019 15:51:39 - INFO - pytorch_pretrained_bert.modeling - extracting archive file ./.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpsrcv3o3c 02/25/2019 15:51:44 - INFO - pytorch_pretrained_bert.modeling - Model config { "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "type_vocab_size": 2, "vocab_size": 30522 } Traceback (most recent call last): File "run_classifier.py", line 637, in <module> main() File "run_classifier.py", line 468, in main num_labels = num_labels) File "./anaconda3/envs/py_deep/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 502, in from_pretrained model = cls(config, *inputs, **kwargs) TypeError: __init__() got an unexpected keyword argument 'cache_dir' My issue is that I have been trying to run this test case as suggested in the readme to try test the classifier. I am not sure why the class method is not accepting cache_dir argument.
02-26-2019 00:02:34
02-26-2019 00:02:34
Hi @VarnithChordia Were you able to fix this issue? Because I run into a similar issue. I would appreciate any guidance on this issue. Thank you. <|||||>same. bump. Latest master does not handle cache_dir or mode <|||||>@PetreanuAndi You are correct that in the latest master this issue occurs. The way I was able to fix the code was the following: in the `run_glue.py` file, change lines 137-149: ``` train_dataset = ( GlueDataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None ) eval_dataset = ( GlueDataset(data_args, tokenizer=tokenizer, mode="dev", cache_dir=model_args.cache_dir) if training_args.do_eval else None ) test_dataset = ( GlueDataset(data_args, tokenizer=tokenizer, mode="test", cache_dir=model_args.cache_dir) if training_args.do_predict else None ) ``` To: ``` train_dataset = ( GlueDataset(data_args, tokenizer=tokenizer) if training_args.do_train else None ) eval_dataset = ( GlueDataset(data_args, tokenizer=tokenizer, mode="dev") if training_args.do_eval else None ) test_dataset = ( GlueDataset(data_args, tokenizer=tokenizer, mode="test") if training_args.do_predict else None ) ``` Hope this helps. <|||||>Hi! That's weird, the `cache_dir` argument is available on the `GlueDataset`: https://github.com/huggingface/transformers/blob/f45e873910e60d89511ae0193711e71c5c710468/src/transformers/data/datasets/glue.py#L58-L75 Is it possible you haven't pulled the new changes in a while? If you get an error, could you please open a new issue with all the information relative to your environment, the command you ran and the stack trace? Thanks a lot!
transformers
318
closed
TransfoXLLMHeadModel output interpretation
TransfoXLLMHeadModel gives an output of log probabilities of shape [batch_size, sequence_length, n_tokens]. What do these probabilities represent? For example, what distribution is output at the first sequence position? Is it the conditional distribution given the first word? If so, how can the probability of a complete sentence be computed, including the first word? Also, the readme states: > softmax_output: output of the (adaptive) softmax: >if target is None: Negative log likelihood of shape [batch_size, sequence_length] This appears to be incorrect. From current behavior, it should say: if target is **not** None
02-24-2019 06:52:50
02-24-2019 06:52:50
Hi, 1/ it's the usual language modeling probabilities: each token probability given the previous tokens 2/ thanks, fixed.
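To make the scoring question concrete, a sketch of summing the per-token log probabilities into a sentence score. It assumes `model` is a `TransfoXLLMHeadModel` and `tokens_tensor` the encoded sentence, and it follows the convention that the output at position `i` is the distribution over the token at position `i+1`; the very first token is therefore not scored unless preceding context/`mems` is supplied.
```python
# Sketch: sentence log-probability from TransfoXLLMHeadModel outputs (placeholder inputs).
ret = model(tokens_tensor)                      # with target=None the first output is log probs [bsz, len, n_token]
log_probs = ret[0]
targets = tokens_tensor[:, 1:]                  # the output at position i scores the token at position i+1
preds = log_probs[:, :-1, :]                    # drop the last position (it predicts past the end of the input)
token_log_probs = preds.gather(2, targets.unsqueeze(-1)).squeeze(-1)
sentence_log_prob = token_log_probs.sum(dim=1)  # log P(w_2, ..., w_n | w_1)
```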
transformers
317
closed
anyone notice large difference of using fp16 ?
I recently noticed that using fp16 dropped the performance of BERT on my own dataset but improved it on another (it works fine on examples like MRPC). The difference is about 4%, so it is unlikely to be random noise. While looking for the reason, I noticed that the apex examples (https://github.com/NVIDIA/apex/tree/master/examples) actually keep a separate FP32 master copy of the parameters during training, whereas the examples in this repository just use fp16 for all steps (and for saving parameters). Could this be the reason?
02-23-2019 17:49:23
02-23-2019 17:49:23
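For context, a stripped-down sketch of the "master copy" pattern mentioned above (placeholder code, not this repository's exact fp16 path, and without overflow handling): the forward/backward pass runs in FP16, but the optimizer updates an FP32 copy of the weights which is then copied back.
```python
import torch

# Sketch only: model, input_ids, segment_ids, input_mask and labels are placeholders.
fp16_params = [p for p in model.parameters() if p.requires_grad]
fp32_params = [p.detach().clone().float() for p in fp16_params]   # FP32 master copy
optimizer = torch.optim.Adam(fp32_params, lr=3e-5)
loss_scale = 128.0

loss = model(input_ids, segment_ids, input_mask, labels)
(loss * loss_scale).backward()                      # scale to keep small gradients representable in FP16
for p32, p16 in zip(fp32_params, fp16_params):
    p32.grad = p16.grad.detach().float() / loss_scale
optimizer.step()
for p32, p16 in zip(fp32_params, fp16_params):
    p16.data.copy_(p32.data)                        # copy updated FP32 weights back into the FP16 model
model.zero_grad()
```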
transformers
316
closed
update documentation for gpt-2
fixes a few incorrect details in the gpt-2 documentation. one remaining thing, all of the models return an extra `presents` variable that I'm not quite sure what it is, so there's a ... in the doc. if you tell me what to put there I can put it there, or you can do it yourself.
02-22-2019 23:41:10
02-22-2019 23:41:10
Hi @joelgrus, you are right, the docstring were lagging a lot. All the information is in the `README.py`, more specifically [these sections detailing the API of the GPT2 models](https://github.com/huggingface/pytorch-pretrained-BERT#14-gpt2model) but I forgot to update the docstrings. Do you want to have a look and tell me if it's detailed enough for the docstrings also?<|||||>yes, that's great, I made those changes. (I also feel kind of dumb for not looking at the docs docs.) sorry about the trailing whitespace changes in the README.md, my editor removes those automatically.<|||||>Thanks Joel!
transformers
315
closed
run_classifier.py : TypeError: join() argument must be str or bytes, not 'PosixPath'
when trying the MRPC example : python3.5 run_classifier.py \ --task_name MRPC \ --do_train \ --do_eval \ --do_lower_case \ --data_dir $GLUE_DIR/MRPC/ \ --bert_model bert-base-uncased \ --max_seq_length 128 \ --train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir /tmp/mrpc_output Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex. 02/22/2019 21:29:48 - INFO - __main__ - device: cuda n_gpu: 1, distributed training: False, 16-bits training: False 02/22/2019 21:29:48 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /home/ubuntu/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 02/22/2019 21:29:48 - INFO - __main__ - LOOKING AT /home/ubuntu/glue_data/MRPC/train.tsv Traceback (most recent call last): File "pytorch-pretrained-BERT/examples/run_classifier.py", line 637, in <module> main() File "pytorch-pretrained-BERT/examples/run_classifier.py", line 465, in main cache_dir = args.cache_dir if args.cache_dir else os.path.join(PYTORCH_PRETRAINED_BERT_CACHE, 'distributed_{}'.format(args.local_rank)) File "/usr/lib/python3.5/posixpath.py", line 89, in join genericpath._check_arg_types('join', a, *p) File "/usr/lib/python3.5/genericpath.py", line 143, in _check_arg_types (funcname, s.__class__.__name__)) from None TypeError: join() argument must be str or bytes, not 'PosixPath' ubuntu 16 python 3.5 torch 1.0.1 Collecting torch>=0.4.1 (from pytorch-pretrained-bert) Downloading https://files.pythonhosted.org/packages/59/d2/4e806f73b4b72daab9064c99394fc22ea6ef1fb052154546405057cd192d/torch-1.0.1.post2-cp35-cp35m-manylinux1_x86_64.whl (582.5MB)
02-22-2019 21:40:50
02-22-2019 21:40:50
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/file_utils.py#L30 `PYTORCH_PRETRAINED_BERT_CACHE = Path(os.getenv('PYTORCH_PRETRAINED_BERT_CACHE', Path.home() / '.pytorch_pretrained_bert'))` --> `PYTORCH_PRETRAINED_BERT_CACHE = str(Path(os.getenv('PYTORCH_PRETRAINED_BERT_CACHE', Path.home() / '.pytorch_pretrained_bert')))` would solve this, I don't know if there are any side effects. Maybe, a test should be added here?<|||||>The above does not resolve the err<|||||>Let's rather keep the library's internal using `Path` and fix the examples by adding `str` there instead. Fixed on master now.
transformers
314
closed
Issue with apex import on MAC
Python 3.7 MacOS High Sierra 10.13.6 ``` Traceback (most recent call last): File "examples/classifier.py", line 1, in <module> from pytorch_pretrained_bert.tokenization import BertTokenizer, WordpieceTokenizer File "/Users/Bhoomit/work/robin/nlp/pytorch-pretrained-BERT/env/lib/python3.7/site-packages/pytorch_pretrained_bert/__init__.py", line 7, in <module> from .modeling import (BertConfig, BertModel, BertForPreTraining, File "/Users/Bhoomit/work/robin/nlp/pytorch-pretrained-BERT/env/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 218, in <module> from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 668, in _load_unlocked File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible File "/Users/Bhoomit/work/robin/nlp/pytorch-pretrained-BERT/env/lib/python3.7/site-packages/apex-0.1-py3.7.egg/apex/__init__.py", line 12, in <module> File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 668, in _load_unlocked File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible File "/Users/Bhoomit/work/robin/nlp/pytorch-pretrained-BERT/env/lib/python3.7/site-packages/apex-0.1-py3.7.egg/apex/optimizers/__init__.py", line 2, in <module> File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 668, in _load_unlocked File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible File "/Users/Bhoomit/work/robin/nlp/pytorch-pretrained-BERT/env/lib/python3.7/site-packages/apex-0.1-py3.7.egg/apex/optimizers/fp16_optimizer.py", line 8, in <module> File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ctypes/__init__.py", line 369, in __getattr__ func = self.__getitem__(name) File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ctypes/__init__.py", line 374, in __getitem__ func = self._FuncPtr((name_or_ordinal, self)) AttributeError: dlsym(RTLD_DEFAULT, THCudaHalfTensor_normall): symbol not found ```
02-22-2019 07:13:33
02-22-2019 07:13:33
Unfortunately, apex (and fp16 in general) only work on GPU. So you can't use it on MacOS :/
transformers
313
closed
run_lm_finetuning
When I run run_lm_finetuning with the example training corpus (small_wiki_sentence_corpus.txt) and print the tr_loss every 20 steps, I find that the tr_loss increases very fast. I wonder what the reason is. ![image](https://user-images.githubusercontent.com/40857896/53220384-35b0f400-369f-11e9-8902-27f97a1adacc.png) ![image](https://user-images.githubusercontent.com/40857896/53220402-4c574b00-369f-11e9-899b-c7962f28cde0.png)
02-22-2019 04:42:39
02-22-2019 04:42:39
What tr_loss are you exactly printing here? Is it possible that you just print this one here? https://github.com/huggingface/pytorch-pretrained-BERT/blob/2152bfeae82439600dc5b5deab057a3c4331c62d/examples/run_lm_finetuning.py#L600 If yes, you should divide it by the number of training steps (nb_tr_steps) first to get your average train loss. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
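In other words (a two-line illustration of the comment above):
```python
# tr_loss accumulates the loss of every batch, so divide by the step count to monitor training.
avg_train_loss = tr_loss / nb_tr_steps
print("average training loss so far: {:.4f}".format(avg_train_loss))
```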
transformers
312
closed
Problems converting TF BioBERT model to PyTorch
My goal is to convert the [BioBERT pretrained checkpoints](https://github.com/naver/biobert-pretrained) to PyTorch and train on the [SQuAD v2.0 Dataset](https://rajpurkar.github.io/SQuAD-explorer/). I have (seemingly) successfully converted the checkpoint using the `./pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py` [script](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py). I loaded the converted checkpoint into the `run_squad.py` [example](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py). I also changed the tokenizer to use the vocab file shipped with the BioBERT model. At this point, I was able to train the model and observe the loss decrease. My first issue appeared when trying to write the SQuAD predictions: `best_non_null_entry.start_logit` failed because `best_non_null_entry` was `NoneType`. This error resembles the previous issue #207. I implemented the solution found there and my code was able to run. However, my training results have been the same as or worse than a random model: nearly all of the SQuAD predictions are the "empty" string text from the fix of #207. **I believe the original cause of the `NoneType` error for `best_non_null_entry` is the reason for the failure to predict anything.** Are there any requirements to follow when converting a TF pretrained BERT model? What would cause the `NoneType` error for `best_non_null_entry`? Any and all help is appreciated.
02-22-2019 01:47:02
02-22-2019 01:47:02
I have solved my issue. All my code was correctly written. The error was an corrupted/improperly saved model.bin file.<|||||>I'm trying to convert BioBert to Pytorch also, so just wondering if you could share a bit more details on how you are doing the conversion. Thanks!<|||||>First, I downloaded the BioBERT TF checkpoints [here](https://github.com/naver/biobert-pretrained). Each model (i.e. biobert_pmc) should have three `.ckpt` files, a `vocab.txt` file, and a `bert_config.json` file. Initially, I tried to use the command line interface `pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch` using the `bert_config.json` and `.ckpt` files seen above. I ran into `AttributeError: 'Parameter' object has no attribute 'BERTAdam'`. I followed the solution [here](https://github.com/dmis-lab/biobert/issues/2). To do this, I copied the `convert_tf_checkpoint_to_pytorch.py` file and the `load_tf_weights_in_bert` function found in `modeling.py`. I then added the two lines seen in the [solution above](https://github.com/dmis-lab/biobert/issues/2) in my own version of the function and file. Given correct file paths, this worked to convert all three BioBERT checkpoints into pytorch `.bin` files.<|||||>@jwhite2a Thank you! This worked for me.
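For readers following the same path, a sketch of the conversion step if the script's function is imported directly (requires TensorFlow installed; the paths are placeholders for wherever the BioBERT archive was unpacked, and the arguments follow the script's CLI flags: TF checkpoint prefix, `bert_config.json`, output `.bin` file):
```python
# Sketch: convert a TF BERT checkpoint (e.g. BioBERT) into a PyTorch pytorch_model.bin.
from pytorch_pretrained_bert.convert_tf_checkpoint_to_pytorch import convert_tf_checkpoint_to_pytorch

convert_tf_checkpoint_to_pytorch(
    "biobert_pmc/biobert_model.ckpt",   # prefix of the .ckpt files
    "biobert_pmc/bert_config.json",
    "biobert_pmc/pytorch_model.bin",
)
```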
transformers
311
closed
Shouldn't GPT2 use Linear instead of Conv1D?
Conv1D seems to be inherited from GPT but does not appear to serve any special purpose in GPT-2 (BERT uses Linear). Should GPT-2's model be moved to using Linear (which is obviously easier to grasp)?
02-21-2019 11:09:22
02-21-2019 11:09:22
Maybe this would break pre-trained weights loading? Interested to understand if that's the only reason?<|||||>Possibility, feel free to test the modification and submit a PR @spolu!<|||||>Hi guys, I also wondered whether anyone modified the gpt2 model to have nn.Linear instead of Conv1D layers (using the pre-trained weights). Did any of you success or found such implementation?
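For anyone experimenting with this, a sketch of what the swap would involve (assuming, as in this repo's GPT/GPT-2 code, that `Conv1D` stores its weight with shape `[in_features, out_features]`, i.e. transposed relative to `nn.Linear`):
```python
import torch.nn as nn

def conv1d_to_linear(conv1d):
    """Build an nn.Linear that behaves like a pretrained Conv1D module."""
    in_features, out_features = conv1d.weight.shape
    linear = nn.Linear(in_features, out_features)
    linear.weight.data = conv1d.weight.data.t().contiguous()  # transpose [in, out] -> [out, in]
    linear.bias.data = conv1d.bias.data.clone()
    return linear
```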
transformers
310
closed
Few small nits in GPT-2's README code examples
(unless these were on purpose as a responsible disclosure mechanism :p)
02-21-2019 09:16:38
02-21-2019 09:16:38
They were just basic typos :) Thanks Stanislas
transformers
309
closed
Tests error: Issue with python3 compatibility, on zope interface implementation
Hi, I came across the following error after run **python -m pytest tests/modeling_test.py** ________________________________________________________________________________ ERROR collecting tests/modeling_test.py __________________________________________________________________________________ modeling_test.py:25: in <module> from pytorch_pretrained_bert import (BertConfig, BertModel, BertForMaskedLM, /usr/local/lib/python3.6/site-packages/pytorch_pretrained_bert/__init__.py:7: in <module> from .modeling import (BertConfig, BertModel, BertForPreTraining, /usr/local/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py:218: in <module> from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm /usr/local/lib/python3.6/site-packages/apex/__init__.py:18: in <module> from apex.interfaces import (ApexImplementation, /usr/local/lib/python3.6/site-packages/apex/interfaces.py:10: in <module> class ApexImplementation(object): /usr/local/lib/python3.6/site-packages/apex/interfaces.py:14: in ApexImplementation implements(IApex) /usr/local/lib/python3.6/site-packages/zope/interface/declarations.py:483: in implements raise TypeError(_ADVICE_ERROR % 'implementer') E TypeError: Class advice impossible in Python3. Use the @implementer class decorator instead. **My configurations are as follows: python version 3.6.4 CUDA Version 8.0.61 torch==1.0.1.post2 apex==0.9.10.dev0 zope.interface==4.6.0** Thanks
02-21-2019 08:41:35
02-21-2019 08:41:35
any solution here?<|||||>This looks like an incompatibility between apex and zope. Have you tried without installing apex?<|||||>> This looks like an incompatibility between apex and zope. > Have you tried without installing apex? I uninstalled apex, it works now! Thank you so much!!!!
transformers
308
closed
It seems the eval speed of transformer-xl is not faster than bert-base-uncased.
I run `run_classifier.py` with `bert-base-uncased` and `max_seq_length=128` on the MRPC task. The log: ``` Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex. 02/21/2019 12:11:44 - INFO - __main__ - device: cpu n_gpu: 1, distributed training: False, 16-bits training: False 02/21/2019 12:11:45 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /home/tong.guo/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 02/21/2019 12:11:45 - INFO - pytorch_pretrained_bert.modeling - loading archive file ../model_file/bert-base-uncased.tar.gz 02/21/2019 12:11:45 - INFO - pytorch_pretrained_bert.modeling - extracting archive file ../model_file/bert-base-uncased.tar.gz to temp dir /tmp/tmpaho9_3dk 02/21/2019 12:11:50 - INFO - pytorch_pretrained_bert.modeling - Model config { "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "type_vocab_size": 2, "vocab_size": 30522 } 02/21/2019 12:11:55 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias'] 02/21/2019 12:11:55 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] 02/21/2019 12:11:55 - INFO - pytorch_pretrained_bert.modeling - loading archive file ../model_file/bert-base-uncased.tar.gz 02/21/2019 12:11:55 - INFO - pytorch_pretrained_bert.modeling - extracting archive file ../model_file/bert-base-uncased.tar.gz to temp dir /tmp/tmpfehb71wu 02/21/2019 12:11:59 - INFO - pytorch_pretrained_bert.modeling - Model config { "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "type_vocab_size": 2, "vocab_size": 30522 } 02/21/2019 12:12:03 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias'] 02/21/2019 12:12:03 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] 02/21/2019 12:12:03 - INFO - __main__ - *** Example *** 02/21/2019 12:12:03 - INFO - __main__ - guid: dev-1 02/21/2019 12:12:03 - INFO - __main__ - tokens: [CLS] [UNK] ' s chief operating officer , [UNK] [UNK] , and [UNK] [UNK] , the chief financial officer , will report directly to [UNK] [UNK] . 
[SEP] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] and [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] will report to [UNK] . [SEP] 02/21/2019 12:12:03 - INFO - __main__ - input_ids: 101 100 1005 1055 2708 4082 2961 1010 100 100 1010 1998 100 100 1010 1996 2708 3361 2961 1010 2097 3189 3495 2000 100 100 1012 102 100 100 100 100 100 100 1998 100 100 100 100 100 100 2097 3189 2000 100 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - label: 1 (id = 1) 02/21/2019 12:12:03 - INFO - __main__ - *** Example *** 02/21/2019 12:12:03 - INFO - __main__ - guid: dev-2 02/21/2019 12:12:03 - INFO - __main__ - tokens: [CLS] [UNK] world ' s two largest auto ##makers said their [UNK] . [UNK] . sales declined more than predicted last month as a late summer sales frenzy caused more of an industry backlash than expected . [SEP] [UNK] sales at both [UNK] and [UNK] . 2 [UNK] [UNK] [UNK] . declined more than predicted as a late summer sales frenzy prompted a larger - than - expected industry backlash . [SEP] 02/21/2019 12:12:03 - INFO - __main__ - input_ids: 101 100 2088 1005 1055 2048 2922 8285 12088 2056 2037 100 1012 100 1012 4341 6430 2062 2084 10173 2197 3204 2004 1037 2397 2621 4341 21517 3303 2062 1997 2019 3068 25748 2084 3517 1012 102 100 4341 2012 2119 100 1998 100 1012 1016 100 100 100 1012 6430 2062 2084 10173 2004 1037 2397 2621 4341 21517 9469 1037 3469 1011 2084 1011 3517 3068 25748 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - label: 1 (id = 1) 02/21/2019 12:12:03 - INFO - __main__ - *** Example *** 02/21/2019 12:12:03 - INFO - __main__ - guid: dev-3 02/21/2019 12:12:03 - INFO - __main__ - tokens: [CLS] [UNK] to the federal [UNK] for [UNK] [UNK] and [UNK] ( news - web sites ) , there were 19 reported cases of me ##as ##les in the [UNK] [UNK] in 2002 . [SEP] [UNK] [UNK] for [UNK] [UNK] and [UNK] said there were 19 reported cases of me ##as ##les in the [UNK] [UNK] in 2002 . 
[SEP] 02/21/2019 12:12:03 - INFO - __main__ - input_ids: 101 100 2000 1996 2976 100 2005 100 100 1998 100 1006 2739 1011 4773 4573 1007 1010 2045 2020 2539 2988 3572 1997 2033 3022 4244 1999 1996 100 100 1999 2526 1012 102 100 100 2005 100 100 1998 100 2056 2045 2020 2539 2988 3572 1997 2033 3022 4244 1999 1996 100 100 1999 2526 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - label: 1 (id = 1) 02/21/2019 12:12:03 - INFO - __main__ - *** Example *** 02/21/2019 12:12:03 - INFO - __main__ - guid: dev-4 02/21/2019 12:12:03 - INFO - __main__ - tokens: [CLS] [UNK] tropical storm rapidly developed in the [UNK] of [UNK] [UNK] and was expected to hit somewhere along the [UNK] or [UNK] coasts by [UNK] night . [SEP] [UNK] tropical storm rapidly developed in the [UNK] of [UNK] on [UNK] and could have hurricane - force winds when it hits land somewhere along the [UNK] coast [UNK] night . [SEP] 02/21/2019 12:12:03 - INFO - __main__ - input_ids: 101 100 5133 4040 5901 2764 1999 1996 100 1997 100 100 1998 2001 3517 2000 2718 4873 2247 1996 100 2030 100 20266 2011 100 2305 1012 102 100 5133 4040 5901 2764 1999 1996 100 1997 100 2006 100 1998 2071 2031 7064 1011 2486 7266 2043 2009 4978 2455 4873 2247 1996 100 3023 100 2305 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - label: 0 (id = 0) 02/21/2019 12:12:03 - INFO - __main__ - *** Example *** 02/21/2019 12:12:03 - INFO - __main__ - guid: dev-5 02/21/2019 12:12:03 - INFO - __main__ - tokens: [CLS] [UNK] company didn ' t detail the costs of the replacement and repairs . [SEP] [UNK] company officials expect the costs of the replacement work to run into the millions of dollars . 
[SEP] 02/21/2019 12:12:03 - INFO - __main__ - input_ids: 101 100 2194 2134 1005 1056 6987 1996 5366 1997 1996 6110 1998 10315 1012 102 100 2194 4584 5987 1996 5366 1997 1996 6110 2147 2000 2448 2046 1996 8817 1997 6363 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/21/2019 12:12:03 - INFO - __main__ - label: 0 (id = 0) 02/21/2019 12:12:04 - INFO - __main__ - ***** Running evaluation ***** 02/21/2019 12:12:04 - INFO - __main__ - Num examples = 1725 02/21/2019 12:12:04 - INFO - __main__ - Batch size = 8 Evaluating: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 216/216 [06:06<00:00, 1.70s/it] 02/21/2019 12:18:11 - INFO - __main__ - ***** Eval results ***** 02/21/2019 12:18:11 - INFO - __main__ - eval_accuracy = 0.33507246376811595 02/21/2019 12:18:11 - INFO - __main__ - eval_loss = 1.002936492777533 02/21/2019 12:18:11 - INFO - __main__ - global_step = 0 02/21/2019 12:18:11 - INFO - __main__ - loss = Non ``` The speed is about 1.7 s/batch ------------------ I run `run_transfo_xl.py` on the `wikitext-103` task. The log: ``` Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex. 
02/20/2019 19:49:30 - INFO - __main__ - device: cuda 02/20/2019 19:49:30 - INFO - pytorch_pretrained_bert.tokenization_transfo_xl - loading vocabulary file ../model_file/transfo-xl-wt103-vocab.bin 02/20/2019 19:49:30 - INFO - pytorch_pretrained_bert.tokenization_transfo_xl - loading corpus file ../model_file/transfo-xl-wt103-corpus.bin 02/20/2019 19:49:36 - INFO - pytorch_pretrained_bert.modeling_transfo_xl - loading weights file ../model_file/transfo-xl-wt103-pytorch_model.bin 02/20/2019 19:49:36 - INFO - pytorch_pretrained_bert.modeling_transfo_xl - loading configuration file ../model_file/transfo-xl-wt103-config.json 02/20/2019 19:49:36 - INFO - pytorch_pretrained_bert.modeling_transfo_xl - Model config { "adaptive": true, "attn_type": 0, "clamp_len": 1000, "cutoffs": [ 20000, 40000, 200000 ], "d_embed": 1024, "d_head": 64, "d_inner": 4096, "d_model": 1024, "div_val": 4, "dropatt": 0.0, "dropout": 0.1, "ext_len": 0, "init": "normal", "init_range": 0.01, "init_std": 0.02, "mem_len": 1600, "n_head": 16, "n_layer": 18, "n_token": 267735, "pre_lnorm": false, "proj_init_std": 0.01, "same_length": true, "sample_softmax": -1, "tgt_len": 128, "tie_projs": [ false, true, true, true ], "tie_weight": true, "untie_r": true } 02/20/2019 19:49:51 - INFO - __main__ - Evaluating with bsz 10 tgt_len 128 ext_len 0 mem_len 1600 clamp_len 1000 02/20/2019 19:57:35 - INFO - __main__ - Time : 464.00s, 2416.66ms/segment 02/20/2019 19:57:35 - INFO - __main__ - ==================================================================================================== 02/20/2019 19:57:35 - INFO - __main__ - | test loss 2.90 | test ppl 18.213 02/20/2019 19:57:35 - INFO - __main__ - ==================================================================================================== ``` The speed is about 2.4 s/batch
02-21-2019 04:22:02
02-21-2019 04:22:02
transformers
307
closed
Update README.md
02-21-2019 03:25:41
02-21-2019 03:25:41
πŸ‘
transformers
306
closed
Issue happens while using convert_tf_checkpoint_to_pytorch
Hi, We are using your brilliant project for working on the Japanese BERT model with Sentence Piece. https://github.com/yoheikikuta/bert-japanese We are trying to use the conversion script to convert the TF BERT model below to PyTorch. https://drive.google.com/drive/folders/1Zsm9DD40lrUVu6iAnIuTH2ODIkh-WM-O But we see error logs: Traceback (most recent call last): File "/Users/weicheng.zhu/PycharmProjects/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 66, in <module> args.pytorch_dump_path) File "/Users/weicheng.zhu/PycharmProjects/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 37, in convert_tf_checkpoint_to_pytorch load_tf_weights_in_bert(model, tf_checkpoint_path) File "/Users/weicheng.zhu/PycharmProjects/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py", line 95, in load_tf_weights_in_bert pointer = getattr(pointer, l[0]) File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 535, in __getattr__ type(self).__name__, name)) AttributeError: 'BertForPreTraining' object has no attribute 'global_step' Could you kindly help with how we can avoid this? Thank you so much!
02-21-2019 02:33:50
02-21-2019 02:33:50
I resolved this issue by adding the global_step to the skipping list. I think global_step is not required for using pretrained model. Please correct me if I am wrong.<|||||>Is Pytorch requires a TF check point converted? am finding hard to load the checkpoint I generated.BTW is it safe to convert TF checkpoint ?<|||||>> I resolved this issue by adding the global_step to the skipping list. I think global_step is not required for using pretrained model. Please correct me if I am wrong. can you explain me what is skipping list?<|||||>In the file `modeling.py` add it to the list at: `if any(n in ["adam_v", "adam_m"] for n in name):`<|||||>Is it possible to load Tensorflow checkpoint using pytorch and do fine tunning? I can load pytorch_model.bin and finding hard to load my TF checkpoint.Documentation says it can load a archive with bert_config.json and model.chkpt but I have bert_model_ckpt.data-0000-of-00001 in my TF checkpoint folder so am confused. Is there specific example how to do this? ![image](https://user-images.githubusercontent.com/47925301/54554561-1437e500-498b-11e9-9d44-3cf0f01da248.png) ![image](https://user-images.githubusercontent.com/47925301/54554425-c622e180-498a-11e9-852e-545587f78fbb.png) <|||||>There is a conversion script to convert a tf checkpoint to pytorch: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py<|||||>> In the file `modeling.py` add it to the list at: > `if any(n in ["adam_v", "adam_m"] for n in name):` added global_step in skipping list but still getting same issue. ![image](https://user-images.githubusercontent.com/47925301/55280616-759e7300-52fe-11e9-8f7a-f52ac9dc4b7f.png) <|||||>@naga-dsalgo Is it fixed? I too added "global_step" to the list. But still get the error<|||||>Yes it is fixed for me ... I edited installed version not the downloaded git version .. On Tue, Apr 2, 2019 at 4:37 AM Shivam Akhauri <notifications@github.com> wrote: > @naga-dsalgo <https://github.com/naga-dsalgo> Is it fixed? I too added > "global_step" to the list. But still get the error > > β€” > You are receiving this because you were mentioned. > > > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/pytorch-pretrained-BERT/issues/306#issuecomment-478899861>, > or mute the thread > <https://github.com/notifications/unsubscribe-auth/AttINdN0Pj9IU0kcwNg_BtrnZdwF6Qjwks5vcxbZgaJpZM4bGhdh> > . > <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
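For clarity, the change described above amounts to adding the bookkeeping variable to the skip list inside `load_tf_weights_in_bert` in `modeling.py` (sketch of the modified condition only):
```python
# Skip TF bookkeeping variables such as global_step in addition to the Adam slots;
# they are not needed to use the pretrained model.
if any(n in ["adam_v", "adam_m", "global_step"] for n in name):
    continue
```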
transformers
305
closed
Update run_openai_gpt.py
Adding an example invocation to the top of `run_openai_gpt.py` so that it's easy to find. Previously, the header said that running the script with default values works, but actually you need to set some paths.
02-20-2019 18:59:49
02-20-2019 18:59:49
πŸ‘
transformers
304
closed
Can I do a code reference in implementing my code?
@thomwolf I am trying to implement a simple GPT-2 in PyTorch and I am having trouble transferring the TensorFlow checkpoint to PyTorch :( https://github.com/graykode/gpt-2-Pytorch Could I reference this code when implementing my own? I'll cite the reference in my code!! Thanks for the awesome sharing!
02-20-2019 18:24:41
02-20-2019 18:24:41
Hi @graykode, What do you mean by "code reference"?<|||||>@thomwolf Hello thomwolf! It mean that I apply your code about `GPT-2 model and transferring tensorflow checkpoint to pytorch` in my project! I show the origin of the information in my project code comment when I refer to your code. Thanks<|||||>Oh yes, no problem. Just reference the origin of the work and the licences (inherited from the relevant authors and code I started from)<|||||>@thomwolf Sure, What license i follow? I already known original [openAi/gpt-2](https://github.com/openai/gpt-2) is MIT license, but pytorch-pretrained-BERT is Apache 2.0! I want to just use gpt-2 model and model transferring code!!
transformers
303
closed
Example Code in README fails.
There is an assertion error in the example code in the README. The text that is input to the model `"[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"` is expected to be tokenized and masked like so `['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]']`. However, the `[CLS]` and `[SEP]` tokens are split up, and in the case of `[CLS]` it is broken into word pieces. The actual tokenized result looks like this: `['[', 'cl', '##s', ']', 'who', 'was', 'jim', 'henson', '[MASK]', '[', 'sep', ']', 'jim', 'henson', 'was', 'a', 'puppet', '##eer', '[', 'sep', ']']`
02-20-2019 14:57:26
02-20-2019 14:57:26
This could be related to #266 - are you using the latest version of `pytorch-pretrained-BERT`?<|||||>No, I was on 0.4, I upgraded to 0.6.1 and it worked.
transformers
302
closed
typo
typo in annotation
02-20-2019 13:13:12
02-20-2019 13:13:12
πŸ‘
transformers
301
closed
`train_dataset` and `eval_dataset` in run_openai_gpt.py
Running `examples/run_openai_gpt.py` w/ the default arguments throws an error: ``` $ python run_openai_gpt.py --output_dir tmp --do_eval Traceback (most recent call last): File "run_openai_gpt.py", line 259, in <module> main() File "run_openai_gpt.py", line 153, in main train_dataset = load_rocstories_dataset(args.train_dataset) File "run_openai_gpt.py", line 49, in load_rocstories_dataset with open(dataset_path, encoding='utf_8') as f: FileNotFoundError: [Errno 2] No such file or directory: '' ``` Looking at the code, it looks like `train_dataset` and `eval_dataset` need to be explicitly set. Any suggestions on what they should be set to? Thanks! cc @thomwolf
02-20-2019 03:03:52
02-20-2019 03:03:52
Hi Ben, Please read the [relevant example section in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#fine-tuning-openai-gpt-on-the-rocstories-dataset).<|||||>@thomwolf I see that the data is downloaded and cached in case of not providing the `train_dataset` and `eval_dataset` parameters: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_openai_gpt.py#L152-L153, but it fails here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_openai_gpt.py#L162. So either we make the parameters required or do some additional steps which requires untar of the downloaded data and pointing towards that data. Let me know what you think and I can send a PR.
transformers
300
closed
bert.pooler.dense initialization
Hey! Sorry if this is redundant. I saw 3 other issues asking similar questions but couldn't find these exact layers mentioned. Are bert.pooler.dense.weight & bert.pooler.dense.bias randomly initialized? Thank you so much!
02-20-2019 02:56:09
02-20-2019 02:56:09
Hi Jasdeep, No, they are initialized from Google's pretrained model (they are trained for next sentence prediction task during pretraining).
transformers
299
closed
Tests failure
Steps to reproduce: 1. Clone the repo. 2. Set up a plain virtual environment `venv` for the repo with Python 3.6. 3. Run `pip install .` (using the `[--editable]` didn't work and there was some error, so I just removed it) 4. Run `pip install spacy ftfy==4.4.3` and `python -m spacy download en` -- SUCCESSFUL. 5. Run `pip install pytest` -- SUCCESSFUL. 6. Run `python -m pytest -sv tests/` -- FAILURE with error below. ``` tests/modeling_gpt2_test.py::GPT2ModelTest::test_config_to_json_string PASSED tests/modeling_gpt2_test.py::GPT2ModelTest::test_default dyld: lazy symbol binding failed: Symbol not found: _PySlice_Unpack Referenced from: /Users/XYZ/PycharmProjects/pytorch-pretrained-BERT/venv/lib/python3.6/site-packages/torch/lib/libtorch_python.dylib Expected in: flat namespace dyld: Symbol not found: _PySlice_Unpack Referenced from: /Users/XYZ/PycharmProjects/pytorch-pretrained-BERT/venv/lib/python3.6/site-packages/torch/lib/libtorch_python.dylib Expected in: flat namespace Abort trap: 6 ``` In case it helps, these are the packages in my `venv` after I perform the first 5 steps. ``` atomicwrites | 1.3.0 | 1.3.0 attrs | 18.2.0 | 18.2.0 boto3 | 1.9.98 | 1.9.98 botocore | 1.12.98 | 1.12.98 certifi | 2018.11.29 | 2018.11.29 chardet | 3.0.4 | 3.0.4 cymem | 2.0.2 | 2.0.2 cytoolz | 0.9.0.1 | 0.9.0.1 dill | 0.2.9 | 0.2.9 docutils | 0.14 | 0.14 en-core-web-sm | 2.0.0 | Β  ftfy | 4.4.3 | 5.5.1 html5lib | 1.0.1 | 1.0.1 idna | 2.8 | 2.8 jmespath | 0.9.3 | 0.9.3 more-itertools | 6.0.0 | 6.0.0 msgpack | 0.5.6 | 0.6.1 msgpack-numpy | 0.4.3.2 | 0.4.4.2 murmurhash | 1.0.2 | 1.0.2 numpy | 1.16.1 | 1.16.1 pip | 19.0.2 | 19.0.2 plac | 0.9.6 | 1.0.0 pluggy | 0.8.1 | 0.8.1 preshed | 2.0.1 | 2.0.1 py | 1.7.0 | 1.7.0 pytest | 4.3.0 | 4.3.0 python-dateutil | 2.8.0 | 2.8.0 pytorch-pretrained-bert | 0.6.1 | 0.6.1 regex | 2018.1.10 | 2019.02.20 requests | 2.21.0 | 2.21.0 s3transfer | 0.2.0 | 0.2.0 setuptools | 39.1.0 | 40.8.0 six | 1.12.0 | 1.12.0 spacy | 2.0.18 | 2.0.18 thinc | 6.12.1 | 7.0.1 toolz | 0.9.0 | 0.9.0 torch | 1.0.1.post2 | 1.0.1.post2 tqdm | 4.31.1 | 4.31.1 ujson | 1.35 | 1.35 urllib3 | 1.24.1 | 1.24.1 wcwidth | 0.1.7 | 0.1.7 webencodings | 0.5.1 | 0.5.1 wrapt | 1.10.11 | 1.11.1 ```
02-20-2019 01:11:52
02-20-2019 01:11:52
Update: I manually uninstalled PyTorch (`torch | 1.0.1.post2 | 1.0.1.post2` in the list above) from my venv and installed PyTorch 0.4.0 (I couldn't find the whl file for 0.4.1 for Mac OS, which is my platform) by running `pip install https://download.pytorch.org/whl/torch-0.4.0-cp36-cp36m-macosx_10_7_x86_64.whl` I then ran the tests and I saw that 25 tests passed. So I have just a couple questions at this point: 1. Given that the requirements.txt file states that the expected torch version is >= 0.4.1, and the above observation that 25 tests passed with torch 0.4.0, would all the code in this repo work with 0.4.0? 2. It is clear that the error described above occurs when using the latest version of torch, specifically torch 1.0.1.post2, but not with 0.4.0. Could you please make the necessary changes to your repo to support 1.0.1.post2? Thanks!<|||||>Update: Was able to find the whl for 0.4.1, so I uninstalled 0.4.0 from my venv and installed 0.4.1. Ran the tests again and 25 tests passed. So question 1 above is irrelevant now. Could I have an answer for question 2? <|||||>Hi Karthik, On my fresh install with pytorch 1.0.1.post2 (python 3.7) all 25 tests pass without error (also on the continuous integration by the way). Maybe try to create a clean environment? You don't have to install pytorch prior to pytorch-pretrained-bert, it's a dependency.<|||||>Hi Thomas, I did start off with a clean virtual environment and I didn't install PyTorch prior to pytorch-pretrained-bert because I saw it's a dependency. The only difference I see between what you've described above and what I did is the version of Python: you used 3.7 while I used 3.6. Maybe that has something to do with this? Could you try with Python 3.6?<|||||>Tested on a clean python 3.6 install and all the tests pass. Honestly there is not much more I can do at this stage. Closing for now. Feel free to re-open if you find something.
transformers
298
closed
Transformer-XL: Convert lm1b model to PyTorch
Hi, I wanted to convert the TensorFlow checkpoint for the ` lm1b` model to PyTorch with the `convert_transfo_xl_checkpoint_to_pytorch.py` script. I downloaded the checkpoint with the [download.sh](https://github.com/kimiyoung/transformer-xl/blob/master/tf/sota/download.sh) script. Then I called the convert script with: ```bash $ python3 convert_transfo_xl_checkpoint_to_pytorch.py --pytorch_dump_folder_path converted --tf_checkpoint_path /mnt/transformer-xl/tf/sota/pretrained_xl/tf_lm1b/model/checkpoint ``` Then the following error message is returned: ```bash 2019-02-19 22:46:54.693060: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open /mnt/transformer-xl/tf/sota/pretrained_xl/tf_lm1b/model/checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator? Traceback (most recent call last): File "convert_transfo_xl_checkpoint_to_pytorch.py", line 116, in <module> args.transfo_xl_dataset_file) File "convert_transfo_xl_checkpoint_to_pytorch.py", line 81, in convert_transfo_xl_checkpoint_to_pytorch model = load_tf_weights_in_transfo_xl(model, config, tf_path) File "/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/modeling_transfo_xl.py", line 141, in load_tf_weights_in_transfo_xl init_vars = tf.train.list_variables(tf_path) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/checkpoint_utils.py", line 95, in list_variables reader = load_checkpoint(ckpt_dir_or_file) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/checkpoint_utils.py", line 64, in load_checkpoint return pywrap_tensorflow.NewCheckpointReader(filename) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 382, in NewCheckpointReader return CheckpointReader(compat.as_bytes(filepattern), status) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/errors_impl.py", line 548, in __exit__ c_api.TF_GetCode(self.status.status)) tensorflow.python.framework.errors_impl.DataLossError: Unable to open table file /mnt/transformer-xl/tf/sota/pretrained_xl/tf_lm1b/model/checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator? ``` I'm using the *0.6.1* version of `pytorch-pretrained-BERT` and the latest `tf-nightly-gpu` package that ships TensorFlow 1.13dev.
02-19-2019 22:49:57
02-19-2019 22:49:57
Hm, I guess I was using the wrong `checkpoint` file? When I used `/mnt/transformer-xl/tf/sota/pretrained_xl/tf_lm1b/model/model.ckpt-1191000` weights are loaded, but another error occurs: ```bash Loading TF weight transformer/r_r_bias/Adam_1 with shape [24, 16, 80] Loading TF weight transformer/r_w_bias with shape [24, 16, 80] Loading TF weight transformer/r_w_bias/Adam with shape [24, 16, 80] Loading TF weight transformer/r_w_bias/Adam_1 with shape [24, 16, 80] Traceback (most recent call last): File "convert_transfo_xl_checkpoint_to_pytorch.py", line 116, in <module> args.transfo_xl_dataset_file) File "convert_transfo_xl_checkpoint_to_pytorch.py", line 81, in convert_transfo_xl_checkpoint_to_pytorch model = load_tf_weights_in_transfo_xl(model, config, tf_path) File "/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/modeling_transfo_xl.py", line 169, in load_tf_weights_in_transfo_xl assert pointer.shape == array.shape AssertionError: (torch.Size([3, 1024]), (3, 1280)) ```<|||||>Ok, the `TransfoXLConfig` for the `lm1b` model is a bit different. I tried: ```python config = TransfoXLConfig(vocab_size_or_config_json_file=793472, cutoffs=[0, 60000, 100000, 640000, 793472], d_model=1280, d_embed=1280, n_head=16, d_head=80, d_inner=8192, div_val=4, pre_lnorm=False, n_layer=24, tgt_len=32, ext_len=0, mem_len=128, clamp_len=-1, same_length=True, proj_share_all_but_first=False, attn_type=0, sample_softmax=-1, adaptive=True, tie_weight=True, dropout=0.0, dropatt=0.0, untie_r=True, init="normal", init_range=0.01, proj_init_std=0.01, init_std=0.02) ``` which seems not to be 100% correct. Where do I get the model json configuration from (so I can easily pass it to the `convert_transfo_xl_checkpoint_to_pytorch.py` script πŸ€”<|||||>Hi Stefan, You have to create the configuration yourself indeed πŸ™‚ I usually do it by looking at the training parameters of the Tensorflow code related to the model you are trying to load.<|||||>The voab `cutoffs` were wrong. I changed the configuration to: ```python config = TransfoXLConfig(vocab_size_or_config_json_file=793472, cutoffs=[60000, 100000, 640000], d_model=1280, d_embed=1280, n_head=16, d_head=80, d_inner=8192, div_val=4, pre_lnorm=False, n_layer=24, tgt_len=32, ext_len=0, mem_len=128, clamp_len=-1, same_length=True, proj_share_all_but_first=False, attn_type=0, sample_softmax=-1, adaptive=True, tie_weight=True, dropout=0.0, dropatt=0.0, untie_r=True, init="normal", init_range=0.01, proj_init_std=0.01, init_std=0.02, ) ``` And then the `transformer/adaptive_softmax/cutoff_0/proj` key wasn't found in the `tf_weights` dict: ```python transformer/adaptive_softmax/cutoff_0/proj Traceback (most recent call last): File "convert_transfo_xl_checkpoint_to_pytorch.py", line 142, in <module> args.transfo_xl_dataset_file) File "convert_transfo_xl_checkpoint_to_pytorch.py", line 107, in convert_transfo_xl_checkpoint_to_pytorch model = load_tf_weights_in_transfo_xl(model, config, tf_path) File "/mnt/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling_transfo_xl.py", line 150, in load_tf_weights_in_transfo_xl assert name in tf_weights AssertionError ``` <|||||>That's probably a question of weights or projection tying, try to set `tie_weight` or `proj_share_all_but_first` to `False` (the correct value should be indicated in Google/CMU hyper-parameters for lm1b). (I can convert this model later if you don't manage to but not before next week unfortunately)<|||||>Thanks for your help @thomwolf ! I'll try to find the correct configuration settings. 
We are currently trying to integrate the Transformer-XL model into [flair](https://github.com/zalandoresearch/flair), and we would really like to use a larger (in terms of training size) model for downstream tasks like NER :)<|||||>Here's the last configuration I tried: ```json { "adaptive": true, "attn_type": 0, "clamp_len": -1, "cutoffs": [ 60000, 100000, 640000 ], "d_embed": 1280, "d_head": 80, "d_inner": 8192, "d_model": 1280, "div_val": 4, "dropatt": 0.0, "dropout": 0.1, "ext_len": 0, "init": "normal", "init_range": 0.01, "init_std": 0.02, "mem_len": 32, "n_head": 16, "n_layer": 24, "n_token": 793472, "pre_lnorm": false, "proj_init_std": 0.01, "same_length": true, "sample_softmax": -1, "tgt_len": 32, "tie_weight": true, "untie_r": true, "proj_share_all_but_first": false, "proj_same_dim": false, "tie_projs": [ true, false, true ] } ```` Unfortunately, an error is thrown. @thomwolf it would be awesome if you can take a look on this :)<|||||>Did you manage to convert this model @stefan-it?<|||||>Sadly, I couldn't managed to convert it (I tried several options)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||> @stefan-it did you ever manage to convert this model?<|||||>Hi @irugina, unfortunately, I wasn't able to convert the model 😞
transformers
297
closed
Sudden catastrophic classification output during NER training
Hi, I am fine-tuning BERT model (based on `BertForTokenClassification`) to a NER task with 9 labels ("O" + BILU tags for 2 classes) and sometimes during training I run into this odd behavior: a network with 99% accuracy that is showing a converging trend suddenly shifts all of its predictions to a single class. This happens during the interval of a single epoch. Below are the confusion matrices and some other metrics one epoch before the event and after the event: ``` Epoch 7/10: 150.57s/it, val_acc=99.718% (53391/53542), val_acc_bilu=87.568% (162/185), val_rec=98.780%, val_prec=55.862%, val_f1=71.366% Confusion matrix: [[53229 2 66 25 2 25 8] [ 0 7 0 7 0 0 0] [ 0 0 14 0 0 0 0] [ 0 0 0 67 0 0 1] [ 1 0 0 3 11 0 1] [ 1 0 1 0 0 14 0] [ 0 0 0 7 1 0 49]] Epoch 8/10: 150.64s/it, val_acc=0.030% (16/53542), val_acc_bilu=8.649% (16/185), val_rec=100.000%, val_prec=0.030%, val_f1=0.060% Confusion matrix: [[ 0 0 0 0 53357 0 0] [ 0 0 0 0 14 0 0] [ 0 0 0 0 14 0 0] [ 0 0 0 0 68 0 0] [ 0 0 0 0 16 0 0] [ 0 0 0 0 16 0 0] [ 0 0 0 0 57 0 0]] ``` I am using the default configs for `bert-base-multilingual-cased` and standard `CrossEntropyLoss`. The optimizer is `BertAdam` untouched with learning rate 1e-5. The dataset is highly unbalanced (very few named entities, so >99% of the tokens are "O" tags), so I use a weight of 0.01 to the "O" tag in CE. Has anyone faced a similar issue? Thanks in advance
02-19-2019 22:09:33
02-19-2019 22:09:33
I managed to solve this problem. There is an issue in the calculation of the total optimization steps in the `run_squad.py` example that results in a negative learning rate because of the `warmup_linear` schedule. This happens because `t_total` is calculated based on `len(train_examples)` instead of `len(train_features)`. That may not be a problem for datasets with short sentences, but, for long sentences, one example may generate many entries in `train_features` due to the strategy of dividing an example into `DocSpan`s.<|||||>@fabiocapsouza I am trying to handle text classification, but my dataset is also highly unbalanced. I am trying to find where I can adjust the class weights when training transformers. Which parameter did you change in your case?<|||||>@MendesSP, since the provided BERT model classes have the loss function hardcoded in the `forward` method, I had to write a subclass that overrides the `CrossEntropyLoss` definition and passes a `weight` tensor.
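For readers looking for the weighted-loss subclass described in the last comment, here is a minimal sketch. It assumes the 0.6.x `pytorch_pretrained_bert` API, where `BertForTokenClassification.forward` returns per-token logits when no labels are passed; the class name `WeightedBertForTokenClassification` and its `class_weights` argument are illustrative, not part of the library.

```python
from torch import nn
from pytorch_pretrained_bert import BertForTokenClassification


class WeightedBertForTokenClassification(BertForTokenClassification):
    """Token classifier with a class-weighted cross-entropy loss (sketch)."""

    def __init__(self, config, num_labels=2, class_weights=None):
        super(WeightedBertForTokenClassification, self).__init__(config, num_labels=num_labels)
        # 1-D tensor of shape [num_labels]; move it to the model's device before training.
        self.class_weights = class_weights

    def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None):
        # Calling the parent without labels returns the per-token logits.
        logits = super(WeightedBertForTokenClassification, self).forward(
            input_ids, token_type_ids, attention_mask
        )
        if labels is None:
            return logits
        # For brevity, padding tokens are not masked out of the loss here.
        loss_fct = nn.CrossEntropyLoss(weight=self.class_weights)
        return loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
```

A weight tensor such as `torch.tensor([0.01] + [1.0] * 8)` would down-weight the dominant "O" tag, matching the setup described in the report above.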
transformers
296
closed
How to change config parameters when loading the model with `from_pretrained`
I have created a model by extending `PreTrainedBertModel`: ```python class BertForMultiLabelClassification(PreTrainedBertModel): def __init__(self, config, num_labels=2): super(BertForMultiLabelClassification, self).__init__(config) self.num_labels = num_labels self.bert = BertModel(config) self.dropout = nn.Dropout(config.attention_probs_dropout_prob) self.classifier = nn.Linear(config.hidden_size, num_labels) self.apply(self.init_bert_weights) # some code here ... ``` I am creating an instance of this model: ```python model = BertForMultiLabelClassification.from_pretrained(args.bert_model, cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format( args.local_rank), num_labels=num_labels) ``` What is an effective way to modify parameters of the default config (say, setting a different value for `config.hidden_dropout_prob`) when creating an instance of `BertForMultiLabelClassification`? Any thoughts on how best to do this?
02-19-2019 18:27:01
02-19-2019 18:27:01
I have the same question. I need to change the `hidden_dropout_prob`. How is that possible?<|||||>Same question. Any best practice?<|||||>Oh, I found that this code works: ```python hidden_dropout_prob = 0.3 config = BertConfig.from_pretrained("bert-base-uncased", num_labels=num_labels, hidden_dropout_prob=hidden_dropout_prob) model = BertForMultiLabelClassification.from_pretrained("bert-base-uncased", config=config) ``` And `print(model)` shows that the dropout changes: ```sh (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.3, inplace=False) ```<|||||>Hi @Opdoop - this code overrides the config parameters of the pretrained BERT model. Did I understand correctly? Also, how can we ensure that the `num_labels` parameter is also updated? I don't see it in the output of `print(model)`.<|||||>@kaankork * For the override: yes, your understanding is correct. * For `num_labels`: it's been a long time, so I'm not sure, but you should be able to infer `num_labels` from the shape of the last layer in `print(model)`.<|||||>Any update on this?<|||||>I need to pass some values to the config as well; it would save me a lot of time.<|||||>Just use the `update` method. For example, if you want to change the number of hidden layers, simply use `config.update({'num_hidden_layers': 1})`.
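Pulling the last few comments together, a minimal end-to-end sketch might look like the following. It uses the modern `transformers` API, with `BertForSequenceClassification` standing in for the custom multi-label subclass discussed above; the specific override values are illustrative.

```python
from transformers import BertConfig, BertForSequenceClassification

# Load the default configuration, then override only the fields you care about.
config = BertConfig.from_pretrained("bert-base-uncased")
config.update({"hidden_dropout_prob": 0.3, "num_labels": 9})

# Pass the modified config alongside the pretrained weights.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", config=config)

print(model.config.hidden_dropout_prob)  # 0.3
print(model.classifier.out_features)     # 9 (the freshly initialized classification head)
```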
transformers
295
closed
fix broken link in readme
02-19-2019 12:35:19
02-19-2019 12:35:19
Thanks!
transformers
294
closed
Extract Features for GPT2 and Transformer-XL
Hi everyone, I'm interested in extracting token-level embeddings from the pre-trained GPT2 and Transformer-XL models and noticed that extract_features.py seems to be specific to BERT. Can you let us know if you have any plans to provide a similar implementation for models other than BERT? Alternatively, could you possibly provide some hints for us to extract the token-level embeddings using the code you have already made available with the models? Many thanks, great work!
02-19-2019 10:36:26
02-19-2019 10:36:26
Hi Dan, You can extract all the hidden states of Transformer-XL using the snippet indicated in the readme [here](https://github.com/huggingface/pytorch-pretrained-BERT#12-transfoxlmodel). For GPT-2, it's not possible right now. I can add it in the next release (or you can submit a PR).<|||||>@thomwolf Can we extract final hidden-layer representations from the 1.5-billion-parameter GPT-2 model now?
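For reference, a minimal feature-extraction sketch for Transformer-XL along the lines of the README snippet mentioned above (method names follow the `pytorch_pretrained_bert` API and may differ slightly in other library versions):

```python
import torch
from pytorch_pretrained_bert import TransfoXLTokenizer, TransfoXLModel

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLModel.from_pretrained("transfo-xl-wt103")
model.eval()

text = "The cat sat on the mat"
tokens = tokenizer.tokenize(text)
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    # Returns the last-layer hidden states (one vector per token) and the updated memory.
    hidden_states, mems = model(input_ids)

print(hidden_states.shape)  # (batch_size, sequence_length, hidden_size)
```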
transformers
293
closed
Minor README typos corrected
02-18-2019 20:29:09
02-18-2019 20:29:09
Thanks!
transformers
292
closed
Fix typo in `GPT2Model` code sample
Typo prevented code from running
02-18-2019 17:28:31
02-18-2019 17:28:31
Thanks!
transformers
291
closed
Too much info @ stdout
For a library, it is preferable to have no unnecessary `print`s in the repo. Using `pytorch-pretrained-BERT` makes it impossible to use `stdout` as the main output mechanism for my code. For example: it prints directly to `stdout` "Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.". I guess printing this kind of message to `stderr` would be a better idea.
02-18-2019 16:30:54
02-18-2019 16:30:54
Oh that's right, this one should be a logging.info event like the other ones.<|||||>Fixed
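Since the messages now go through the standard `logging` module (as noted above), downstream code can route or silence them without giving up `stdout`. A minimal sketch, assuming the library's loggers follow the usual module-name convention:

```python
import logging
import sys

# Send all log records to stderr, keeping stdout free for the program's own output.
logging.basicConfig(stream=sys.stderr, level=logging.INFO)

# Or raise the threshold for this library only (assumes its loggers live under
# the "pytorch_pretrained_bert" namespace, the usual convention for module-level loggers).
logging.getLogger("pytorch_pretrained_bert").setLevel(logging.WARNING)
```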
transformers
290
closed
Typo/formatting fixes in README
02-18-2019 11:30:27
02-18-2019 11:30:27
Changes are... even more minor, then. I'll close this and open another (hope that's OK).
transformers
289
closed
HugginFace or HuggingFace?
Thought I'd flag, also given the terrific work on this repo (and others), that the company name in the code here seems to be systematically spelt wrong (?) https://github.com/huggingface/pytorch-pretrained-BERT/search?q=hugginface&unscoped_q=hugginface
02-18-2019 11:27:20
02-18-2019 11:27:20
Thanks, I'll fix that in a future release.
transformers
288
closed
forgot to add regex to requirements.txt :(
Updating the requirements to add `regex` for the GPT-2 tokenizer. A test on the OpenAI GPT-2 tokenizer module would have caught that. But (byte-level) BPE tokenization tests are such a pain to write properly. Let's add one in the next release, after the ACL deadline.
02-18-2019 10:56:21
02-18-2019 10:56:21
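A minimal round-trip test for the GPT-2 tokenizer might look like the sketch below. It would surface a missing dependency such as `regex` at import time; it assumes the tokenizer exposes `encode`/`decode` and that the pretrained vocabulary files can be downloaded in the test environment.

```python
import unittest


class GPT2TokenizerSmokeTest(unittest.TestCase):
    def test_roundtrip(self):
        # Importing inside the test makes a missing dependency (e.g. `regex`) fail loudly here.
        from pytorch_pretrained_bert import GPT2Tokenizer

        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        text = "Hello, world! Byte-level BPE should round-trip this."
        ids = tokenizer.encode(text)
        # Byte-level BPE is lossless, so decoding should reproduce the input exactly.
        self.assertEqual(tokenizer.decode(ids), text)


if __name__ == "__main__":
    unittest.main()
```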