nielsr committed
Commit 873e7df
1 Parent(s): 697692f

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -7,7 +7,7 @@ tags:
 
 # VideoMAE (base-sized model, fine-tuned on Something-Something-v2)
 
-VideoMAE model pre-trained for 1600 epochs in a self-supervised way and fine-tuned in a supervised way on Something-Something-v2. It was introduced in the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Tong et al. and first released in [this repository](https://github.com/MCG-NJU/VideoMAE).
+VideoMAE model pre-trained for 2400 epochs in a self-supervised way and fine-tuned in a supervised way on Something-Something-v2. It was introduced in the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Tong et al. and first released in [this repository](https://github.com/MCG-NJU/VideoMAE).
 
 Disclaimer: The team releasing VideoMAE did not write a model card for this model so this model card has been written by the Hugging Face team.
@@ -65,7 +65,7 @@ For more code examples, we refer to the [documentation](https://huggingface.co/t
 
 ## Evaluation results
 
-This model obtains a top-1 accuracy of 80.9 and a top-5 accuracy of 94.7 on the test set of Kinetics-400.
+This model obtains a top-1 accuracy of 70.6 and a top-5 accuracy of 92.6 on the test set of Something-Something-v2.
 
 ### BibTeX entry and citation info
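
For context, the card being corrected describes a video-classification checkpoint. Below is a minimal inference sketch using the `transformers` VideoMAE classes; the Hub repo id `MCG-NJU/videomae-base-finetuned-ssv2` is an assumption (the diff never names the repository), and the random array stands in for 16 real video frames:

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

# Assumed Hub repo id for this card -- the diff itself never states it.
checkpoint = "MCG-NJU/videomae-base-finetuned-ssv2"

processor = VideoMAEImageProcessor.from_pretrained(checkpoint)
model = VideoMAEForVideoClassification.from_pretrained(checkpoint)

# VideoMAE consumes 16 frames per clip; random data stands in for a real video.
video = list(np.random.randn(16, 3, 224, 224))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Highest-scoring Something-Something-v2 action class (174 classes).
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```

The top-1/top-5 figures in the second hunk are what such a classifier is evaluated on: whether the correct class is the single highest logit (top-1) or among the five highest (top-5).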