nielsr (HF staff) committed
Commit f715714
1 parent: 48eeaeb

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
```diff
@@ -16,7 +16,7 @@ widget:
 
 # LeViT
 
-LeViT128S model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
+LeViT-384 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
 ](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT).
 
 Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team.
@@ -33,8 +33,8 @@ import requests
 url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
 image = Image.open(requests.get(url, stream=True).raw)
 
-feature_extractor = LevitFeatureExtractor.from_pretrained('anugunj/levit-384')
-model = LevitForImageClassificationWithTeacher.from_pretrained('anugunj/levit-384')
+feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-384')
+model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-384')
 
 inputs = feature_extractor(images=image, return_tensors="pt")
 outputs = model(**inputs)
```