nielsr (HF staff) committed 2f3f7f3 (1 parent: 7183d93)

Create README.md

Files changed (1): README.md added (+71 lines)

---
license: apache-2.0
---

# Vision-and-Language Transformer (ViLT), fine-tuned on COCO

Vision-and-Language Transformer (ViLT) model fine-tuned on [COCO](https://cocodataset.org/#home). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).

Disclaimer: The team releasing ViLT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Intended uses & limitations

You can use the model for image-text retrieval: given an image and a set of candidate texts (or a text and a set of candidate images), the model predicts a score for how well each pair matches.

### How to use

Here is how to use the model in PyTorch:

```python
from transformers import ViltProcessor, ViltForImageAndTextRetrieval
import requests
from PIL import Image

# load an example image from the COCO 2017 validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-coco")
model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-coco")

# forward pass: score each image-text pair
scores = dict()
for text in texts:
    # prepare inputs
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, :].item()
```
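
To retrieve the best-matching text for the image, you can simply take the caption with the highest score. A minimal follow-up to the snippet above, reusing its `scores` dict:

```python
# the text with the highest logit is the best match for the image
best_text = max(scores, key=scores.get)
print(best_text)  # for this example image, the caption about the two cats
```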

## Training data

(to do)

## Training procedure

### Preprocessing

(to do)

### Pretraining

(to do)

## Evaluation results

(to do)

### BibTeX entry and citation info

```bibtex
@misc{kim2021vilt,
      title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
      author={Wonjae Kim and Bokyung Son and Ildoo Kim},
      year={2021},
      eprint={2102.03334},
      archivePrefix={arXiv},
      primaryClass={stat.ML}
}
```