---
license: mit
pipeline_tag: document-question-answering
tags:
- donut
- image-to-text
- vision
widget:
- text: "What is the invoice number?"
  src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png"
- text: "What is the purchase amount?"
  src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg"
---

# Donut (base-sized model, fine-tuned on DocVQA)

Donut model fine-tuned on DocVQA. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).

Disclaimer: The team releasing Donut did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes it into a tensor of embeddings of shape `(batch_size, seq_len, hidden_size)`, after which the decoder autoregressively generates text, conditioned on that encoding.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/donut_architecture.jpg)
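
As a minimal sketch of this encoder/decoder split, you can inspect the encoder output directly. The checkpoint id below is an assumption (the canonical DocVQA-finetuned Donut checkpoint on the Hub); substitute this repository's id if it differs.

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Checkpoint id is an assumption; adjust to the checkpoint you are using.
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

# A blank placeholder image stands in for a real document scan.
image = Image.new("RGB", (1920, 2560), color="white")
pixel_values = processor(image, return_tensors="pt").pixel_values

with torch.no_grad():
    encoder_outputs = model.encoder(pixel_values=pixel_values)

# The Swin encoder's output: a (batch_size, seq_len, hidden_size) tensor
# that the BART decoder cross-attends to during generation.
print(encoder_outputs.last_hidden_state.shape)
```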

## Intended uses & limitations

This model is fine-tuned on DocVQA, a document visual question answering dataset.

We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut), which includes code examples.
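
For document visual question answering, a minimal inference sketch following the pattern in that documentation looks like this; the checkpoint id, image path, and question are placeholders:

```python
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Checkpoint id is an assumption; "document.png" is a placeholder for your own image.
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("document.png").convert("RGB")
question = "What is the invoice number?"

# DocVQA task prompt: the question is wrapped in Donut's special tokens,
# and the decoder completes the <s_answer> field.
prompt = f"<s_docvqa><s_question>{question}</s_question><s_answer>"
decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids
pixel_values = processor(image, return_tensors="pt").pixel_values

outputs = model.generate(
    pixel_values.to(device),
    decoder_input_ids=decoder_input_ids.to(device),
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
    return_dict_in_generate=True,
)

# Strip special tokens and the task start token, then convert to JSON.
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()
print(processor.token2json(sequence))  # e.g. {'question': '...', 'answer': '...'}
```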