---
license: mit
library_name: colpali
base_model: vidore/colqwen2-base
language:
- en
tags:
- colpali
- vidore
---
# ColQwen2: Visual Retriever based on Qwen2-VL-2B-Instruct with ColBERT strategy

ColQwen2 is a model based on a novel architecture and training strategy that uses Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [Qwen2-VL-2B](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).

It builds on [vidore/colqwen2-base](https://huggingface.co/vidore/colqwen2-base), an untrained base version that guarantees deterministic initialization of the projection layer.

<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>

## Version specificity

This model takes dynamic image resolutions as input and does not resize them, preserving their aspect ratio (unlike ColPali, which resizes images to a fixed resolution).
The maximal resolution is capped so that at most 768 image patches are created. Experiments show clear improvements with larger numbers of image patches, at the cost of increased memory requirements.
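
The card does not spell out the patch geometry, so here is a minimal sketch of the patch-budget idea. The 28-pixel effective patch size is an assumption (Qwen2-VL uses 14-pixel patches with 2×2 patch merging), and `cap_resolution` is a hypothetical helper, not a `colpali_engine` function:

```python
import math

from PIL import Image

MAX_PATCHES = 768  # patch budget from the model card
PATCH_SIZE = 28    # assumption: 14 px patches + 2x2 merging (Qwen2-VL)

def cap_resolution(image: Image.Image) -> Image.Image:
    """Downscale `image` (keeping aspect ratio) so it yields at most MAX_PATCHES patches."""
    w, h = image.size
    if math.ceil(w / PATCH_SIZE) * math.ceil(h / PATCH_SIZE) <= MAX_PATCHES:
        return image  # already within the patch budget
    scale = math.sqrt(MAX_PATCHES * PATCH_SIZE**2 / (w * h))  # approximate target area
    return image.resize((max(PATCH_SIZE, int(w * scale)), max(PATCH_SIZE, int(h * scale))))

print(cap_resolution(Image.new("RGB", (2000, 1500))).size)  # (896, 672) -> 32 x 24 = 768 patches
```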

This version is trained with `colpali-engine==0.3.1`.

Data is the same as the ColPali data described in the paper.

## Model Training

### Dataset
Our training dataset of 127,460 query-page pairs comprises the train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents, augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%).
Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document is used both in [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and in the train set to prevent evaluation contamination; a sketch of such a check is shown below.
A validation set is created with 2% of the samples to tune hyperparameters.

*Note: Multilingual data is present in the pretraining corpus of the language model and potentially occurs during multimodal training.*
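
The card does not describe how this overlap check is performed; one simple way to do it is to compare content hashes of the source documents across splits. The directory names and helper below are hypothetical:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's raw bytes, used as a document identity key."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Hypothetical directories holding the source PDFs of each split.
train_docs = {file_digest(p) for p in Path("train_pdfs").glob("*.pdf")}
eval_docs = {file_digest(p) for p in Path("vidore_pdfs").glob("*.pdf")}

overlap = train_docs & eval_docs
assert not overlap, f"{len(overlap)} documents leak from train into eval"
```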

### Parameters

All models are trained for 1 epoch on the train set. Unless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
with `alpha=32` and `r=32` on the transformer layers from the language model,
as well as on the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on an 8-GPU setup with data parallelism, a learning rate of 5e-5 with linear decay and 2.5% warmup steps, and a batch size of 32.
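
For illustration, here is a minimal sketch of what a matching configuration could look like with `peft` and `transformers`. The `target_modules` list and the per-device batch size (4 per GPU × 8 GPUs = 32 globally) are assumptions, not details taken from this card:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA hyperparameters reported above; target_modules is an assumption.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="FEATURE_EXTRACTION",
)

training_args = TrainingArguments(
    output_dir="./colqwen2-train",
    num_train_epochs=1,
    per_device_train_batch_size=4,  # assumption: 4 per GPU x 8 GPUs = global batch size 32
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.025,  # 2.5% warmup steps
    bf16=True,
    optim="paged_adamw_8bit",
)
```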

## Usage

```python
import torch
from PIL import Image

from colpali_engine.models import ColQwen2, ColQwen2Processor

model = ColQwen2.from_pretrained(
    "manu/colqwen2-beta",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "mps" if on Apple Silicon
)
processor = ColQwen2Processor.from_pretrained("manu/colqwen2-beta")

# Your inputs
images = [
    Image.new("RGB", (32, 32), color="white"),
    Image.new("RGB", (16, 16), color="black"),
]
queries = [
    "Is attention really all you need?",
    "What is the amount of bananas farmed in Salvador?",
]

# Process the inputs
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

# Forward pass
with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```
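
`score_multi_vector` computes ColBERT-style late-interaction scores. As a rough illustration of the idea (not the library's actual implementation, which also handles batching and padding masks), each query token is matched against its most similar page token and these maxima are summed:

```python
import torch

def maxsim_score(query_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
    """Late-interaction score of one query (n_q x d) against one page (n_p x d)."""
    sim = query_emb @ image_emb.T       # (n_q, n_p) token-to-token dot products
    return sim.max(dim=1).values.sum()  # best page token per query token, summed

# e.g. rank all pages for the first query:
# scores = torch.tensor([maxsim_score(query_embeddings[0], e) for e in image_embeddings])
```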

## Limitations

- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.

## License

ColQwen2's vision-language backbone model (Qwen2-VL) is under the `apache-2.0` license. The adapters attached to the model are under the MIT license.

## Contact

- Manuel Faysse: manuel.faysse@illuin.tech
- Hugues Sibille: hugues.sibille@illuin.tech
- Tony Wu: tony.wu@illuin.tech

## Citation

If you use any datasets or models from this organization in your research, please cite the original work as follows:

```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
  title={ColPali: Efficient Document Retrieval with Vision Language Models},
  author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
  year={2024},
  eprint={2407.01449},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2407.01449},
}
```