---
license: mit
language:
- en
tags:
- vidore
datasets:
- Tevatron/docmatix-ir
- HuggingFaceM4/Docmatix
library_name: Tevatron
---

# DSE-Phi3-Docmatix-V1.0

DSE-Phi3-Docmatix-V1.0 is a bi-encoder model that encodes document screenshots into dense vectors for document retrieval. The Document Screenshot Embedding ([DSE](https://arxiv.org/abs/2406.11251)) approach captures a document in its original visual format, preserving all of its information (text, images, layout), and thus avoids tedious parsing pipelines and the information loss they can introduce.

The model, `Tevatron/dse-phi3-docmatix-v1.0`, is trained using the `Tevatron/docmatix-ir` dataset, a variant of `HuggingFaceM4/Docmatix` specifically adapted for training PDF retrievers with Vision Language Models in open-domain question answering scenarios. For more information on dataset filtering and hard negative mining, refer to the [docmatix-ir](https://huggingface.co/datasets/Tevatron/docmatix-ir/blob/main/README.md) dataset page.

## How to Use the Model

### Load the Model and Processor

```python
import torch
from transformers import AutoProcessor, AutoModelForCausalLM, AutoConfig

processor = AutoProcessor.from_pretrained('microsoft/Phi-3-vision-128k-instruct', trust_remote_code=True)
config = AutoConfig.from_pretrained('microsoft/Phi-3-vision-128k-instruct', trust_remote_code=True, use_cache=False)
model = AutoModelForCausalLM.from_pretrained(
    'Tevatron/dse-phi3-docmatix-v1.0',
    trust_remote_code=True,
    config=config,
    attn_implementation="flash_attention_2",  # requires the flash-attn package
    torch_dtype=torch.bfloat16,
).to('cuda:0')

def get_embedding(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Pool the hidden state of the last non-padding token (the </s> that ends each prompt).
    sequence_lengths = attention_mask.sum(dim=1) - 1
    bs = last_hidden_state.shape[0]
    reps = last_hidden_state[torch.arange(bs, device=last_hidden_state.device), sequence_lengths]
    # L2-normalize so that dot products between embeddings equal cosine similarities.
    reps = torch.nn.functional.normalize(reps, p=2, dim=-1)
    return reps
```
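
`flash_attention_2` requires the `flash-attn` package and a recent NVIDIA GPU. If that is not available, the model can typically be loaded with the standard attention implementation instead; a minimal sketch (the `"eager"` value is the generic Transformers fallback, not something specific to this checkpoint):

```python
# Hypothetical fallback without flash-attn: slower, but no extra dependency.
config = AutoConfig.from_pretrained('microsoft/Phi-3-vision-128k-instruct', trust_remote_code=True, use_cache=False)
model = AutoModelForCausalLM.from_pretrained(
    'Tevatron/dse-phi3-docmatix-v1.0',
    trust_remote_code=True,
    config=config,
    attn_implementation="eager",
    torch_dtype=torch.bfloat16,
).to('cuda:0')
```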

### Encode Text Query

```python
queries = ["query: Where can we see Llama?</s>", "query: What is LLaMA model?</s>"]  # queries also end with </s>, the token that gets pooled
query_inputs = processor(queries, return_tensors="pt", padding="longest", max_length=128, truncation=True).to('cuda:0')
with torch.no_grad():  # inference only; no gradients needed
    output = model(**query_inputs, return_dict=True, output_hidden_states=True)
query_embeddings = get_embedding(output.hidden_states[-1], query_inputs["attention_mask"])
```

### Encode Document Screenshot

```python
from PIL import Image
import requests
from io import BytesIO

# URLs of the images (resolve/ returns the raw file; blob/ would return an HTML page)
url1 = "https://huggingface.co/Tevatron/dse-phi3-docmatix-v1.0/resolve/main/animal-llama.png"
url2 = "https://huggingface.co/Tevatron/dse-phi3-docmatix-v1.0/resolve/main/meta-llama.png"

# Download and open images
response1 = requests.get(url1)
response2 = requests.get(url2)

passage_image1 = Image.open(BytesIO(response1.content))
passage_image2 = Image.open(BytesIO(response2.content))

passage_images = [passage_image1, passage_image2]
passage_prompts = ["<|image_1|>\nWhat is shown in this image?</s>", "<|image_1|>\nWhat is shown in this image?</s>"]  # <|image_1|> marks where each prompt's image is inserted

# Process inputs and get embeddings
passage_inputs = processor(passage_prompts, images=passage_images, return_tensors="pt", padding="longest", max_length=4096, truncation=True).to('cuda:0')
with torch.no_grad():
    output = model(**passage_inputs, return_dict=True, output_hidden_states=True)
doc_embeddings = get_embedding(output.hidden_states[-1], passage_inputs["attention_mask"])
```

### Compute Similarity

```python
from torch.nn.functional import cosine_similarity

# cosine_similarity pairs query i with document i (row-wise), not all query-document pairs
similarities = cosine_similarity(query_embeddings, doc_embeddings)
print(similarities)
```
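
Since `get_embedding` L2-normalizes the vectors, the full query-by-document score matrix is simply a matrix product. A minimal sketch for scoring every document against every query (`scores` and `best` are illustrative names, not part of the original card):

```python
# (num_queries x num_docs) score matrix; the embeddings are already L2-normalized,
# so dot products equal cosine similarities.
scores = query_embeddings @ doc_embeddings.T
best = scores.argmax(dim=1)  # index of the top-scoring document for each query
print(scores)
print(best)
```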

### Encode Document Text

This DSE checkpoint was warmed up on `Tevatron/msmarco-passage-aug`, so the model can also effectively encode documents given as plain text.

```python
passage_prompts = [
  "The llama (/ˈlɑːmə/; Spanish pronunciation: [ˈʎama] or [ˈʝama]) (Lama glama) is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.</s>",
  "Llama (acronym for Large Language Model Meta AI, and formerly stylized as LLaMA) is a family of autoregressive large language models (LLMs) released by Meta AI starting in February 2023. The latest version is Llama 3.1, released in July 2024.</s>"
]

passage_inputs = processor(passage_prompts, images=None, return_tensors="pt", padding="longest", max_length=4096, truncation=True).to('cuda:0')
with torch.no_grad():
    output = model(**passage_inputs, return_dict=True, output_hidden_states=True)
doc_embeddings = get_embedding(output.hidden_states[-1], passage_inputs["attention_mask"])

similarities = cosine_similarity(query_embeddings, doc_embeddings)
print(similarities)
```
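
### Rank Text and Screenshot Documents Together

Because screenshot and text embeddings share the same vector space, both kinds of documents can be ranked against a query in one pass. A minimal sketch, assuming the embeddings from the two sections above were kept under the hypothetical names `image_doc_embeddings` and `text_doc_embeddings` (the snippets above reuse `doc_embeddings`, so rename before running this):

```python
import torch

# Hypothetical names: image_doc_embeddings from "Encode Document Screenshot",
# text_doc_embeddings from "Encode Document Text".
corpus_embeddings = torch.cat([image_doc_embeddings, text_doc_embeddings], dim=0)
scores = query_embeddings @ corpus_embeddings.T   # (num_queries, num_docs)
topk = scores.topk(k=2, dim=1)                    # top-2 documents per query
print(topk.indices)
print(topk.values)
```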