---
language:
- en
---

# MiniCPM-Visual-Embedding: An OCR-free Visual-Based Document Embedding Model Based on MiniCPM-V-2.0 as Your Personal Librarian

With MiniCPM-Visual-Embedding, you can build a knowledge base directly from raw PDFs, books, and documents, without any OCR technique or OCR pipeline. The model takes only page images as document-side inputs and produces vectors representing document pages.
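
For example, the document side of a knowledge base can be built by rasterizing each PDF page to an image. A minimal sketch, assuming the third-party `pdf2image` package and a hypothetical local file path (neither is part of this repo):

```python
from pdf2image import convert_from_path  # assumed third-party PDF rasterizer

# Render every page of a PDF to a PIL image; these images are fed to the
# model directly, with no OCR step in between.
pages = convert_from_path('/local/path/to/book.pdf', dpi=200)
page_images = [page.convert('RGB') for page in pages]
```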

[GitHub Repo](https://github.com/bokesyo/minicpm-visual-embedding)


![Memex Architecture](images/memex.png)

# News

- 2024-06-27: We released our first visual embedding model, minicpm-visual-embedding-v0.1, on [huggingface](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0.1).

- 2024-05-08: We [committed](https://github.com/bokesyo/minicpm-visual-embedding) our training code (full-parameter tuning with GradCache and DeepSpeed, supporting large batch sizes across multiple GPUs with ZeRO stage 1) and evaluation code.

# Get started

First, clone this Hugging Face repo with git, or download it with `huggingface-cli`.

```bash
git lfs install
git clone https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0.1
```

or 

```bash
huggingface-cli download RhapsodyAI/minicpm-visual-embedding-v0.1
```

```python
from transformers import AutoModel
from transformers import AutoTokenizer
from PIL import Image
import torch
from torch import Tensor

device = 'cuda:0'

def last_token_pool(last_hidden_states: Tensor,
                    attention_mask: Tensor) -> Tensor:
    # Pool each sequence to one vector: the hidden state at its last
    # non-padding token (or the final position when padding is on the left).
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]


tokenizer = AutoTokenizer.from_pretrained('/local/path/to/minicpm-visual-embedding-v0.1', trust_remote_code=True)
model = AutoModel.from_pretrained('/local/path/to/minicpm-visual-embedding-v0.1', trust_remote_code=True)
model.to(device)
model.eval()

image_1 = Image.open('/local/path/to/document1.png').convert('RGB')
image_2 = Image.open('/local/path/to/document2.png').convert('RGB')

# Instruction prefix kept verbatim from the release.
query_instruction = 'Represent this query for retrieving relavant document: '

query = 'Who was elected as president of the United States in 2020?'

query_full = query_instruction + query

with torch.no_grad():
    # Embed the text query. One text entry, so one (empty) image slot to match.
    q_outputs = model(text=[query_full], image=[None], tokenizer=tokenizer) # [B, s, d]
    q_reps = last_token_pool(q_outputs.last_hidden_state, q_outputs.attention_mask) # [B, d]

    # Embed the image documents. Empty texts paired with the page images.
    p_outputs = model(text=['', ''], image=[image_1, image_2], tokenizer=tokenizer) # [B, s, d]
    p_reps = last_token_pool(p_outputs.last_hidden_state, p_outputs.attention_mask) # [B, d]

# Dot-product similarities: [num_queries, d] x [d, num_docs] -> [num_queries, num_docs]
scores = torch.matmul(q_reps, p_reps.T)

print(scores)
```
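
The scores above are raw dot products. For retrieval over many pages, a common choice (our assumption here, not prescribed by the release) is to L2-normalize the embeddings so the score is cosine similarity, then take the top-k pages per query. A minimal sketch reusing `q_reps` and `p_reps` from the snippet above:

```python
import torch.nn.functional as F

# L2-normalize so the dot product becomes cosine similarity.
q_norm = F.normalize(q_reps, p=2, dim=-1)
p_norm = F.normalize(p_reps, p=2, dim=-1)

cos_scores = q_norm @ p_norm.T  # [num_queries, num_docs]
topk = torch.topk(cos_scores, k=min(2, cos_scores.shape[-1]), dim=-1)

for q_idx, (vals, idxs) in enumerate(zip(topk.values, topk.indices)):
    print(f'query {q_idx}: top pages {idxs.tolist()} with scores {vals.tolist()}')
```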