---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B-Instruct
license: apache-2.0
language:
- en
---
## Overview

This model is a LoRA adapter based on LLaMA-3-8B-Instruct. It has been trained on a [re-annotated version](https://github.com/Teddy-Li/MulVOIEL/tree/master/CaRB/data) of the [CaRB dataset](https://github.com/dair-iitd/CaRB).

The model produces multi-valent Open IE tuples, i.e., relations with varying numbers of arguments (one, two, or more). We provide an example below:

Consider the following sentence (taken from the CaRB dev set):

`Earlier this year , President Bush made a final `` take - it - or - leave it '' offer on the minimum wage`

Our model would extract the following relation from the sentence:

 <<span style="color:#2471A3">President Bush</span>, <span style="color:#A93226">made</span>, <span style="color:#138D75">a final "take-it-or-leave-it" offer</span>, <span style="color:#B7950B ">on the minimum wage</span>, <span style="color:#B9770E">earlier this year</span>>

where we include <span style="color:#2471A3">President Bush</span> as the <span style="color:#2471A3">subject</span>, <span style="color:#A93226">made</span> as the <span style="color:#A93226">predicate</span>, <span style="color:#138D75">a final "take-it-or-leave-it" offer</span> as the <span style="color:#138D75">direct object</span>, and <span style="color:#B7950B">on the minimum wage</span> and <span style="color:#B9770E">earlier this year</span> as salient <span style="color:#B7950B">_compl_</span><span style="color:#B9770E">_ements_</span>.

We briefly describe how to use our model below, and provide further details in our [MulVOIEL repository on GitHub](https://github.com/Teddy-Li/MulVOIEL/).


## Getting Started

### Model Output Format

Given a sentence, the model produces textual predictions in the following format:

`<subj> ,, (<auxi> ###) <predicate> ,, (<prep1> ###) <obj1>, (<prep2> ###) <obj2>, ...`
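
For the example sentence from the Overview, the raw model output would look roughly as follows (illustrative only; the exact spacing and tokenization may differ from what the model actually produces):

```
President Bush ,, made ,, a final `` take - it - or - leave it '' offer, on ### the minimum wage, earlier this year
```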

### How to Use

1. Install the relevant libraries and clone the [MulVOIEL](https://github.com/Teddy-Li/MulVOIEL/) repository:
    ```bash
    pip install transformers datasets peft torch
    git clone https://github.com/Teddy-Li/MulVOIEL
    cd MulVOIEL
    ```

2. Load the model and perform inference (example):
    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel
    import torch
    from llamaOIE import parse_outstr_to_triples
    from llamaOIE_dataset import prepare_input

    base_model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
    peft_adapter_name = "Teddy487/LLaMA3-8b-for-OpenIE"

    model = AutoModelForCausalLM.from_pretrained(base_model_name)
    model = PeftModel.from_pretrained(model, peft_adapter_name)
    tokenizer = AutoTokenizer.from_pretrained(base_model_name)

    input_text = "Earlier this year , President Bush made a final `` take - it - or - leave it '' offer on the minimum wage"
    input_text, _ = prepare_input({'s': input_text}, tokenizer, has_labels=False)

    input_ids = tokenizer(input_text, return_tensors="pt").input_ids

    # max_new_tokens is illustrative; adjust it to fit the expected number of triples
    outputs = model.generate(input_ids, max_new_tokens=256)
    # decode only the newly generated tokens (input_ids has shape [1, seq_len])
    outstr = tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True)
    triples = parse_outstr_to_triples(outstr)

    for tpl in triples:
        print(tpl)
    ```

    🍺
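
For the 8B-parameter base model, you will typically want to load the weights in half precision and place them on a GPU. The snippet below is a minimal sketch of such a setup, not part of the original instructions; the `torch_dtype` and `device_map` choices are illustrative, and `device_map="auto"` requires the `accelerate` package:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Illustrative: load the base model in fp16 and let accelerate place it on available devices.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "Teddy487/LLaMA3-8b-for-OpenIE")
model.eval()

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Move tokenized inputs to the model's device before generating,
# e.g. input_ids = input_ids.to(model.device).
```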

## Model Performance

The primary benefit of our model is its ability to extract finer-grained information for predicates. We also report performance on a roughly comparable basis against prior state-of-the-art Open IE models: our method matches and in some cases exceeds them, while producing finer-grained and more complex outputs. We report evaluation results as (macro) F-1, as well as the average [Levenshtein Distance](https://pypi.org/project/python-Levenshtein/) between gold and predicted relations:

| Model | Levenshtein Distance | Macro F-1 |
| --- | --- | --- |
| [LoRA LLaMA2-7b](https://huggingface.co/Teddy487/LLaMA2-7b-for-OpenIE) | 5.85 | 50.2 |
| [LoRA LLaMA3-8b](https://huggingface.co/Teddy487/LLaMA3-8b-for-OpenIE) | **5.04** | **55.3** |
| RNN OIE * | - | 49.0 |
| IMOJIE * | - | 53.5 |
| Open IE 6 * | - | 54.0/52.7 |

Note that the values marked with an asterisk are not directly comparable to ours, because we evaluate model predictions at a finer granularity and use a different train/dev/test arrangement from the original CaRB dataset.
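
As a rough illustration of the Levenshtein metric (a sketch only; the gold and predicted strings below are hypothetical, and the exact matching and averaging procedure is defined in the MulVOIEL repository), the [python-Levenshtein](https://pypi.org/project/python-Levenshtein/) package provides the character-level edit distance directly:

```python
# pip install python-Levenshtein
import Levenshtein

# Hypothetical gold and predicted relations, serialized in the model's output format.
gold = "President Bush ,, made ,, a final offer, on ### the minimum wage, earlier this year"
pred = "President Bush ,, made ,, a final offer, on ### the minimum wage"

# Character-level edit distance between the two serializations.
print(Levenshtein.distance(gold, pred))
```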




### Framework versions

- PEFT 0.10.0
- PEFT 0.5.0