Usage

#1
by lucasmccabe-lmi - opened

Hi! I'm a bit confused about usage here. From the paper, my impression is that the prompt template looks like this:

"""
query = "Test query"
passages = ["This is Passage 1", "This is Passage 2", "This is Passage 3"]

input_text = ""
for i in range(len(passages)):
    input_text += "Search Query: %s " % query
    input_text += "Passage: [%d]\n%s " % (i + 1, passages[i])
input_text += "Relevance Ranking:"
"""

And usage with the transformers library looks like this:

"""
input_tokens = tokenizer.encode(input_text, return_tensors="pt")
output_tokens = model.generate(input_tokens)
output_text = tokenizer.decode(output_tokens[0], skip_special_tokens=True)
"""

Does this look right to you? When I try this, I get "[1] > [2] > [3] > [4] > [5]" as output regardless of how many passages I provide or what they contain.

Castorini org

The input template is a bit more nuanced than that, because every passage is encoded separately by the model's encoder. Additionally, you would need to load the model as a FiD model, not just a T5 model. I recommend starting with our codebase: https://github.com/castorini/LiT5, and in particular our script for reranking with LiT5-Distill: https://github.com/castorini/LiT5/blob/main/LiT5-Distill.sh. Then, to inspect the generated ordering, you would probe `generated_permutations`: https://github.com/castorini/LiT5/blob/main/FiD/LiT5-Distill.py#L37. Happy to help further.
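To illustrate the per-passage encoding point, here is a rough sketch of how the inputs differ from the single concatenated string above. This is only an illustration of the FiD input shape, not the actual LiT5 template; see the linked repo for the real one:

```python
query = "Test query"
passages = ["This is Passage 1", "This is Passage 2", "This is Passage 3"]

# In a FiD-style model, each passage gets its own encoder input;
# the decoder then attends over all encoded passages jointly.
per_passage_inputs = [
    "Search Query: %s Passage: [%d] %s Relevance Ranking:" % (query, i + 1, p)
    for i, p in enumerate(passages)
]

# Each of these strings would be tokenized and encoded separately,
# rather than concatenated into one input as in the snippet above.
for text in per_passage_inputs:
    print(text)
```

This is why loading the checkpoint as a plain T5 model and feeding it one flattened string produces a fixed, meaningless ordering: the model never sees the passages in the layout it was trained on.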
