
DistilBERT encoder model trained on the MovieLens ratings dataset (MovieLens-25M) using the DEXML (Dual Encoder for eXtreme Multi-Label classification, ICLR'24) method.

Inference Usage (Sentence-Transformers)

With sentence-transformers installed, you can use this model as follows:

from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('quicktensor/dexml_movielens-25m')
embeddings = model.encode(sentences)
print(embeddings)
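
In the DEXML setup, labels are ranked by the similarity between query and label embeddings. Assuming this checkpoint is used to encode both sides (an assumption here), a minimal retrieval sketch looks like the following; the query and movie-title labels are made up for illustration:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('quicktensor/dexml_movielens-25m')

# Hypothetical query and label texts, for illustration only
query = "space adventure with a ragtag crew"
labels = ["Guardians of the Galaxy (2014)", "Titanic (1997)", "Interstellar (2014)"]

query_emb = model.encode(query, convert_to_tensor=True)
label_embs = model.encode(labels, convert_to_tensor=True)

# Rank labels by cosine similarity to the query
scores = util.cos_sim(query_emb, label_embs)[0]
for label, score in sorted(zip(labels, scores.tolist()), key=lambda x: -x[1]):
    print(f"{label}: {score:.3f}")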

Usage (HuggingFace Transformers)

With Hugging Face Transformers, you only need to be a bit careful about how you pool the transformer output to get the embedding; you can use this model as follows:

from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

pooler = lambda x: F.normalize(x[:, 0, :], dim=-1) # Choose CLS token and normalize

sentences = ["This is an example sentence", "Each sentence is converted"]
tokenizer = AutoTokenizer.from_pretrained('quicktensor/dexml_movielens-25m')
model = AutoModel.from_pretrained('quicktensor/dexml_movielens-25m')

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    embeddings = pooler(model(**encoded_input).last_hidden_state)  # pool the token embeddings

print(embeddings)
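
Because the pooler L2-normalizes the CLS embeddings, the cosine similarity between the two example sentences reduces to a plain dot product, for example:

# Embeddings are already unit-normalized, so a dot product gives cosine similarity
similarity = embeddings[0] @ embeddings[1]
print(similarity.item())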

Cite

If you found this model helpful, please cite our work as:

@InProceedings{DEXML,
  author    = "Gupta, N. and Khatri, D. and Rawat, A-S. and Bhojanapalli, S. and Jain, P. and Dhillon, I.",
  title     = "Dual-encoders for Extreme Multi-label Classification",
  booktitle = "International Conference on Learning Representations",
  month     = "May",
  year      = "2024"
}