
What is this?

This model has been developed to detect "narrative-style" jokes, stories, and anecdotes (i.e. humor narrated as a story) spoken during speeches, conversations, and similar settings. It works best when the joke or anecdote is at least 40 words long. It is based on Moritz Laurer's DeBERTa-v3.

The training dataset was a private collection of around 2,000 jokes. This model has not been trained or tested on one-liners, puns, or Reddit-style language-manipulation jokes such as knock-knock or Q&A jokes.

See the example in the inference widget or the How to use section for what constitutes a narrative-style joke.

For a slightly less accurate model (0.4% lower) that is 65% faster at inference, see the Roberta model. For a noticeably less accurate model (2.9% lower) that is much faster at inference, see the distilbert model.

Install these first

You'll need to pip install transformers, and possibly sentencepiece (the DeBERTa-v3 tokenizer is SentencePiece-based). The snippet below also imports torch.

How to use

from transformers import pipeline
import torch

# Run on the first GPU if one is available, otherwise fall back to the CPU
device = 0 if torch.cuda.is_available() else -1
model_name = 'Reggie/DeBERTa-v3-base-joke_detector'
max_seq_len = 510

# The text-classification task is inferred from the model; inputs longer
# than max_seq_len tokens are truncated
pipe = pipeline(model=model_name, device=device, truncation=True, max_length=max_seq_len)
is_it_a_joke = """A nervous passenger is about to book a flight ticket, and he asks the airlines' ticket seller, "I hope your planes are safe. Do they have a good track record for safety?" The airline agent replies, "Sir, I can guarantee you, we've never had a plane that has crashed more than once." """
result = pipe(is_it_a_joke) # [{'label': 'LABEL_1', 'score': 0.7313136458396912}]
print('This is a joke' if result[0]['label'] == 'LABEL_1' else 'This is not a joke')
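
Because the pipeline truncates anything beyond max_seq_len tokens, passing in a whole speech or long conversation means only its beginning gets scored. Below is a minimal sketch, not part of the original model card, of one way to handle longer transcripts: split the text into overlapping word windows, classify each window with the pipe object defined above, and keep the windows labelled LABEL_1. The find_joke_segments name, the window and stride sizes, and the speech.txt file are illustrative assumptions.

def find_joke_segments(transcript, window_words=300, stride_words=150):
    """Return (score, segment) pairs for windows the model labels as jokes."""
    words = transcript.split()
    flagged = []
    for start in range(0, max(len(words), 1), stride_words):
        segment = ' '.join(words[start:start + window_words])
        if not segment:
            break
        result = pipe(segment)  # reuses the pipeline created above
        if result[0]['label'] == 'LABEL_1':
            flagged.append((result[0]['score'], segment))
    return flagged

# Hypothetical usage with a transcript file:
# with open('speech.txt') as f:
#     for score, segment in find_joke_segments(f.read()):
#         print(f'{score:.2f}  {segment[:80]}...')

The windows overlap so that a joke straddling a window boundary still appears whole in at least one window; tune window_words and stride_words to the length of the jokes in your material.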