
This RoBERTa-based model classifies the sentiment of English-language text into three classes:

  • positive 😀
  • neutral 😐
  • negative 🙁

The model was fine-tuned on 5,304 manually annotated social media posts. The hold-out accuracy is 86.1%. For details on the training approach, see Web Appendix F in Hartmann et al. (2021).
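If you want to check the model on your own labeled hold-out data, a minimal sketch using the same pipeline API shown under Application below could look like this. The file name, column names, and label strings are assumptions for illustration, not the authors' actual evaluation setup.

from transformers import pipeline
import pandas as pd

classifier = pipeline("text-classification", model="j-hartmann/sentiment-roberta-large-english-3-classes")

# "my_holdout.csv" with columns "text" and "label" is a hypothetical file for illustration.
# Labels are expected as "positive" / "neutral" / "negative", matching the model's output.
df = pd.read_csv("my_holdout.csv")
predictions = classifier(df["text"].tolist(), truncation=True)
predicted_labels = [p["label"] for p in predictions]

correct = sum(pred == true for pred, true in zip(predicted_labels, df["label"]))
print(f"Accuracy on this hold-out set: {correct / len(df):.3f}")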

Application

from transformers import pipeline
classifier = pipeline("text-classification", model="j-hartmann/sentiment-roberta-large-english-3-classes", return_all_scores=True)
classifier("This is so nice!")
Output:
[[{'label': 'negative', 'score': 0.00016451838018838316},
  {'label': 'neutral', 'score': 0.000174045650055632},
  {'label': 'positive', 'score': 0.9996614456176758}]]
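Setting return_all_scores=True makes the pipeline return a score for every class. If you only need the predicted label, you can omit that argument or pick the highest-scoring entry yourself. Below is a small sketch of both variants for a batch of texts; the example sentences are made up.

from transformers import pipeline

model_name = "j-hartmann/sentiment-roberta-large-english-3-classes"
texts = ["This is so nice!", "The delivery was late and the box was damaged."]  # made-up examples

# Variant 1: default output, only the top class per text.
classifier = pipeline("text-classification", model=model_name)
print(classifier(texts))
# e.g. [{'label': 'positive', 'score': ...}, {'label': 'negative', 'score': ...}]

# Variant 2: all class scores, pick the best one manually.
classifier_all = pipeline("text-classification", model=model_name, return_all_scores=True)
for text, scores in zip(texts, classifier_all(texts)):
    best = max(scores, key=lambda s: s["score"])
    print(f"{text} -> {best['label']} ({best['score']:.3f})")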

Reference

Please cite the paper below when you use our model. Feel free to reach out to jochen.hartmann@tum.de with any questions or feedback.

@article{hartmann2021,
  title={The Power of Brand Selfies},
  author={Hartmann, Jochen and Heitmann, Mark and Schamp, Christina and Netzer, Oded},
  journal={Journal of Marketing Research},
  year={2021}
}