Sarcasm Detection

61 papers with code • 9 benchmarks • 13 datasets

The goal of Sarcasm Detection is to determine whether a sentence is sarcastic or non-sarcastic. Sarcasm is a linguistic phenomenon with specific perlocutionary effects on the hearer, such as breaking their pattern of expectation. Consequently, correctly understanding sarcasm often requires drawing on multiple sources of information, including the utterance itself, the conversational context, and, frequently, real-world facts.

Source: Attentional Multi-Reading Sarcasm Detection
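
The task above is typically framed as binary text classification. As a rough illustration only (not any of the published systems below), the toy baseline here flags a few surface cues; the cue lexicon is hypothetical, and real systems instead learn from labeled data with LSTMs or transformers plus conversational context.

```python
# Toy cue-based baseline for sarcasm detection (illustration only).
# The cue lexicon below is a made-up example, not from any paper.
SARCASM_CUES = {"yeah right", "totally", "obviously"}

def is_sarcastic(sentence: str) -> bool:
    """Flag a sentence as sarcastic if it contains a known surface cue."""
    text = sentence.lower()
    return any(cue in text for cue in SARCASM_CUES)

print(is_sarcastic("Oh yeah right, that will definitely work."))  # True
print(is_sarcastic("The meeting starts at noon."))                # False
```

Cue matching like this misses most sarcasm, which is precisely why the papers below bring in context and learned representations.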

Most implemented papers

A Large Self-Annotated Corpus for Sarcasm

NLPrinceton/SARC LREC 2018

We introduce the Self-Annotated Reddit Corpus (SARC), a large corpus for sarcasm research and for training and evaluating systems for sarcasm detection.
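
The "self-annotated" part refers to Reddit's convention of authors marking their own sarcasm with a trailing "/s", which SARC uses as a distant label. A minimal sketch of that labeling idea (function name and details are illustrative, not from the SARC codebase):

```python
# Sketch of distant labeling via Reddit's "/s" self-annotation marker.
# Illustrative only; the actual SARC pipeline has additional filtering.
def label_comment(comment: str) -> tuple[str, int]:
    """Return (text without the marker, label), where label 1 = sarcastic."""
    stripped = comment.rstrip()
    if stripped.endswith("/s"):
        return stripped[:-2].rstrip(), 1
    return stripped, 0

print(label_comment("Great, another Monday. /s"))  # ('Great, another Monday.', 1)
print(label_comment("See you at the meetup!"))     # ('See you at the meetup!', 0)
```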

Sarcasm Detection using Hybrid Neural Network

rishabhmisra/Sarcasm-Detection-using-CNN 20 Aug 2019

Sarcasm Detection has enjoyed great interest from the research community; however, the task of predicting sarcasm in a text remains an elusive problem for machines.

Context-Dependent Sentiment Analysis in User-Generated Videos

senticnet/sc-lstm ACL 2017

Multimodal sentiment analysis is a developing area of research, which involves the identification of sentiments in videos.

The Role of Conversation Context for Sarcasm Detection in Online Interactions

debanjanghosh/sarcasm_context WS 2017

To address the first issue, we investigate several types of Long Short-Term Memory (LSTM) networks that can model both the conversation context and the sarcastic response.

A Corpus of English-Hindi Code-Mixed Tweets for Sarcasm Detection

asking28/sentimix2020 30 May 2018

Social media platforms like Twitter and Facebook have become two of the largest mediums used by people to express their views towards different topics.

Training Compute-Optimal Large Language Models

karpathy/llama2.c 29 Mar 2022

We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget.

UTNLP at SemEval-2022 Task 6: A Comparative Analysis of Sarcasm Detection Using Generative-based and Mutation-based Data Augmentation

amirabaskohi/semeval2022-task6-sarcasm-detection SemEval (NAACL) 2022

Using RoBERTa and mutation-based data augmentation, our best approach achieved an F1-sarcastic of 0.38 in the competition's evaluation phase.
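
"F1-sarcastic" is the F1 score computed for the sarcastic (positive) class only, rather than averaged over both classes. A small sketch of that computation, on made-up labels:

```python
# F1 for the positive (sarcastic) class only, as reported in SemEval-2022
# Task 6. The gold/pred labels below are invented for illustration.
def f1_positive(gold: list[int], pred: list[int]) -> float:
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

gold = [1, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 1, 0]
print(round(f1_positive(gold, pred), 3))  # 0.667
```

Because sarcastic examples are the minority class in most datasets, F1-sarcastic is a much harsher metric than accuracy, which is why competitive scores like 0.38 can still rank well.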