Search Results for author: Meghana Moorthy Bhat

Found 6 papers, 1 paper with code

How Effectively Can Machines Defend Against Machine-Generated Fake News? An Empirical Study

no code implementations EMNLP (insights) 2020 Meghana Moorthy Bhat, Srinivasan Parthasarathy

We empirically study the effectiveness of machine-generated fake news detectors by understanding the model’s sensitivity to different synthetic perturbations during test time.

Say ‘YES’ to Positivity: Detecting Toxic Language in Workplace Communications

no code implementations Findings (EMNLP) 2021 Meghana Moorthy Bhat, Saghar Hosseini, Ahmed Hassan Awadallah, Paul Bennett, Weisheng Li

Specifically, the lack of corpus, sparsity of toxicity in enterprise emails, and well-defined criteria for annotating toxic conversations have prevented researchers from addressing the problem at scale.

Self-training with Few-shot Rationalization

no code implementations EMNLP 2021 Meghana Moorthy Bhat, Alessandro Sordoni, Subhabrata Mukherjee

While pre-trained language models have obtained state-of-the-art performance for several natural language understanding tasks, they are quite opaque in terms of their decision-making process.

Decision Making · Natural Language Understanding

Investigating Answerability of LLMs for Long-Form Question Answering

no code implementations 15 Sep 2023 Meghana Moorthy Bhat, Rui Meng, Ye Liu, Yingbo Zhou, Semih Yavuz

As we embark on a new era of LLMs, it becomes increasingly crucial to understand their capabilities, limitations, and differences.

Long Form Question Answering · Question Generation · +1

Self-training with Few-shot Rationalization: Teacher Explanations Aid Student in Few-shot NLU

no code implementations 17 Sep 2021 Meghana Moorthy Bhat, Alessandro Sordoni, Subhabrata Mukherjee

While pre-trained language models have obtained state-of-the-art performance for several natural language understanding tasks, they are quite opaque in terms of their decision-making process.

Decision Making · Natural Language Understanding

Fake News Detection via NLP is Vulnerable to Adversarial Attacks

1 code implementation 5 Jan 2019 Zhixuan Zhou, Huankang Guan, Meghana Moorthy Bhat, Justin Hsu

In this paper, we argue that these models have the potential to misclassify fact-tampering fake news as well as under-written real news.

Fact Checking · Fake News Detection
