no code implementations • EMNLP (insights) 2020 • Meghana Moorthy Bhat, Srinivasan Parthasarathy
We empirically study the effectiveness of machine-generated fake news detectors by examining each model's sensitivity to different synthetic perturbations at test time.
no code implementations • Findings (EMNLP) 2021 • Meghana Moorthy Bhat, Saghar Hosseini, Ahmed Hassan Awadallah, Paul Bennett, Weisheng Li
Specifically, the lack of a corpus, the sparsity of toxicity in enterprise emails, and the absence of well-defined criteria for annotating toxic conversations have prevented researchers from addressing the problem at scale.
no code implementations • EMNLP 2021 • Meghana Moorthy Bhat, Alessandro Sordoni, Subhabrata Mukherjee
While pre-trained language models have obtained state-of-the-art performance for several natural language understanding tasks, they are quite opaque in terms of their decision-making process.
no code implementations • 15 Sep 2023 • Meghana Moorthy Bhat, Rui Meng, Ye Liu, Yingbo Zhou, Semih Yavuz
As we embark on a new era of LLMs, it becomes increasingly crucial to understand their capabilities, limitations, and differences.
1 code implementation • 5 Jan 2019 • Zhixuan Zhou, Huankang Guan, Meghana Moorthy Bhat, Justin Hsu
In this paper, we argue that these models have the potential to misclassify fact-tampering fake news as well as under-written real news articles.