Search Results for author: Pushkar Mishra

Found 22 papers, 10 papers with code

Insights on Disagreement Patterns in Multimodal Safety Perception across Diverse Rater Groups

no code implementations22 Oct 2024 Charvi Rastogi, Tian Huey Teh, Pushkar Mishra, Roma Patel, Zoe Ashwood, Aida Mostafazadeh Davani, Mark Diaz, Michela Paganini, Alicia Parrish, Ding Wang, Vinodkumar Prabhakaran, Lora Aroyo, Verena Rieser

Our study shows that (1) there are significant differences across demographic groups (including intersectional groups) in how severe they assess the harm to be, and these differences vary across types of safety violations; (2) the diverse rater pool captures annotation patterns substantially different from those of expert raters trained on a specific set of safety policies; and (3) the differences we observe in T2I safety are distinct from previously documented group-level differences in text-based safety tasks.

Yesterday's News: Benchmarking Multi-Dimensional Out-of-Distribution Generalisation of Misinformation Detection Models

1 code implementation12 Oct 2024 Ivo Verhoeven, Pushkar Mishra, Ekaterina Shutova

This paper introduces misinfo-general, a benchmark dataset for evaluating misinformation models' ability to perform out-of-distribution generalisation.

Benchmarking Misinformation

A (More) Realistic Evaluation Setup for Generalisation of Community Models on Malicious Content Detection

1 code implementation2 Apr 2024 Ivo Verhoeven, Pushkar Mishra, Rahel Beloch, Helen Yannakoudakis, Ekaterina Shutova

This mismatch can be partially attributed to the limitations of current evaluation setups that neglect the rapid evolution of online content and the underlying social graph.

Misinformation

Investigating the Robustness of Sequential Recommender Systems Against Training Data Perturbations

no code implementations24 Jul 2023 Filippo Betello, Federico Siciliano, Pushkar Mishra, Fabrizio Silvestri

However, their robustness in the face of perturbations in training data remains a largely understudied yet critical issue.

Recommendation Systems

Scientific and Creative Analogies in Pretrained Language Models

2 code implementations28 Nov 2022 Tamara Czinczoll, Helen Yannakoudakis, Pushkar Mishra, Ekaterina Shutova

This paper examines the encoding of analogy in large-scale pretrained language models, such as BERT and GPT-2.

ReFactor GNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective

no code implementations20 Jul 2022 Yihong Chen, Pushkar Mishra, Luca Franceschi, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel

Factorisation-based Models (FMs), such as DistMult, have enjoyed enduring success for Knowledge Graph Completion (KGC) tasks, often outperforming Graph Neural Networks (GNNs).

Knowledge Graph Completion
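As background to the factorisation-based models the paper revisits, DistMult scores a knowledge-graph triple (head, relation, tail) as a trilinear product of the three embeddings. A minimal sketch (the embedding dimension and variable names here are illustrative, not taken from the paper):

```python
import numpy as np

def distmult_score(head, rel, tail):
    """DistMult plausibility score: the sum over dimensions of the
    element-wise product of head, relation, and tail embeddings."""
    return float(np.sum(head * rel * tail))

# Illustrative usage with random 8-dimensional embeddings.
rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=8) for _ in range(3))
score = distmult_score(h, r, t)
```

Because the relation embedding is a diagonal matrix in effect, DistMult is symmetric in head and tail, one of the modelling limitations that motivates comparing it against message-passing GNNs.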

Prescriptive and Descriptive Approaches to Machine-Learning Transparency

no code implementations27 Apr 2022 David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina Zvyagina

We further propose a preliminary approach, called Method Cards, which aims to increase the transparency and reproducibility of ML systems by providing prescriptive documentation of commonly used ML methods and techniques.

BIG-bench Machine Learning Descriptive +2

Ruddit: Norms of Offensiveness for English Reddit Comments

1 code implementation ACL 2021 Rishav Hada, Sohi Sudhir, Pushkar Mishra, Helen Yannakoudakis, Saif M. Mohammad, Ekaterina Shutova

On social media platforms, hateful and offensive language negatively impacts the mental well-being of users and the participation of people from diverse backgrounds.

Modeling Users and Online Communities for Abuse Detection: A Position on Ethics and Explainability

no code implementations Findings (EMNLP) 2021 Pushkar Mishra, Helen Yannakoudakis, Ekaterina Shutova

Specifically, we review and analyze state-of-the-art methods that leverage user or community information to enhance the understanding and detection of abusive language.

Abuse Detection Abusive Language +2

Meta-Learning with Sparse Experience Replay for Lifelong Language Learning

1 code implementation10 Sep 2020 Nithin Holla, Pushkar Mishra, Helen Yannakoudakis, Ekaterina Shutova

Lifelong learning requires models that can continuously learn from sequential streams of data without suffering catastrophic forgetting due to shifts in data distributions.

Continual Learning Meta-Learning +3
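The "sparse experience replay" in the title refers to replaying a small stored sample of past examples only occasionally while training on a stream. A minimal sketch of one common realisation of this idea, a reservoir-sampled buffer with periodic replay (the class, parameters, and schedule here are illustrative assumptions, not the paper's exact algorithm):

```python
import random

class SparseReplayBuffer:
    """Keeps a bounded uniform random sample of a data stream
    (reservoir sampling) and replays a small batch only every
    `replay_every` steps, rather than at every update."""

    def __init__(self, capacity=100, replay_every=10, seed=0):
        self.capacity = capacity
        self.replay_every = replay_every
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        """Add one streamed example, evicting uniformly at random
        once the buffer is full."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def maybe_sample(self, batch_size=4):
        """Return a replay batch on sparse steps, else None."""
        if self.seen % self.replay_every == 0 and self.buffer:
            return self.rng.sample(self.buffer, min(batch_size, len(self.buffer)))
        return None
```

Mixing these occasional replay batches into the training loss is what counteracts catastrophic forgetting as the data distribution shifts.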

Graph-based Modeling of Online Communities for Fake News Detection

1 code implementation14 Aug 2020 Shantanu Chandra, Pushkar Mishra, Helen Yannakoudakis, Madhav Nimishakavi, Marzieh Saeidi, Ekaterina Shutova

Existing research has modeled the structure, style, content, and patterns in dissemination of online posts, as well as the demographic traits of users who interact with them.

Fake News Detection

Joint Modelling of Emotion and Abusive Language Detection

no code implementations ACL 2020 Santhosh Rajamanickam, Pushkar Mishra, Helen Yannakoudakis, Ekaterina Shutova

The rise of online communication platforms has been accompanied by some undesirable effects, such as the proliferation of aggressive and abusive behaviour online.

Abuse Detection Abusive Language +1

Author Profiling for Hate Speech Detection

no code implementations14 Feb 2019 Pushkar Mishra, Marco del Tredici, Helen Yannakoudakis, Ekaterina Shutova

The rapid growth of social media in recent years has fed into some highly undesirable phenomena such as proliferation of abusive and offensive language on the Internet.

16k Author Profiling +1

Neural Character-based Composition Models for Abuse Detection

no code implementations WS 2018 Pushkar Mishra, Helen Yannakoudakis, Ekaterina Shutova

Current state-of-the-art approaches to abusive language detection, based on recurrent neural networks, do not explicitly address this problem and resort to a generic out-of-vocabulary (OOV) embedding for unseen words.

Abuse Detection Abusive Language
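The alternative to a single generic OOV embedding is to compose a representation for an unseen word from its characters. A minimal fastText-style sketch, averaging character n-gram vectors (this illustrates the general idea of character-based composition, not the paper's specific model):

```python
import numpy as np

def char_ngrams(word, n=3):
    """Character n-grams of `word` with '<' and '>' boundary markers."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def compose_embedding(word, ngram_vectors, dim=16):
    """Build a word embedding by averaging the vectors of its known
    character n-grams, so even unseen words get a content-based
    representation instead of one shared OOV vector."""
    vecs = [ngram_vectors[g] for g in char_ngrams(word) if g in ngram_vectors]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```

Because obfuscated abuse often shares character n-grams with its unobfuscated form (e.g. deliberate misspellings), a composed embedding stays close to the original word's embedding where a generic OOV vector carries no signal.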

Author Profiling for Abuse Detection

1 code implementation COLING 2018 Pushkar Mishra, Marco del Tredici, Helen Yannakoudakis, Ekaterina Shutova

The rapid growth of social media in recent years has fed into some highly undesirable phenomena such as proliferation of hateful and offensive language on the Internet.

16k Abuse Detection +1
