Search Results for author: Nathalie Japkowicz

Found 16 papers, 5 papers with code

Towards Ethical Content-Based Detection of Online Influence Campaigns

1 code implementation · 29 Aug 2019 · Evan Crothers, Nathalie Japkowicz, Herna Viktor

The detection of clandestine efforts to influence users in online communities is a challenging problem with significant active development.

Native Language Identification · Sentence

In BLOOM: Creativity and Affinity in Artificial Lyrics and Art

1 code implementation · 13 Jan 2023 · Evan Crothers, Herna Viktor, Nathalie Japkowicz

We apply a large multilingual language model (BLOOM-176B) in open-ended generation of Chinese song lyrics, and evaluate the resulting lyrics for coherence and creativity using human reviewers.

Language Modelling · Large Language Model
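
For intuition about the generation step described above, here is a minimal, hedged sketch of open-ended lyric generation with Hugging Face transformers. The paper evaluates the full BLOOM-176B model with human reviewers; the small bigscience/bloom-560m checkpoint, the prompt, and the decoding settings below are illustrative substitutions, not the paper's setup.

```python
# Minimal sketch of open-ended lyric generation with a small BLOOM checkpoint.
# The paper uses BLOOM-176B; "bigscience/bloom-560m" is substituted here so the
# example runs on commodity hardware. Prompt and decoding settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # stand-in for the full 176B model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "写一首关于月光的歌词：\n"  # "Write song lyrics about moonlight:"
inputs = tokenizer(prompt, return_tensors="pt")

# Open-ended sampling rather than greedy decoding, to encourage creative output.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.95,
    temperature=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```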

Adversarial Robustness of Neural-Statistical Features in Detection of Generative Transformers

1 code implementation · 2 Mar 2022 · Evan Crothers, Nathalie Japkowicz, Herna Viktor, Paula Branco

The detection of computer-generated text is an area of rapidly increasing significance as nascent generative models allow for efficient creation of compelling human-like text, which may be abused for the purposes of spam, disinformation, phishing, or online influence campaigns.

Adversarial Robustness · Adversarial Text

Contextual One-Class Classification in Data Streams

no code implementations · 9 Jul 2019 · Richard Hugh Moulton, Herna L. Viktor, Nathalie Japkowicz, João Gama

We conclude that the paradigm of contexts in data streams can be used to improve the performance of streaming one-class classifiers.

Classification · General Classification · +2

ReMix: Calibrated Resampling for Class Imbalance in Deep Learning

no code implementations · 3 Dec 2020 · Colin Bellinger, Roberto Corizzo, Nathalie Japkowicz

Class imbalance is a problem of significant importance in applied deep learning where trained models are exploited for decision support and automated decisions in critical areas such as health and medicine, transportation, and finance.

Imbalanced Classification

On the combined effect of class imbalance and concept complexity in deep learning

1 code implementation · 29 Jul 2021 · Kushankur Ghosh, Colin Bellinger, Roberto Corizzo, Bartosz Krawczyk, Nathalie Japkowicz

Structural concept complexity, class overlap, and data scarcity are some of the most important factors influencing the performance of classifiers under class imbalance conditions.

WATCH: Wasserstein Change Point Detection for High-Dimensional Time Series Data

no code implementations · 18 Jan 2022 · Kamil Faber, Roberto Corizzo, Bartlomiej Sniezynski, Michael Baron, Nathalie Japkowicz

Detecting relevant changes in dynamic time series data in a timely manner is crucially important for many data analysis tasks in real-world settings.

Change Point Detection · Human Activity Recognition · +3
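
The summary above only motivates the task; the sketch below is a toy illustration of the Wasserstein-distance idea named in the title, not the authors' WATCH algorithm (which targets high-dimensional data). It compares adjacent windows of a synthetic one-dimensional stream with scipy.stats.wasserstein_distance and flags a change when the distance exceeds a threshold; the window size, stride, and threshold are arbitrary assumptions.

```python
# Toy illustration of Wasserstein-distance-based change detection on a 1-D stream.
# NOT the paper's WATCH algorithm; it only shows the underlying idea of comparing
# the empirical distributions of adjacent windows. Parameters are arbitrary.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# Synthetic stream with a mean shift at t = 500.
stream = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])

window, threshold = 100, 1.0
for t in range(window, len(stream) - window, 25):
    ref = stream[t - window:t]   # reference window (past data)
    cur = stream[t:t + window]   # current window (incoming data)
    d = wasserstein_distance(ref, cur)
    if d > threshold:
        print(f"possible change point near t={t} (W1 distance = {d:.2f})")
        break
```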

Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods

no code implementations · 13 Oct 2022 · Evan Crothers, Nathalie Japkowicz, Herna Viktor

Detection of machine generated text is a key countermeasure for reducing abuse of NLG models, with significant technical challenges and numerous open problems.

Abuse Detection · Fairness · +2

System Design for an Integrated Lifelong Reinforcement Learning Agent for Real-Time Strategy Games

no code implementations · 8 Dec 2022 · Indranil Sur, Zachary Daniels, Abrar Rahman, Kamil Faber, Gianmarco J. Gallardo, Tyler L. Hayes, Cameron E. Taylor, Mustafa Burak Gurbuz, James Smith, Sahana Joshi, Nathalie Japkowicz, Michael Baron, Zsolt Kira, Christopher Kanan, Roberto Corizzo, Ajay Divakaran, Michael Piacentino, Jesse Hostetler, Aswin Raghavan

In this paper, we introduce the Lifelong Reinforcement Learning Components Framework (L2RLCF), which standardizes L2RL systems and assimilates different continual learning components (each addressing different aspects of the lifelong learning problem) into a unified system.

Continual Learning · Reinforcement Learning · +2

A Semi-Supervised Framework for Misinformation Detection

no code implementations · 22 Apr 2023 · Yueyang Liu, Zois Boukouvalas, Nathalie Japkowicz

The spread of misinformation in social media outlets has become a prevalent societal problem and is the cause of many kinds of social unrest.

Misinformation

Faithful to Whom? Questioning Interpretability Measures in NLP

no code implementations · 13 Aug 2023 · Evan Crothers, Herna Viktor, Nathalie Japkowicz

A common approach to quantifying model interpretability is to calculate faithfulness metrics based on iteratively masking input tokens and measuring how much the predicted label changes as a result.
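
As a concrete, hedged illustration of that masking procedure (not the specific metric suite examined in the paper), the sketch below masks tokens in decreasing order of a supplied importance score and records how the predicted-class probability degrades. predict_proba and the importance values are hypothetical placeholders for a real classifier and a real attribution method.

```python
# Minimal sketch of a masking-based faithfulness measure: mask the tokens an
# explanation ranks as most important, one at a time, and record how the
# predicted-class probability drops. `predict_proba` and `importance` below are
# hypothetical stand-ins for a real classifier and a real attribution method.
import numpy as np

def faithfulness_curve(tokens, importance, predict_proba, mask_token="[MASK]"):
    """Predicted-class probability after masking 0, 1, 2, ... top-ranked tokens."""
    order = np.argsort(importance)[::-1]        # most important tokens first
    base_probs = predict_proba(tokens)
    label = int(np.argmax(base_probs))          # class predicted on the full input
    masked = list(tokens)
    curve = [float(base_probs[label])]
    for idx in order:
        masked[idx] = mask_token                # remove one more "important" token
        curve.append(float(predict_proba(masked)[label]))
    return curve                                # a faithful ranking drops quickly

# Toy stand-in classifier: class-1 probability is the fraction of "good" tokens.
def predict_proba(tokens):
    score = sum(tok == "good" for tok in tokens) / max(len(tokens), 1)
    return np.array([1.0 - score, score])

tokens = ["good", "good", "movie", "good"]
importance = [0.9, 0.8, 0.1, 0.7]               # hypothetical attribution scores
print(faithfulness_curve(tokens, importance, predict_proba))
# -> [0.75, 0.5, 0.25, 0.0, 0.0]
```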
