Search Results for author: Gillian Dobbie

Found 14 papers, 6 papers with code

Can Large Language Models Learn Independent Causal Mechanisms?

no code implementations • 4 Feb 2024 • Gaël Gendron, Bao Trung Nguyen, Alex Yuxuan Peng, Michael Witbrock, Gillian Dobbie

We show that such causal constraints can improve out-of-distribution performance on abstract and causal reasoning tasks.

Language Modelling

Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies

no code implementations • 3 Dec 2023 • Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan, Rostam J. Neuwirth

The importance and novelty of this survey are that it explores the perspective of under-represented societies.

Fairness

Poison is Not Traceless: Fully-Agnostic Detection of Poisoning Attacks

no code implementations • 24 Oct 2023 • Xinglong Chang, Katharina Dost, Gillian Dobbie, Jörg Wicker

This paper presents a novel fully-agnostic framework, DIVA (Detecting InVisible Attacks), that detects attacks solely relying on analyzing the potentially poisoned data set.

Fast Adversarial Label-Flipping Attack on Tabular Data

no code implementations • 16 Oct 2023 • Xinglong Chang, Gillian Dobbie, Jörg Wicker

To demonstrate this risk is inherited in the adversary's objective, we propose FALFA (Fast Adversarial Label-Flipping Attack), a novel efficient attack for crafting adversarial labels.
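To illustrate the general idea of a label-flipping poisoning attack (not the FALFA optimisation itself, which chooses flips adversarially), a minimal sketch that flips a fraction of labels uniformly at random might look like this; `flip_labels` and its parameters are hypothetical names for illustration:

```python
import numpy as np

def flip_labels(y, fraction, num_classes, seed=0):
    """Toy label-flipping poisoner: move a random fraction of labels
    to a different class. Illustrative only -- FALFA *optimises* which
    labels to flip, whereas this sketch flips uniformly at random."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y).copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # shift each chosen label by a random non-zero offset mod num_classes,
    # so every flipped label is guaranteed to change class
    offsets = rng.integers(1, num_classes, size=n_flip)
    y[idx] = (y[idx] + offsets) % num_classes
    return y
```

Training a classifier on the poisoned labels then degrades its accuracy, which is the threat model such attacks exploit.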

Challenges in Annotating Datasets to Quantify Bias in Under-represented Society

no code implementations • 11 Sep 2023 • Vithya Yogarajan, Gillian Dobbie, Timothy Pistotti, Joshua Bensemann, Kobe Knowles

Recent advances in artificial intelligence, including the development of highly sophisticated large language models (LLMs), have proven beneficial in many real-world applications.

Gender Classification

Large Language Models Are Not Strong Abstract Reasoners

1 code implementation • 31 May 2023 • Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie

We perform extensive evaluations of state-of-the-art LLMs, showing that their performance on abstract reasoning is currently very limited compared with other natural language tasks, even when applying techniques shown to improve performance on other NLP tasks.

Common Sense Reasoning · Memorization · +1

Neuromodulation Gated Transformer

1 code implementation • 5 May 2023 • Kobe Knowles, Joshua Bensemann, Diana Benavides-Prado, Vithya Yogarajan, Michael Witbrock, Gillian Dobbie, Yang Chen

We introduce a novel architecture, the Neuromodulation Gated Transformer (NGT), which is a simple implementation of neuromodulation in transformers via a multiplicative effect.
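A generic multiplicative gate over hidden activations gives a feel for what "neuromodulation via a multiplicative effect" means; this is an illustrative sketch assuming a sigmoid gate, not the actual NGT layer (the function name and parameters are hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neuromodulated_gate(h, W_g, b_g):
    """Multiplicative gating of hidden activations h of shape (n, d):
    a learned gate g = sigmoid(h @ W_g + b_g) scales each unit,
    loosely mimicking how neuromodulators amplify or suppress
    neural signals. Illustrative sketch only."""
    g = sigmoid(h @ W_g + b_g)
    return g * h
```

With zero gate weights the gate outputs 0.5 everywhere, so the layer halves its input; training moves the gate toward selectively amplifying or suppressing units.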

Effectiveness of Debiasing Techniques: An Indigenous Qualitative Analysis

no code implementations • 17 Apr 2023 • Vithya Yogarajan, Gillian Dobbie, Henry Gouk

An indigenous perspective on the effectiveness of debiasing techniques for pre-trained language models (PLMs) is presented in this paper.

Disentanglement of Latent Representations via Causal Interventions

1 code implementation • 2 Feb 2023 • Gaël Gendron, Michael Witbrock, Gillian Dobbie

Following this assumption, we introduce a new method for disentanglement inspired by causal dynamics that combines causality theory with vector-quantized variational autoencoders.

Disentanglement · Retrieval

A Survey of Methods, Challenges and Perspectives in Causality

no code implementations • 1 Feb 2023 • Gaël Gendron, Michael Witbrock, Gillian Dobbie

Deep Learning models have shown success in a large variety of tasks by extracting correlation patterns from high-dimensional data but still struggle when generalizing out of their initial distribution.

Source Inference Attacks in Federated Learning

1 code implementation • 13 Sep 2021 • Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Xuyun Zhang

However, existing MIAs ignore the source of a training member, i.e., which client owns the training member; it is essential to explore source privacy in FL beyond the membership privacy of examples from all clients.

Federated Learning · Inference Attack

Membership Inference Attacks on Machine Learning: A Survey

2 code implementations • 14 Mar 2021 • Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang

In recent years, MIAs have been shown to be effective on various ML models, e.g., classification models and generative models.

BIG-bench Machine Learning · Fairness · +4
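As context for what a membership inference attack does, the classic confidence-threshold baseline predicts "member" when a model is unusually confident on an example, exploiting the tendency of models to be more confident on their training data. A minimal sketch (the survey covers far stronger attacks; the function name and threshold are illustrative):

```python
import numpy as np

def confidence_threshold_mia(confidences, threshold=0.9):
    """Baseline MIA: flag an example as a training-set member when the
    model's confidence on its predicted class exceeds a threshold.
    Illustrative baseline only, not a specific attack from the survey."""
    return np.asarray(confidences) >= threshold
```

Stronger attacks replace the fixed threshold with shadow models or per-example calibrated scores, but the membership signal being exploited is the same.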

Recurring Concept Meta-learning for Evolving Data Streams

no code implementations • 21 May 2019 • Robert Anderson, Yun Sing Koh, Gillian Dobbie, Albert Bifet

The novelty of ECPF is in how it uses similarity of classifications on new data, between a new classifier and existing classifiers, to quickly identify the best classifier to reuse.

General Classification · Meta-Learning
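The reuse step described above, comparing a new classifier's predictions on recent data with those of stored classifiers and reusing the one that agrees most, can be sketched as follows; `pick_classifier_to_reuse` is a hypothetical helper, not the actual ECPF implementation:

```python
import numpy as np

def pick_classifier_to_reuse(new_preds, stored_preds_list):
    """Return the index of the stored classifier whose predictions on
    the recent data window agree most with the new classifier's.
    Illustrative sketch of agreement-based reuse only."""
    agreements = [np.mean(np.asarray(new_preds) == np.asarray(p))
                  for p in stored_preds_list]
    return int(np.argmax(agreements))
```

High agreement suggests the recurring concept matches one seen before, so its stored classifier can be reused instead of training from scratch.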
