Search Results for author: Nathalie Baracaldo

Found 24 papers, 6 papers with code

Enhancing In-context Learning via Linear Probe Calibration

1 code implementation22 Jan 2024 Momin Abbas, Yi Zhou, Parikshit Ram, Nathalie Baracaldo, Horst Samulowitz, Theodoros Salonidis, Tianyi Chen

However, applying ICL in real cases does not scale with the number of samples, and lacks robustness to different prompt templates and demonstration permutations.

In-Context Learning

FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs

no code implementations12 Dec 2023 Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo

We evaluate the performance-fairness trade-off for SISA, and empirically demsontrate that SISA can indeed reduce fairness in LLMs.

Fairness Unsupervised Pre-training

Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks

no code implementations7 Dec 2023 Shuli Jiang, Swanand Ravindra Kadhe, Yi Zhou, Ling Cai, Nathalie Baracaldo

Growing applications of large language models (LLMs) trained by a third party raise serious concerns on the security vulnerability of LLMs. It has been demonstrated that malicious actors can covertly exploit these vulnerabilities in LLMs through poisoning attacks aimed at generating undesirable outputs.

Data Poisoning object-detection +2

LESS-VFL: Communication-Efficient Feature Selection for Vertical Federated Learning

no code implementations3 May 2023 Timothy Castiglia, Yi Zhou, Shiqiang Wang, Swanand Kadhe, Nathalie Baracaldo, Stacy Patterson

As part of the training, the parties wish to remove unimportant features in the system to improve generalization, efficiency, and explainability.

feature selection Vertical Federated Learning

Federated XGBoost on Sample-Wise Non-IID Data

no code implementations3 Sep 2022 Katelinh Jones, Yuya Jeremy Ong, Yi Zhou, Nathalie Baracaldo

Federated Learning (FL) is a paradigm for jointly training machine learning algorithms in a decentralized manner which allows for parties to communicate with an aggregator to create and train a model, without exposing the underlying raw data distribution of the local parties involved in the training process.

Federated Learning

Federated Unlearning: How to Efficiently Erase a Client in FL?

1 code implementation12 Jul 2022 Anisa Halimi, Swanand Kadhe, Ambrish Rawat, Nathalie Baracaldo

With privacy legislation empowering the users with the right to be forgotten, it has become essential to make a model amenable for forgetting some of its training data.

Federated Learning

FLoRA: Single-shot Hyper-parameter Optimization for Federated Learning

no code implementations15 Dec 2021 Yi Zhou, Parikshit Ram, Theodoros Salonidis, Nathalie Baracaldo, Horst Samulowitz, Heiko Ludwig

We address the relatively unexplored problem of hyper-parameter optimization (HPO) for federated learning (FL-HPO).

Federated Learning

Privacy-Preserving Machine Learning: Methods, Challenges and Directions

no code implementations10 Aug 2021 Runhua Xu, Nathalie Baracaldo, James Joshi

In particular, existing PPML research cross-cut ML, systems and applications design, as well as security and privacy areas; hence, there is a critical need to understand state-of-the-art research, related challenges and a research roadmap for future research in PPML area.

Attribute BIG-bench Machine Learning +1

LEGATO: A LayerwisE Gradient AggregaTiOn Algorithm for Mitigating Byzantine Attacks in Federated Learning

no code implementations26 Jul 2021 Kamala Varma, Yi Zhou, Nathalie Baracaldo, Ali Anwar

This global model can be corrupted when Byzantine workers send malicious gradients, which necessitates robust methods for aggregating gradients that mitigate the adverse effects of Byzantine inputs.

Federated Learning

FedV: Privacy-Preserving Federated Learning over Vertically Partitioned Data

no code implementations5 Mar 2021 Runhua Xu, Nathalie Baracaldo, Yi Zhou, Ali Anwar, James Joshi, Heiko Ludwig

We empirically demonstrate the applicability for multiple types of ML models and show a reduction of 10%-70% of training time and 80% to 90% in data transfer with respect to the state-of-the-art approaches.

Federated Learning Privacy Preserving

Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning

no code implementations1 Feb 2021 Syed Zawad, Ahsan Ali, Pin-Yu Chen, Ali Anwar, Yi Zhou, Nathalie Baracaldo, Yuan Tian, Feng Yan

Data heterogeneity has been identified as one of the key features in federated learning but often overlooked in the lens of robustness to adversarial attacks.

Federated Learning

Adaptive Histogram-Based Gradient Boosted Trees for Federated Learning

no code implementations11 Dec 2020 Yuya Jeremy Ong, Yi Zhou, Nathalie Baracaldo, Heiko Ludwig

This approach makes the use of gradient boosted trees practical in enterprise federated learning.

Federated Learning

Mitigating Bias in Federated Learning

no code implementations4 Dec 2020 Annie Abay, Yi Zhou, Nathalie Baracaldo, Shashank Rajamoni, Ebube Chuba, Heiko Ludwig

As methods to create discrimination-aware models develop, they focus on centralized ML, leaving federated learning (FL) unexplored.

Fairness Federated Learning

TiFL: A Tier-based Federated Learning System

no code implementations25 Jan 2020 Zheng Chai, Ahsan Ali, Syed Zawad, Stacey Truex, Ali Anwar, Nathalie Baracaldo, Yi Zhou, Heiko Ludwig, Feng Yan, Yue Cheng

To this end, we propose TiFL, a Tier-based Federated Learning System, which divides clients into tiers based on their training performance and selects clients from the same tier in each training round to mitigate the straggler problem caused by heterogeneity in resource and data quantity.

Federated Learning

HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning

no code implementations12 Dec 2019 Runhua Xu, Nathalie Baracaldo, Yi Zhou, Ali Anwar, Heiko Ludwig

Participants in a federated learning process cooperatively train a model by exchanging model parameters instead of the actual training data, which they might want to keep private.

Federated Learning Privacy Preserving

Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering

1 code implementation9 Nov 2018 Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, Biplav Srivastava

While machine learning (ML) models are being increasingly trusted to make decisions in different and varying areas, the safety of systems using such models has become an increasing concern.

Clustering

Adversarial Robustness Toolbox v1.0.0

5 code implementations3 Jul 2018 Maria-Irina Nicolae, Mathieu Sinn, Minh Ngoc Tran, Beat Buesser, Ambrish Rawat, Martin Wistuba, Valentina Zantedeschi, Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian M. Molloy, Ben Edwards

Defending Machine Learning models involves certifying and verifying model robustness and model hardening with approaches such as pre-processing inputs, augmenting training data with adversarial samples, and leveraging runtime detection methods to flag any inputs that might have been modified by an adversary.

Adversarial Robustness BIG-bench Machine Learning +2

Cannot find the paper you are looking for? You can Submit a new open access paper.