Search Results for author: Christophe Dupuy

Found 13 papers, 3 papers with code

Coordinated Replay Sample Selection for Continual Federated Learning

no code implementations 23 Oct 2023 Jack Good, Jimit Majmudar, Christophe Dupuy, Jixuan Wang, Charith Peris, Clement Chung, Richard Zemel, Rahul Gupta

Continual Federated Learning (CFL) combines Federated Learning (FL), in which a central model is trained across many client devices that may not share their data, with Continual Learning (CL), in which a model is trained on a continual stream of data without retaining the entire history.

Continual Learning · Federated Learning
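As a hedged illustration of the two ingredients described above (not the paper's coordinated selection method), federated averaging plus a reservoir-sampled replay buffer can be sketched as:

```python
import random

def fed_avg(client_weights):
    """Average client model weights into a central model (FedAvg-style)."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

class ReplayBuffer:
    """Fixed-size replay buffer via reservoir sampling: keeps a uniform
    random sample of the stream without storing the entire history."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(item)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = item
```

Each client would maintain its own buffer and train on current data mixed with replayed samples; the paper's contribution is in coordinating which samples the clients select.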

FLIRT: Feedback Loop In-context Red Teaming

no code implementations 8 Aug 2023 Ninareh Mehrabi, Palash Goyal, Christophe Dupuy, Qian Hu, Shalini Ghosh, Richard Zemel, Kai-Wei Chang, Aram Galstyan, Rahul Gupta

Here we propose an automatic red teaming framework that evaluates a given model and exposes its vulnerabilities to unsafe and inappropriate content generation.

In-Context Learning · Response Generation
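The feedback loop can be caricatured as: an attacker model proposes a prompt from its current in-context examples, the target model responds, a safety scorer rates the response, and high-scoring prompts feed back into the example pool. A hedged, model-free sketch (all callables are stand-ins, not the paper's components):

```python
def red_team_loop(attacker, target, scorer, seed_prompts, rounds, pool_size=5):
    """Illustrative in-context red-teaming feedback loop.

    attacker: maps a list of in-context example prompts to a new prompt.
    target:   maps a prompt to a response.
    scorer:   maps a response to an "unsafety" score (higher = worse).
    Returns every (prompt, score) pair tried.
    """
    pool = [(p, 0.0) for p in seed_prompts]
    found = []
    for _ in range(rounds):
        prompt = attacker([p for p, _ in pool])
        response = target(prompt)
        score = scorer(response)
        found.append((prompt, score))
        # keep only the highest-scoring prompts as in-context examples
        pool.append((prompt, score))
        pool = sorted(pool, key=lambda ps: ps[1], reverse=True)[:pool_size]
    return found
```

In practice the attacker and target would be language models and the scorer a safety classifier; the stubs here only show the control flow.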

Differentially Private Decoding in Large Language Models

no code implementations 26 May 2022 Jimit Majmudar, Christophe Dupuy, Charith Peris, Sami Smaili, Rahul Gupta, Richard Zemel

Recent large-scale natural language processing (NLP) systems use a pre-trained Large Language Model (LLM) on massive and diverse corpora as a head start.

Language Modelling · Large Language Model +1
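One simple way to privatize decoding, in the spirit of this paper's title, is to perturb the model's next-token distribution before sampling. A minimal sketch, assuming a linear interpolation with the uniform distribution over the vocabulary (the function name and the parameter `lam` are illustrative, not the paper's API):

```python
def dp_decode_distribution(probs, lam):
    """Interpolate a next-token distribution with the uniform distribution
    over the vocabulary. lam=1 recovers the original distribution; smaller
    lam injects more randomness and yields a stronger privacy guarantee."""
    v = len(probs)
    return [lam * p + (1.0 - lam) / v for p in probs]
```

Because every token receives at least `(1 - lam) / v` mass, no single training example can make any token's decoding probability arbitrarily more likely, which is what a differential-privacy argument can bound.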

Canary Extraction in Natural Language Understanding Models

no code implementations ACL 2022 Rahil Parikh, Christophe Dupuy, Rahul Gupta

In this work, we present a version of such an attack by extracting canaries inserted in NLU training data.

Natural Language Understanding
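Canary attacks of this kind are typically evaluated by ranking the inserted canary's loss against random candidate sequences. A minimal sketch of the standard exposure metric (Carlini et al., 2019), assuming per-sequence losses have already been computed elsewhere:

```python
import math

def exposure(canary_nll, candidate_nlls):
    """Exposure = log2(|ranked set|) - log2(rank of the canary), where the
    ranked set is the canary plus the random candidates and a lower loss
    means a better rank. Higher exposure indicates stronger memorization."""
    rank = 1 + sum(1 for nll in candidate_nlls if nll < canary_nll)
    total = len(candidate_nlls) + 1  # candidates plus the canary itself
    return math.log2(total) - math.log2(rank)
```

A canary that the model assigns a lower loss than every random candidate gets the maximum exposure; one ranked last gets exposure zero.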

Learnings from Federated Learning in the Real world

no code implementations 8 Feb 2022 Christophe Dupuy, Tanya G. Roosta, Leo Long, Clement Chung, Rahul Gupta, Salman Avestimehr

In this study, we evaluate the impact of such idiosyncrasies on Natural Language Understanding (NLU) models trained using FL.

Federated Learning · Natural Language Understanding

An Efficient DP-SGD Mechanism for Large Scale NLP Models

no code implementations 14 Jul 2021 Christophe Dupuy, Radhika Arava, Rahul Gupta, Anna Rumshisky

However, the data used to train NLU models may contain private information such as addresses or phone numbers, particularly when drawn from human subjects.

Natural Language Understanding · Privacy Preserving
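DP-SGD itself clips each example's gradient to a fixed norm and adds Gaussian noise before the parameter update; the paper's contribution is making this efficient at NLP scale. A minimal pure-Python sketch of one step (helper names are illustrative):

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm, noise_mult, lr, params, rng=None):
    """One DP-SGD update: clip each per-example gradient to clip_norm,
    sum, add Gaussian noise with std noise_mult * clip_norm, average,
    and take a gradient-descent step on params."""
    rng = rng or random.Random(0)
    n = len(per_example_grads)
    d = len(params)
    summed = [0.0] * d
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(d):
            summed[i] += g[i] * scale
    noisy_avg = [(summed[i] + rng.gauss(0.0, noise_mult * clip_norm)) / n
                 for i in range(d)]
    return [params[i] - lr * noisy_avg[i] for i in range(d)]
```

Clipping bounds any single example's influence on the update, and the Gaussian noise converts that bound into a formal (epsilon, delta) privacy guarantee via the moments accountant.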

FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks

1 code implementation Findings (NAACL) 2022 Bill Yuchen Lin, Chaoyang He, Zihang Zeng, Hulin Wang, Yufen Huang, Christophe Dupuy, Rahul Gupta, Mahdi Soltanolkotabi, Xiang Ren, Salman Avestimehr

Increasing concerns and regulations about data privacy and sparsity necessitate the study of privacy-preserving, decentralized learning methods for natural language processing (NLP) tasks.

Benchmarking · Federated Learning +5

ADePT: Auto-encoder based Differentially Private Text Transformation

2 code implementations EACL 2021 Satyapriya Krishna, Rahul Gupta, Christophe Dupuy

We prove the theoretical privacy guarantee of our algorithm and assess its privacy leakage under Membership Inference Attacks (MIA) (Shokri et al., 2017) on models trained with transformed data.

Self-Attention Gazetteer Embeddings for Named-Entity Recognition

no code implementations 8 Apr 2020 Stanislav Peshterliev, Christophe Dupuy, Imre Kiss

Recent attempts to ingest external knowledge into neural models for named-entity recognition (NER) have exhibited mixed results.

named-entity-recognition · Named Entity Recognition +1

Learning Determinantal Point Processes in Sublinear Time

no code implementations 19 Oct 2016 Christophe Dupuy, Francis Bach

We propose a new class of determinantal point processes (DPPs) which can be manipulated for inference and parameter learning in potentially sublinear time in the number of items.

Document Summarization · Point Processes
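For a standard L-ensemble DPP, the probability of selecting a subset S of items is det(L_S) / det(L + I), where L_S is the kernel restricted to S. A small self-contained sketch over plain Python lists (the sublinear-time structure proposed in the paper is not reproduced here):

```python
def det(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    n = len(m)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def dpp_prob(L, subset):
    """P(subset) = det(L_subset) / det(L + I) for an L-ensemble DPP."""
    n = len(L)
    L_S = [[L[i][j] for j in subset] for i in subset]
    L_I = [[L[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
           for i in range(n)]
    num = det(L_S) if subset else 1.0  # det of the empty matrix is 1
    return num / det(L_I)
```

Off-diagonal kernel entries encode similarity, so similar items are unlikely to co-occur in a sampled subset, which is why DPPs suit tasks like document summarization.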

Decentralized Topic Modelling with Latent Dirichlet Allocation

no code implementations 5 Oct 2016 Igor Colin, Christophe Dupuy

Privacy preserving networks can be modelled as decentralized networks (e.g., sensors, connected objects, smartphones), where communication between nodes of the network is not controlled by an all-knowing, central node.

Privacy Preserving · Topic Models
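A hedged sketch of the kind of peer-to-peer averaging such decentralized networks rely on (generic pairwise gossip, not the paper's specific LDA inference): each pair of neighbors repeatedly replaces its values with their average, and a connected network converges to the global mean with no central node.

```python
def gossip_average(values, edges, rounds):
    """Synchronous pairwise gossip: in each round, every listed edge
    (i, j) replaces both endpoints' values with their average. Each
    exchange preserves the total sum, so on a connected graph all
    values converge to the global mean without a central coordinator."""
    vals = list(values)
    for _ in range(rounds):
        for i, j in edges:
            avg = (vals[i] + vals[j]) / 2.0
            vals[i] = avg
            vals[j] = avg
    return vals
```

In a decentralized topic model, the quantities being gossiped would be sufficient statistics of the local LDA updates rather than raw scalars.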

Online but Accurate Inference for Latent Variable Models with Local Gibbs Sampling

no code implementations 8 Mar 2016 Christophe Dupuy, Francis Bach

We first propose a unified treatment of online inference for latent variable models from a non-canonical exponential family, and draw explicit links between several previously proposed frequentist and Bayesian methods.

Bayesian Inference · Variational Inference
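The common pattern behind online inference for exponential-family latent variable models is a stochastic approximation: keep a running average of expected sufficient statistics and re-estimate the parameters after every observation. A generic sketch in the style of online EM (Cappé and Moulines, 2009); the callback signatures are illustrative:

```python
def online_em(stream, e_step, m_step, s0, step_size):
    """Generic online EM: s tracks a running average of expected
    sufficient statistics, updated with step size rho_t, and theta
    is re-estimated from s after each observation."""
    s = list(s0)
    theta = m_step(s)
    for t, x in enumerate(stream, start=1):
        rho = step_size(t)
        s_hat = e_step(x, theta)  # expected statistics for observation x
        s = [(1 - rho) * si + rho * shi for si, shi in zip(s, s_hat)]
        theta = m_step(s)
    return theta
```

With step size 1/t and a trivial model whose sufficient statistic is the observation itself, this reduces to the running mean; the local Gibbs sampling in the paper would replace the exact expectation inside `e_step`.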
