Search Results for author: Pierre Stock

Found 17 papers, 10 papers with code

LLM-QAT: Data-Free Quantization Aware Training for Large Language Models

no code implementations • 29 May 2023 • Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, Vikas Chandra

Several post-training quantization methods have been applied to large language models (LLMs), and have been shown to perform well down to 8-bits.

Data Free Quantization
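
As background for the 8-bit post-training quantization the abstract refers to, here is a minimal sketch of symmetric per-channel weight quantization; the shapes, bit-width and helper name are illustrative and this is not the LLM-QAT procedure itself.

```python
import numpy as np

def quantize_dequantize_int8(w: np.ndarray) -> np.ndarray:
    """Symmetric per-row int8 quantization followed by dequantization."""
    # One scale per output channel (row), chosen so the largest weight maps to 127.
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)        # guard against all-zero rows
    q = np.clip(np.round(w / scale), -128, 127)     # values on the int8 grid
    return q * scale                                # dequantized approximation of w

w = np.random.randn(4, 16).astype(np.float32)
w_hat = quantize_dequantize_int8(w)
print("max abs error:", np.abs(w - w_hat).max())    # small, which is why 8-bit PTQ holds up
```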

Evaluating Privacy Leakage in Split Learning

no code implementations • 22 May 2023 • Xinchi Qiu, Ilias Leontiadis, Luca Melis, Alex Sablayrolles, Pierre Stock

In particular, on-device machine learning allows us to avoid sharing raw data with a third-party server during inference.

Privacy Preserving
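
The on-device setting the abstract describes is often called split inference: the client runs the first layers locally, so only intermediate activations, never the raw input, are sent to the server. A toy sketch of that data flow, with arbitrary layer sizes and cut point, not the paper's attack or defense:

```python
import numpy as np

rng = np.random.default_rng(0)

# Client-side "head": the first layers stay on the device.
W_client = rng.standard_normal((784, 128))
# Server-side "tail": the remaining layers run remotely, on activations only.
W_server = rng.standard_normal((128, 10))

def client_forward(x):
    return np.maximum(x @ W_client, 0.0)   # ReLU activations, the only thing transmitted

def server_forward(h):
    return h @ W_server                    # server completes the prediction

x = rng.standard_normal((1, 784))          # raw data, never leaves the device
logits = server_forward(client_forward(x))
print(logits.shape)                        # (1, 10)
```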

Green Federated Learning

no code implementations • 26 Mar 2023 • Ashkan Yousefpour, Shen Guo, Ashish Shenoy, Sayan Ghosh, Pierre Stock, Kiwan Maeng, Schalk-Willem Krüger, Michael Rabbat, Carole-Jean Wu, Ilya Mironov

The rapid progress of AI is fueled by increasingly large and computationally intensive machine learning models and datasets.

Federated Learning

Privacy-Aware Compression for Federated Learning Through Numerical Mechanism Design

1 code implementation • 8 Nov 2022 • Chuan Guo, Kamalika Chaudhuri, Pierre Stock, Mike Rabbat

In private federated learning (FL), a server aggregates differentially private updates from a large number of clients in order to train a machine learning model.

Federated Learning
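
A hedged sketch of the aggregation step the abstract describes: clients clip their updates, the server sums them, adds Gaussian noise and averages. The clipping norm and noise multiplier are placeholders, and this ignores the compression mechanism the paper actually designs.

```python
import numpy as np

def clip(update, clip_norm=1.0):
    """Scale a client update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_average(updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Gaussian mechanism applied to the sum of clipped client updates."""
    rng = rng or np.random.default_rng()
    total = sum(clip(u, clip_norm) for u in updates)
    total = total + rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return total / len(updates)

rng = np.random.default_rng(0)
client_updates = [rng.standard_normal(10) for _ in range(100)]
print(dp_average(client_updates, rng=rng))
```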

TAN Without a Burn: Scaling Laws of DP-SGD

1 code implementation • 7 Oct 2022 • Tom Sander, Pierre Stock, Alexandre Sablayrolles

Differentially Private methods for training Deep Neural Networks (DNNs) have progressed recently, in particular with the use of massive batches and aggregated data augmentations for a large number of training steps.

Image Classification with Differential Privacy
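
For reference, the core DP-SGD step the abstract alludes to combines per-sample gradient clipping with Gaussian noise on the summed gradient. The toy linear model and hyperparameters below are purely illustrative:

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD step for the squared loss 0.5 * (x.w - y)**2."""
    rng = rng or np.random.default_rng()
    grads = (X @ w - y)[:, None] * X                       # per-sample gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads *= np.minimum(1.0, clip_norm / (norms + 1e-12))  # clip each sample's gradient
    noisy_sum = grads.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=w.shape)   # Gaussian mechanism
    return w - lr * noisy_sum / len(X)

rng = np.random.default_rng(0)
X, y = rng.standard_normal((256, 5)), rng.standard_normal(256)
w = np.zeros(5)
for _ in range(10):
    w = dp_sgd_step(w, X, y, rng=rng)
print(w)
```

Larger batches improve the signal-to-noise ratio of the noisy gradient sum, which is one reason the abstract mentions massive batches.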

CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning

1 code implementation • 6 Oct 2022 • Samuel Maddock, Alexandre Sablayrolles, Pierre Stock

We propose a novel method, CANIFE, that uses canaries - samples carefully crafted by a strong adversary - to evaluate the empirical privacy of a training round.

Federated Learning
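
A rough sketch of the general idea behind canary-based empirical privacy measurement: insert one crafted contribution into a round and test how well its direction can be read off the noisy aggregate. The crafting procedure and the test used by CANIFE itself are not reproduced here; every quantity below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients, noise_std = 100, 50, 1.0

canary = rng.standard_normal(dim)
canary /= np.linalg.norm(canary)               # unit-norm canary update direction

def round_score(include_canary: bool) -> float:
    """Simulate one round and score the aggregate against the canary direction."""
    aggregate = rng.standard_normal((n_clients, dim)).sum(axis=0)
    if include_canary:
        aggregate += canary                            # the crafted contribution
    aggregate += rng.normal(0.0, noise_std, size=dim)  # DP noise added to the round
    return float(aggregate @ canary)

# The separation between the two score distributions reflects how much the round leaks.
with_canary = [round_score(True) for _ in range(500)]
without_canary = [round_score(False) for _ in range(500)]
print(np.mean(with_canary), np.mean(without_canary))
```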

Reconciling Security and Communication Efficiency in Federated Learning

1 code implementation • 26 Jul 2022 • Karthik Prasad, Sayan Ghosh, Graham Cormode, Ilya Mironov, Ashkan Yousefpour, Pierre Stock

Cross-device Federated Learning is an increasingly popular machine learning setting to train a model by leveraging a large population of client devices with high privacy and security guarantees.

Federated Learning • Quantization
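
A toy sketch of the tension in the title: secure aggregation operates on integers in a finite ring, so client updates are quantized before being masked, and the pairwise masks cancel in the server's sum. This is the textbook masking construction, not the paper's protocol, and the ring size and fixed-point scale are arbitrary.

```python
import numpy as np

MOD = 2 ** 16                  # ring in which masked, quantized updates live
SCALE = 256                    # fixed-point scale used to quantize float updates

rng = np.random.default_rng(0)
dim, n_clients = 8, 3
updates = rng.standard_normal((n_clients, dim))

# Quantize each float update to integers in the ring.
quantized = np.round(updates * SCALE).astype(np.int64) % MOD

# Pairwise masks: client i adds mask_ij, client j subtracts it, so they cancel in the sum.
masked = quantized.copy()
for i in range(n_clients):
    for j in range(i + 1, n_clients):
        mask = rng.integers(0, MOD, size=dim)
        masked[i] = (masked[i] + mask) % MOD
        masked[j] = (masked[j] - mask) % MOD

# The server only sees masked vectors, yet their sum equals the true quantized sum.
print(np.array_equal(masked.sum(axis=0) % MOD, quantized.sum(axis=0) % MOD))   # True
```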

Defending against Reconstruction Attacks with Rényi Differential Privacy

no code implementations • 15 Feb 2022 • Pierre Stock, Igor Shilov, Ilya Mironov, Alexandre Sablayrolles

Reconstruction attacks allow an adversary to regenerate data samples of the training set using access to only a trained model.
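
For context, Rényi differential privacy, referenced in the title, bounds the Rényi divergence of order $\alpha$ between a mechanism's output distributions on neighbouring datasets; the standard definition is:

```latex
D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1}
  \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right],
\qquad
M \text{ is } (\alpha, \epsilon)\text{-RDP if }
D_\alpha\big(M(D) \,\|\, M(D')\big) \le \epsilon
\text{ for all neighbouring } D, D'.
```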

An Embedding of ReLU Networks and an Analysis of their Identifiability

no code implementations • 20 Jul 2021 • Pierre Stock, Rémi Gribonval

The overall objective of this paper is to introduce an embedding for ReLU neural networks of any depth, $\Phi(\theta)$, that is invariant to scalings and that provides a locally linear parameterization of the realization of the network.
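
The scaling invariance mentioned here is easy to check numerically: multiplying a hidden neuron's incoming weights and bias by $\lambda > 0$ and dividing its outgoing weights by $\lambda$ leaves the realization of the network unchanged, because ReLU is positively homogeneous. A minimal check on an illustrative two-layer network:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

def net(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Rescale hidden neuron 0: incoming weights and bias times lam, outgoing weights over lam.
lam = 3.7
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[0] *= lam
b1s[0] *= lam
W2s[:, 0] /= lam

x = rng.standard_normal(3)
print(np.allclose(net(x, W1, b1, W2, b2), net(x, W1s, b1s, W2s, b2)))   # True
```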

Low Bandwidth Video-Chat Compression using Deep Generative Models

no code implementations • 1 Dec 2020 • Maxime Oquab, Pierre Stock, Oran Gafni, Daniel Haziza, Tao Xu, Peizhao Zhang, Onur Celebi, Yana Hasson, Patrick Labatut, Bobo Bose-Kolanu, Thibault Peyronel, Camille Couprie

To unlock video chat for hundreds of millions of people hindered by poor connectivity or unaffordable data costs, we propose to authentically reconstruct faces on the receiver's device using facial landmarks extracted at the sender's side and transmitted over the network.
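
To make the bandwidth argument concrete, a back-of-the-envelope comparison; the landmark count, coordinate precision and frame size below are illustrative assumptions, not figures from the paper.

```python
# Rough per-frame payload: transmitting facial landmarks vs. a compressed video frame.
n_landmarks = 68              # illustrative count, common in face-alignment toolkits
bytes_per_landmark = 2 * 2    # x, y stored as 16-bit integers
landmark_payload = n_landmarks * bytes_per_landmark        # 272 bytes per frame

compressed_frame = 10_000     # illustrative size of a low-bitrate video frame, in bytes
fps = 25

print(f"landmarks: {landmark_payload * fps * 8 / 1e3:.1f} kbit/s")   # ~54 kbit/s
print(f"video:     {compressed_frame * fps * 8 / 1e6:.1f} Mbit/s")   # ~2 Mbit/s
```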

Training with Quantization Noise for Extreme Model Compression

4 code implementations • ICLR 2021 • Angela Fan, Pierre Stock, Benjamin Graham, Edouard Grave, Remi Gribonval, Herve Jegou, Armand Joulin

A standard solution is to train networks with Quantization Aware Training, where the weights are quantized during training and the gradients approximated with the Straight-Through Estimator.

Image Generation • Model Compression
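
The QAT mechanism summarized in the abstract is easy to sketch with the Straight-Through Estimator: quantize the weights in the forward pass but let gradients flow through as if the quantizer were the identity. The snippet below is a minimal PyTorch illustration of plain QAT with STE, not the Quant-Noise scheme of the paper, which only quantizes a random subset of weights at each step.

```python
import torch

def ste_quantize(w: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Uniform symmetric quantization with identity gradients (Straight-Through Estimator)."""
    qmax = 2 ** (n_bits - 1) - 1                       # e.g. 7 for 4-bit symmetric weights
    scale = w.detach().abs().max() / qmax + 1e-12
    w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    # Forward pass uses the quantized weights; backward treats the quantizer as identity.
    return w + (w_q - w).detach()

w = torch.randn(8, 8, requires_grad=True)
x = torch.randn(4, 8)
loss = (x @ ste_quantize(w)).pow(2).mean()
loss.backward()
print(w.grad.shape)   # gradients reach the full-precision weights: torch.Size([8, 8])
```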
