Search Results for author: Andre Manoel

Found 17 papers, 8 papers with code

Project Florida: Federated Learning Made Easy

no code implementations 21 Jul 2023 Daniel Madrigal Diaz, Andre Manoel, Jialei Chen, Nalin Singal, Robert Sim

Federated learning enables model training across devices and silos while the training data remains within its security boundary: a model snapshot is distributed to a client running inside the boundary, client code updates the model, and the updated snapshots from many clients are then aggregated by a central orchestrator.
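The snapshot/update/aggregate loop described above is, at its core, federated averaging. Below is a minimal sketch of one orchestrator round under that scheme; it is not Project Florida's API, and the model, client data loaders, and example-count weighting are illustrative assumptions.

```python
import copy
import torch
from torch import nn

def client_update(global_model, data_loader, lr=0.01, epochs=1):
    """Run local SGD on one client, starting from the global snapshot."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(data_loader.dataset)

def aggregate(updates):
    """Weighted average of client snapshots (FedAvg); weights = #examples."""
    total = sum(n for _, n in updates)
    avg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in updates[0][0].items()}
    for state, n in updates:
        for k, v in state.items():
            avg[k] += v.float() * (n / total)
    return avg

def federated_round(global_model, client_loaders):
    """One orchestrator round over a set of (hypothetical) client loaders."""
    updates = [client_update(global_model, dl) for dl in client_loaders]
    global_model.load_state_dict(aggregate(updates))
    return global_model
```

A real deployment would additionally handle client sampling, secure aggregation, and failure recovery, which this sketch omits.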

Federated Learning Management

TrojanPuzzle: Covertly Poisoning Code-Suggestion Models

1 code implementation 6 Jan 2023 Hojjat Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David Evans, Ben Zorn, Robert Sim

To achieve this, prior attacks explicitly inject the insecure code payload into the training data, making the poison data detectable by static analysis tools that can remove such malicious data from the training set.

Data Poisoning

Federated Multilingual Models for Medical Transcript Analysis

no code implementations 4 Nov 2022 Andre Manoel, Mirian Hipolito Garcia, Tal Baumel, Shize Su, Jialei Chen, Dan Miller, Danny Karmon, Robert Sim, Dimitrios Dimitriadis

Federated Learning (FL) is a novel machine learning approach that allows the model trainer to access more data samples by training the model across multiple decentralized data sources while data access constraints remain in place.

Federated Learning

Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning

no code implementations 27 Apr 2022 Yae Jee Cho, Andre Manoel, Gauri Joshi, Robert Sim, Dimitrios Dimitriadis

In this work, we propose a novel ensemble knowledge transfer method named Fed-ET in which small models (different in architecture) are trained on clients, and used to train a larger model at the server.
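A minimal sketch of the server-side knowledge-transfer step this describes: the heterogeneous client models act as an ensemble teacher whose averaged soft predictions supervise the larger server model on an unlabeled transfer set. Fed-ET's actual weighting scheme and consistency regularization are not reproduced; the models, loader, and distillation temperature are assumptions.

```python
import torch
from torch import nn
import torch.nn.functional as F

def distill_to_server(server_model, client_models, transfer_loader,
                      lr=1e-3, temperature=2.0, epochs=1):
    """Train the large server model to match the averaged (soft) predictions
    of the small client models on an unlabeled transfer dataset."""
    opt = torch.optim.Adam(server_model.parameters(), lr=lr)
    for m in client_models:
        m.eval()
    for _ in range(epochs):
        for x in transfer_loader:          # unlabeled batches
            with torch.no_grad():
                # Ensemble teacher: average of client soft predictions.
                teacher = torch.stack(
                    [F.softmax(m(x) / temperature, dim=-1) for m in client_models]
                ).mean(dim=0)
            student_logp = F.log_softmax(server_model(x) / temperature, dim=-1)
            loss = F.kl_div(student_logp, teacher, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return server_model
```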

Ensemble Learning Federated Learning +1

FLUTE: A Scalable, Extensible Framework for High-Performance Federated Learning Simulations

1 code implementation 25 Mar 2022 Mirian Hipolito Garcia, Andre Manoel, Daniel Madrigal Diaz, FatemehSadat Mireshghallah, Robert Sim, Dimitrios Dimitriadis

We compare the platform with other state-of-the-art platforms and describe available features of FLUTE for experimentation in core areas of active research, such as optimization, privacy, and scalability.

Federated Learning Quantization +3

Differentially Private Fine-tuning of Language Models

2 code implementations ICLR 2022 Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang

For example, on the MNLI dataset we achieve an accuracy of $87.8\%$ using RoBERTa-Large and $83.5\%$ using RoBERTa-Base with a privacy budget of $\epsilon = 6.7$.
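The results above come from fine-tuning under differential privacy. As a reference point, here is a minimal sketch of the generic DP-SGD step (per-example gradient clipping plus Gaussian noise) that underlies such training; it is not the paper's specific fine-tuning recipe, it omits the privacy accounting that converts noise level and number of steps into an $\epsilon$, and it uses a deliberately naive per-example loop.

```python
import torch

def dp_sgd_step(model, batch, loss_fn, optimizer,
                max_grad_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step: clip each example's gradient to max_grad_norm,
    sum the clipped gradients, add Gaussian noise, then average and step."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    xs, ys = batch
    for x, y in zip(xs, ys):                       # naive per-example loop
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in params]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        clip = (max_grad_norm / (norm + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * clip)
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * noise_multiplier * max_grad_norm
        p.grad = (s + noise) / len(xs)
    optimizer.step()
```

Given a noise multiplier, sampling rate, and number of steps, a privacy accountant (e.g., a moments/Rényi-DP accountant) turns this mechanism into an $(\epsilon, \delta)$ guarantee such as the $\epsilon = 6.7$ quoted above.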

Text Generation

Federated Survival Analysis with Discrete-Time Cox Models

no code implementations 16 Jun 2020 Mathieu Andreux, Andre Manoel, Romuald Menuet, Charlie Saillard, Chloé Simpson

Building machine learning models from decentralized datasets located in different centers with federated learning (FL) is a promising approach to circumvent local data scarcity while preserving privacy.
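A discrete-time hazard model can be fit by expanding each subject into one row per interval at risk and running a regression on the expanded data (here with a logistic link, one common choice). The sketch below shows that centralized construction only; the federated aggregation across centers is omitted, and the toy data, interval grid, and sklearn solver are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def person_period_expand(X, time, event, n_intervals):
    """Expand (covariates, event/censoring time, event indicator) into
    one row per subject per interval at risk, with a binary hazard label."""
    rows, labels, interval_ids = [], [], []
    for x, t, e in zip(X, time, event):
        last = min(int(t), n_intervals - 1)
        for k in range(last + 1):          # intervals during which subject is at risk
            rows.append(x)
            interval_ids.append(k)
            labels.append(1 if (e and k == int(t)) else 0)
    # One-hot interval indicators play the role of the baseline hazard terms.
    Z = np.eye(n_intervals)[interval_ids]
    return np.hstack([np.asarray(rows), Z]), np.asarray(labels)

# Hypothetical toy data: 100 subjects, 5 covariates, 10 discrete intervals.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
time = rng.integers(0, 10, size=100)
event = rng.integers(0, 2, size=100)

features, labels = person_period_expand(X, time, event, n_intervals=10)
# Logistic regression on person-period data = a discrete-time hazard model.
hazard_model = LogisticRegression(max_iter=1000).fit(features, labels)
```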

Federated Learning Privacy Preserving +1

Efficient Per-Example Gradient Computations in Convolutional Neural Networks

1 code implementation 12 Dec 2019 Gaspar Rochette, Andre Manoel, Eric W. Tramel

One notable application comes from the field of differential privacy, where per-example gradients must be norm-bounded in order to limit the impact of each example on the aggregated batch gradient.
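Per-example gradients are what get norm-bounded (clipped) in differentially private training. Below is a sketch using PyTorch's torch.func vectorization (PyTorch 2.x); this is the generic vectorized route, not the convolution-specific computation the paper proposes, and the toy model, batch, and clip norm are assumptions.

```python
import torch
from torch import nn
from torch.func import functional_call, grad, vmap

# Toy CNN; architecture and input sizes are illustrative.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3), nn.ReLU(), nn.Flatten(), nn.Linear(8 * 26 * 26, 10)
)
loss_fn = nn.CrossEntropyLoss()

params = {k: v.detach() for k, v in model.named_parameters()}
buffers = {k: v.detach() for k, v in model.named_buffers()}

def sample_loss(params, buffers, x, y):
    # Forward one example as a batch of size one so shapes match the module.
    logits = functional_call(model, (params, buffers), (x.unsqueeze(0),))
    return loss_fn(logits, y.unsqueeze(0))

# Differentiate w.r.t. params, then vectorize over the batch dimension of (x, y).
per_sample_grads = vmap(grad(sample_loss), in_dims=(None, None, 0, 0))

x = torch.randn(16, 1, 28, 28)
y = torch.randint(0, 10, (16,))
grads = per_sample_grads(params, buffers, x, y)  # each tensor gains a leading dim of 16

# Norm-bound each example's gradient, as differential privacy requires.
flat = torch.cat([g.reshape(16, -1) for g in grads.values()], dim=1)
clip_factor = (1.0 / (flat.norm(dim=1) + 1e-12)).clamp(max=1.0)  # clip norm C = 1.0
clipped = {k: g * clip_factor.view(-1, *([1] * (g.dim() - 1)))
           for k, g in grads.items()}
```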

Approximate message-passing for convex optimization with non-separable penalties

2 code implementations 17 Sep 2018 Andre Manoel, Florent Krzakala, Gaël Varoquaux, Bertrand Thirion, Lenka Zdeborová

We introduce an iterative optimization scheme for convex objectives consisting of a linear loss and a non-separable penalty, based on the expectation-consistent approximation and the vector approximate message-passing (VAMP) algorithm.

Streaming Bayesian inference: theoretical limits and mini-batch approximate message-passing

no code implementations 2 Jun 2017 Andre Manoel, Florent Krzakala, Eric W. Tramel, Lenka Zdeborová

In statistical learning for real-world large-scale data problems, one must often resort to "streaming" algorithms which operate sequentially on small batches of data.

Bayesian Inference Clustering

A Deterministic and Generalized Framework for Unsupervised Learning with Restricted Boltzmann Machines

no code implementations 10 Feb 2017 Eric W. Tramel, Marylou Gabrié, Andre Manoel, Francesco Caltagirone, Florent Krzakala

Restricted Boltzmann machines (RBMs) are energy-based neural networks which are commonly used as the building blocks of deep neural architectures.
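For reference, the sketch below spells out the RBM energy and a standard sampling-based contrastive-divergence (CD-1) update; the paper's point is precisely to replace such Monte Carlo sampling with a deterministic treatment, so this is the conventional baseline, with toy dimensions and data.

```python
import torch

class RBM:
    """Bernoulli-Bernoulli RBM with energy E(v, h) = -v^T W h - b^T v - c^T h."""
    def __init__(self, n_visible, n_hidden):
        self.W = torch.randn(n_visible, n_hidden) * 0.01
        self.b = torch.zeros(n_visible)   # visible bias
        self.c = torch.zeros(n_hidden)    # hidden bias

    def sample_h(self, v):
        p = torch.sigmoid(v @ self.W + self.c)
        return p, torch.bernoulli(p)

    def sample_v(self, h):
        p = torch.sigmoid(h @ self.W.t() + self.b)
        return p, torch.bernoulli(p)

    def cd1_step(self, v0, lr=0.05):
        """One contrastive-divergence (CD-1) parameter update."""
        ph0, h0 = self.sample_h(v0)
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(v1)
        self.W += lr * (v0.t() @ ph0 - v1.t() @ ph1) / v0.shape[0]
        self.b += lr * (v0 - v1).mean(dim=0)
        self.c += lr * (ph0 - ph1).mean(dim=0)

# Toy usage on random binary data (dimensions are illustrative).
rbm = RBM(n_visible=64, n_hidden=16)
data = torch.bernoulli(torch.full((32, 64), 0.3))
for _ in range(100):
    rbm.cd1_step(data)
```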

Denoising

Multi-Layer Generalized Linear Estimation

no code implementations 24 Jan 2017 Andre Manoel, Florent Krzakala, Marc Mézard, Lenka Zdeborová

We consider the problem of reconstructing a signal from multi-layered (possibly) non-linear measurements.
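A concrete instance of such a multi-layer measurement chain is sketched below, with illustrative layer widths and non-linearities; the inference algorithm itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer widths for a two-layer measurement chain x -> h -> y.
N, H, M = 200, 100, 50
x = rng.normal(size=N)                    # signal to reconstruct
W1 = rng.normal(size=(H, N)) / np.sqrt(N)
W2 = rng.normal(size=(M, H)) / np.sqrt(H)

h = np.tanh(W1 @ x)                       # first, non-linear layer of measurements
y = np.sign(W2 @ h)                       # second layer, e.g. 1-bit (sign) measurements
# The inference task: recover x given y, W1, W2 and the two non-linearities.
```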

Inferring Sparsity: Compressed Sensing using Generalized Restricted Boltzmann Machines

no code implementations 13 Jun 2016 Eric W. Tramel, Andre Manoel, Francesco Caltagirone, Marylou Gabrié, Florent Krzakala

In this work, we consider compressed sensing reconstruction from $M$ measurements of $K$-sparse structured signals which do not possess a writable correlation model.
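For orientation, the standard compressed-sensing setup with a plain sparsity-only baseline is sketched below; the paper instead plugs a learned generalized-RBM prior into the reconstruction to exploit structure beyond sparsity, which is not reproduced here. The dimensions and the OMP solver are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
M, N, K = 60, 200, 8                       # M measurements of a K-sparse, length-N signal
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.normal(size=K)
A = rng.normal(size=(M, N)) / np.sqrt(M)   # random measurement matrix
y = A @ x                                  # noiseless measurements, M < N

# Sparsity-only baseline reconstruction (no learned prior).
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K).fit(A, y)
x_hat = omp.coef_
```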

Expectation Propagation

no code implementations 22 Sep 2014 Jack Raymond, Andre Manoel, Manfred Opper

Variational inference is a powerful concept that underlies many iterative approximation algorithms; expectation propagation, mean-field methods, and belief propagation were all central themes at the school that can be perceived from this unifying framework.

Variational Inference

Sparse Estimation with the Swept Approximated Message-Passing Algorithm

1 code implementation 17 Jun 2014 Andre Manoel, Florent Krzakala, Eric W. Tramel, Lenka Zdeborová

Approximate Message Passing (AMP) has been shown to be a superior method for inference problems, such as the recovery of signals from sets of noisy, lower-dimensionality measurements, both in terms of reconstruction accuracy and in computational efficiency.
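A minimal sketch of the standard (parallel-update) AMP iteration for sparse recovery, including the Onsager correction term, is below; the paper's contribution, the swept (sequential) update schedule, is not reproduced, and the toy problem sizes and threshold rule are assumptions.

```python
import numpy as np

def soft_threshold(u, tau):
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def amp(A, y, n_iter=30, alpha=1.5):
    """Basic AMP for sparse recovery from y = A x + noise.
    Each iteration is a thresholded gradient step plus an Onsager
    correction term added to the residual."""
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        tau = alpha * np.linalg.norm(z) / np.sqrt(M)   # threshold from residual level
        x_new = soft_threshold(x + A.T @ z, tau)
        onsager = (z / M) * np.count_nonzero(x_new)    # Onsager reaction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Toy problem; dimensions and noise level are illustrative.
rng = np.random.default_rng(1)
M, N, K = 100, 250, 12
A = rng.normal(size=(M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)
y = A @ x_true + 0.01 * rng.normal(size=M)
x_hat = amp(A, y)
```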

Computational Efficiency
