no code implementations • 25 Aug 2020 • Dimitris Stripelis, Jose Luis Ambite
There are situations where data relevant to a machine learning problem are distributed among multiple locations that cannot share the data due to regulatory, competitiveness, or privacy reasons.
no code implementations • 4 Feb 2021 • Dimitris Stripelis, Jose Luis Ambite
There are situations where data relevant to machine learning problems are distributed across multiple locations that cannot share the data due to regulatory, competitiveness, or privacy reasons.
no code implementations • 16 Feb 2021 • Dimitris Stripelis, Jose Luis Ambite, Pradeep Lam, Paul Thompson
Federated Learning is a promising approach for learning a joint model across data silos.
no code implementations • 6 May 2021 • Umang Gupta, Dimitris Stripelis, Pradeep K. Lam, Paul M. Thompson, José Luis Ambite, Greg Ver Steeg
In particular, we show that it is possible to infer whether a sample was used to train the model, given only access to the model's predictions (black-box) or access to the model itself (white-box), along with some leaked samples from the training data distribution.
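The excerpt does not specify the attack mechanics, but a classic black-box membership inference technique is the loss-threshold attack, sketched below in NumPy under that assumption. The names (`model_predict`, `tau`) are hypothetical, and the threshold would be calibrated on the leaked samples the abstract mentions; this is an illustration, not the paper's method.

```python
import numpy as np

def cross_entropy(probs, label):
    """Per-sample cross-entropy from a predicted class-probability vector."""
    return -np.log(probs[label] + 1e-12)

def membership_score(model_predict, x, y):
    """Black-box membership signal: training members tend to have lower loss."""
    return -cross_entropy(model_predict(x), y)  # higher score => more likely a member

def infer_membership(model_predict, samples, labels, tau):
    """Predict 'member' when the score exceeds a threshold tau.

    tau is assumed to be calibrated on leaked samples from the
    training data distribution, as the abstract suggests.
    """
    return [membership_score(model_predict, x, y) > tau
            for x, y in zip(samples, labels)]
```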
no code implementations • 7 Aug 2021 • Dimitris Stripelis, Hamza Saleem, Tanmay Ghai, Nikhil Dhinagar, Umang Gupta, Chrysovalantis Anastasiou, Greg Ver Steeg, Srivatsan Ravi, Muhammad Naveed, Paul M. Thompson, Jose Luis Ambite
Federated learning (FL) enables distributed computation of machine learning models over various disparate, remote data sources, without requiring any individual data to be transferred to a centralized location.
no code implementations • 28 Mar 2022 • Joel Mathew, Dimitris Stripelis, José Luis Ambite
We present an analysis of the performance of Federated Learning in a paradigmatic natural-language processing task: Named-Entity Recognition (NER).
no code implementations • 26 Apr 2022 • Dimitris Stripelis, Umang Gupta, Greg Ver Steeg, Jose Luis Ambite
Second, the models are incrementally constrained to a smaller set of parameters, which facilitates alignment and merging of the local models and improves learning performance at high sparsification rates.
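One way to read this mechanism is global magnitude pruning with a single shared mask, so every site keeps the same surviving parameters as sparsity is raised round by round. The NumPy sketch below is an illustration under that assumption, not the paper's actual algorithm; the schedule and function names are invented.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Binary mask keeping the largest-magnitude fraction (1 - sparsity) of weights."""
    k = int(weights.size * (1.0 - sparsity))
    threshold = np.sort(np.abs(weights).ravel())[-k] if k > 0 else np.inf
    return (np.abs(weights) >= threshold).astype(weights.dtype)

def prune_round(local_weights, sparsity):
    """Prune the merged model and apply one shared mask to every site.

    Merging first and masking all sites with the same mask keeps the
    local models aligned on the same surviving parameters.
    """
    global_w = np.mean(local_weights, axis=0)   # simple FedAvg-style merge
    mask = magnitude_mask(global_w, sparsity)   # one mask for the whole federation
    return [w * mask for w in local_weights], mask

# Example: raise sparsity incrementally across federation rounds.
weights = [np.random.randn(10, 10) for _ in range(3)]
for s in (0.5, 0.8, 0.95):
    weights, mask = prune_round(weights, s)
```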
no code implementations • 2 May 2022 • Dimitris Stripelis, Marcin Abram, Jose Luis Ambite
Here, we focus on the susceptibility of federated learning to various data corruption attacks.
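The entry does not enumerate the attacks studied, but label flipping is a representative data corruption attack that is easy to simulate in such experiments. A hedged NumPy sketch (the function and its parameters are illustrative, not taken from the paper):

```python
import numpy as np

def flip_labels(labels, num_classes, corruption_rate, rng=None):
    """Simulate a label-flipping data corruption attack.

    A fraction `corruption_rate` of an integer label array is replaced
    with a different, randomly chosen class.
    """
    rng = rng or np.random.default_rng(0)
    labels = labels.copy()
    n_corrupt = int(len(labels) * corruption_rate)
    idx = rng.choice(len(labels), size=n_corrupt, replace=False)
    # Shift by a random non-zero offset so the new label always differs.
    offsets = rng.integers(1, num_classes, size=n_corrupt)
    labels[idx] = (labels[idx] + offsets) % num_classes
    return labels
```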
no code implementations • 11 May 2022 • Dimitris Stripelis, Umang Gupta, Hamza Saleem, Nikhil Dhinagar, Tanmay Ghai, Rafael Chrysovalantis Anastasiou, Armaghan Asghar, Greg Ver Steeg, Srivatsan Ravi, Muhammad Naveed, Paul M. Thompson, Jose Luis Ambite
Each site trains the neural network over its private data for some time, then shares the neural network parameters (i.e., weights, gradients) with a Federation Controller, which in turn aggregates the local models, sends the resulting community model back to each site, and the process repeats.
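This local-train, share, aggregate, broadcast loop is the standard federated averaging pattern. Below is a minimal NumPy sketch of one round, with the controller weighting each site's parameters by its number of training examples; the `site.train` / `site.num_examples` interface is an assumption for illustration, not the system's actual API.

```python
import numpy as np

def aggregate(local_params, num_examples):
    """Federation Controller step: weight each site's parameters by its data size."""
    total = sum(num_examples)
    return sum(w * (n / total) for w, n in zip(local_params, num_examples))

def federation_round(sites, community_model):
    """One round: each site trains locally, the controller aggregates and broadcasts."""
    local_params, sizes = [], []
    for site in sites:
        params = site.train(community_model)  # local training over private data
        local_params.append(params)
        sizes.append(site.num_examples)
    return aggregate(local_params, sizes)     # new community model, sent back to sites
```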
no code implementations • 24 Aug 2022 • Dimitris Stripelis, Umang Gupta, Nikhil Dhinagar, Greg Ver Steeg, Paul Thompson, José Luis Ambite
In our experiments in centralized and federated settings on the brain age prediction task (estimating a person's age from their brain MRI), we demonstrate that models can be pruned up to 95% sparsity without affecting performance even in challenging federated learning environments with highly heterogeneous data distributions.
no code implementations • 15 May 2023 • Dimitris Stripelis, Jose Luis Ambite
Federated Learning is a machine learning approach that enables geographically distributed data silos to collaboratively learn a joint model without sharing their data.
no code implementations • 1 Nov 2023 • Dimitris Stripelis, Chrysovalantis Anastasiou, Patrick Toral, Armaghan Asghar, Jose Luis Ambite
The controller is responsible for managing the execution of FL workflows across the learners, while the learners are responsible for training and evaluating federated models over their private datasets.
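That division of responsibilities can be outlined as two minimal interfaces. The classes below are a hypothetical sketch of the roles the abstract describes, not the system's real API; the method bodies are deliberately left as stubs.

```python
class Learner:
    """Trains and evaluates federated models over its private dataset."""
    def __init__(self, dataset):
        self.dataset = dataset

    def train(self, model):
        ...  # fit the model on the private dataset; return updated parameters

    def evaluate(self, model):
        ...  # return evaluation metrics over the private dataset


class Controller:
    """Manages the execution of FL workflows across the learners."""
    def __init__(self, learners, aggregate):
        self.learners = learners
        self.aggregate = aggregate  # e.g., federated averaging

    def run_workflow(self, model, rounds):
        for _ in range(rounds):
            updates = [learner.train(model) for learner in self.learners]
            model = self.aggregate(updates)   # produce the community model
            for learner in self.learners:
                learner.evaluate(model)       # federated evaluation
        return model
```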