Search Results for author: Michael Rabbat

Found 24 papers, 9 papers with code

FedShuffle: Recipes for Better Use of Local Work in Federated Learning

no code implementations27 Apr 2022 Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael Rabbat

Our FedShuffle recipe comprises four simple-yet-powerful ingredients: 1) local shuffling of the data, 2) adjustment of the local learning rates, 3) update weighting, and 4) momentum variance reduction (Cutkosky and Orabona, 2019).

Federated Learning
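
A minimal sketch of how the four ingredients could fit together in one simulated round; the function names, the 1/n_i learning-rate scaling, and the plain momentum buffer (standing in for the paper's momentum variance reduction) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_epoch(w, X, y, lr, buf, beta=0.9):
    """One local epoch on a least-squares objective, with (1) local
    shuffling; a plain momentum buffer stands in for (4) momentum
    variance reduction."""
    for i in rng.permutation(len(y)):          # ingredient 1: shuffle locally
        g = (X[i] @ w - y[i]) * X[i]           # per-sample gradient
        buf = beta * buf + (1 - beta) * g
        w = w - lr * buf
    return w

def fedshuffle_round(w, clients, base_lr=0.1):
    updates, sizes = [], []
    for X, y in clients:
        lr = base_lr / len(y)                  # ingredient 2: adjust local lr
        w_i = local_epoch(w.copy(), X, y, lr, np.zeros_like(w))
        updates.append(w_i - w)
        sizes.append(len(y))
    weights = np.array(sizes, float) / sum(sizes)  # ingredient 3: weighting
    return w + sum(a * u for a, u in zip(weights, updates))
```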

Federated Learning with Partial Model Personalization

no code implementations8 Apr 2022 Krishna Pillutla, Kshitiz Malik, Abdelrahman Mohamed, Michael Rabbat, Maziar Sanjabi, Lin Xiao

We consider two federated learning algorithms for training partially personalized models, where the shared and personal parameters are updated either simultaneously or alternately on the devices.

Federated Learning
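
A sketch of the alternating variant, under an assumed shared-extractor / personal-head split; in a full system only the shared parameters would be communicated to the server.

```python
import torch
import torch.nn as nn

class PartialModel(nn.Module):
    """Assumed split: a shared feature extractor plus a personal head
    that never leaves the device."""
    def __init__(self, d_in=16, d_hid=32, d_out=2):
        super().__init__()
        self.shared = nn.Linear(d_in, d_hid)
        self.personal = nn.Linear(d_hid, d_out)

    def forward(self, x):
        return self.personal(torch.relu(self.shared(x)))

def alternating_local_step(model, x, y, loss_fn=nn.CrossEntropyLoss()):
    opt_p = torch.optim.SGD(model.personal.parameters(), lr=0.1)
    opt_s = torch.optim.SGD(model.shared.parameters(), lr=0.01)

    opt_p.zero_grad()                 # first: personal params, shared frozen
    loss_fn(model(x), y).backward()
    opt_p.step()

    opt_s.zero_grad()                 # then: shared params, personal frozen
    loss_fn(model(x), y).backward()
    opt_s.step()
```

The simultaneous variant would instead take both optimizer steps from a single backward pass.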

Stochastic Polyak Stepsize with a Moving Target

no code implementations22 Jun 2021 Robert M. Gower, Aaron Defazio, Michael Rabbat

MOTAPS can be seen as a variant of the Stochastic Polyak (SP) stepsize, a method that also uses loss values to adjust the stepsize.

Image Classification · Translation
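
For context, the classic SP stepsize that MOTAPS builds on, in a minimal form; the moving-target mechanism itself is not reproduced here, and the clipping constant is an assumption.

```python
import numpy as np

def sp_step(w, grad, loss, loss_star=0.0, gamma_max=1.0, eps=1e-12):
    """Stochastic Polyak step: the stepsize (f_i(w) - f_i*) / ||grad f_i(w)||^2
    uses the loss value itself; f_i* (the per-sample optimal loss) is often
    taken to be 0 for interpolating models."""
    gamma = (loss - loss_star) / (np.dot(grad, grad) + eps)
    return w - min(gamma, gamma_max) * grad
```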

Federated Learning with Buffered Asynchronous Aggregation

no code implementations11 Jun 2021 John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael Rabbat, Mani Malek, Dzmitry Huba

On the other hand, asynchronous aggregation of client updates in FL (i.e., asynchronous FL) alleviates the scalability issue.

Federated Learning
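
A sketch of the buffering idea under assumed names: updates arrive asynchronously, and the server only moves once a fixed number have accumulated, so no round-level synchronization barrier is needed.

```python
import numpy as np

class BufferedAsyncServer:
    def __init__(self, w0, buffer_size=10, server_lr=1.0):
        self.w = np.asarray(w0, dtype=float)
        self.buffer = []
        self.buffer_size = buffer_size
        self.server_lr = server_lr

    def receive(self, client_update):
        """Called whenever any client finishes; there is no barrier."""
        self.buffer.append(np.asarray(client_update, dtype=float))
        if len(self.buffer) >= self.buffer_size:
            self.w += self.server_lr * np.mean(self.buffer, axis=0)
            self.buffer.clear()
        return self.w
```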

A Closer Look at Codistillation for Distributed Training

no code implementations6 Oct 2020 Shagun Sodhani, Olivier Delalleau, Mahmoud Assran, Koustuv Sinha, Nicolas Ballas, Michael Rabbat

Surprisingly, we find that even at moderate batch sizes, models trained with codistillation can perform as well as models trained with synchronous data-parallel methods, despite using a much weaker synchronization mechanism.

Distributed Computing
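
The weak synchronization mechanism in question is a distillation term rather than a gradient all-reduce; a sketch of one replica's loss, with the mixing weight alpha an assumption:

```python
import torch
import torch.nn.functional as F

def codistillation_loss(logits, targets, peer_logits, alpha=0.5):
    """Task loss plus a KL term pulling this replica toward a peer
    replica's (typically stale, periodically exchanged) predictions."""
    task = F.cross_entropy(logits, targets)
    distill = F.kl_div(
        F.log_softmax(logits, dim=-1),
        F.softmax(peer_logits.detach(), dim=-1),
        reduction="batchmean",
    )
    return (1 - alpha) * task + alpha * distill
```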

Advances in Asynchronous Parallel and Distributed Optimization

no code implementations24 Jun 2020 Mahmoud Assran, Arda Aytekin, Hamid Feyzmahdavian, Mikael Johansson, Michael Rabbat

Motivated by large-scale optimization problems arising in the context of machine learning, there have been several advances in the study of asynchronous parallel and distributed optimization methods during the past decade.

Distributed Optimization

Supervision Accelerates Pre-training in Contrastive Semi-Supervised Learning of Visual Representations

2 code implementations18 Jun 2020 Mahmoud Assran, Nicolas Ballas, Lluis Castrejon, Michael Rabbat

We investigate a strategy for improving the efficiency of contrastive learning of visual representations by leveraging a small amount of supervised information during pre-training.

Contrastive Learning
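
One way to read "leveraging a small amount of supervised information": same-class examples in the batch become extra positives in an NT-Xent-style loss. This is a paraphrase of the idea, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """z: (n, d) embeddings of the *labeled* examples in a batch;
    every same-class pair is treated as a positive."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = ((labels[:, None] == labels[None, :]) & ~self_mask).float()
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```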

On the Convergence of Nesterov's Accelerated Gradient Method in Stochastic Settings

no code implementations ICML 2020 Mahmoud Assran, Michael Rabbat

We study Nesterov's accelerated gradient method with constant step-size and momentum parameters in the stochastic approximation setting (unbiased gradients with bounded variance) and the finite-sum setting (where randomness is due to sampling mini-batches).
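
In standard form, the constant-parameter iteration under study, with step size \alpha, momentum \beta, and g(y_k) a stochastic gradient:

```latex
x_{k+1} = y_k - \alpha\, g(y_k), \qquad
y_{k+1} = x_{k+1} + \beta\,(x_{k+1} - x_k)
```

In the stochastic approximation setting, g(y_k) is an unbiased estimate of \nabla f(y_k) with bounded variance; in the finite-sum setting it is a mini-batch gradient.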

Advancing machine learning for MR image reconstruction with an open competition: Overview of the 2019 fastMRI challenge

1 code implementation6 Jan 2020 Florian Knoll, Tullie Murrell, Anuroop Sriram, Nafissa Yakubova, Jure Zbontar, Michael Rabbat, Aaron Defazio, Matthew J. Muckley, Daniel K. Sodickson, C. Lawrence Zitnick, Michael P. Recht

Conclusion: The challenge led to new developments in machine learning for image reconstruction, provided insight into the current state of the art in the field, and highlighted remaining hurdles for clinical adoption.

Image Reconstruction

Gossip-based Actor-Learner Architectures for Deep Reinforcement Learning

1 code implementation NeurIPS 2019 Mahmoud Assran, Joshua Romoff, Nicolas Ballas, Joelle Pineau, Michael Rabbat

We show that we can run several loosely coupled GALA agents in parallel on a single GPU and achieve significantly higher hardware utilization and frame-rates than vanilla A2C at comparable power draws.

Frame · reinforcement-learning
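
The loose coupling is pairwise gossip rather than an all-reduce; a sketch, with the gossip period and mixing weight as assumptions:

```python
def gossip_mix(params, peer_params, mix_weight=0.5):
    """One pairwise gossip step: move toward a single peer's parameters
    instead of synchronizing with every learner."""
    return [(1 - mix_weight) * p + mix_weight * q
            for p, q in zip(params, peer_params)]

# Inside each learner's loop (sketch):
#   params = local_a2c_update(params, rollout)   # usual on-policy update
#   if step % gossip_period == 0:
#       params = gossip_mix(params, params_from_random_peer)
```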

A Graph-CNN for 3D Point Cloud Classification

1 code implementation28 Nov 2018 Yingxue Zhang, Michael Rabbat

Graph convolutional neural networks (Graph-CNNs) extend traditional CNNs to handle data that is supported on a graph.

3D Object Classification · Classification +2
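
The basic layer such networks stack, in a minimal form; for point clouds the adjacency is typically a k-nearest-neighbor graph over the 3D points, and the row normalization here is one common convention:

```python
import numpy as np

def graph_conv(X, A, W):
    """One graph-convolution layer: aggregate neighbor features through a
    self-loop-augmented, degree-normalized adjacency, then mix channels.
    X: (n, d_in) node features, A: (n, n) adjacency, W: (d_in, d_out)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)
    return np.maximum(A_norm @ X @ W, 0.0)   # aggregate, mix, ReLU
```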

Stochastic Gradient Push for Distributed Deep Learning

1 code implementation ICLR 2019 Mahmoud Assran, Nicolas Loizou, Nicolas Ballas, Michael Rabbat

Distributed data-parallel algorithms aim to accelerate the training of deep neural networks by parallelizing the computation of large mini-batch gradient updates across multiple nodes.

General Classification · Image Classification +2
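
Stochastic gradient push replaces the exact all-reduce of data-parallel SGD with push-sum gossip over a sparse, possibly directed topology; a sketch of one node's step, with the mixing weights assumed column-stochastic:

```python
def sgp_local_step(x, w, grad, lr, out_weights):
    """x: push-sum numerator, w: push-sum weight (scalar); grad should be
    evaluated at the de-biased point x / w. out_weights[j] is the
    column-stochastic weight on the edge to out-neighbor j (self included)."""
    x = x - lr * grad                          # local SGD on the numerator
    return {j: (a * x, a * w) for j, a in out_weights.items()}

def sgp_merge(incoming):
    """Sum all received (x, w) pairs; the parameters to use are x / w."""
    x = sum(xi for xi, _ in incoming)
    w = sum(wi for _, wi in incoming)
    return x, w, x / w
```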

Provably Accelerated Randomized Gossip Algorithms

no code implementations31 Oct 2018 Nicolas Loizou, Michael Rabbat, Peter Richtárik

In this work we present novel provably accelerated gossip algorithms for solving the average consensus problem.
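
For reference, the baseline randomized pairwise gossip that the accelerated variants build on (the acceleration adds a momentum-like correction, not reproduced here):

```python
import numpy as np

def randomized_gossip(x, edges, num_iters, seed=0):
    """At each step a random edge (i, j) activates and its endpoints
    average their values; all nodes converge to the mean of x."""
    x = np.asarray(x, dtype=float).copy()
    rng = np.random.default_rng(seed)
    for _ in range(num_iters):
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x
```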

TarMAC: Targeted Multi-Agent Communication

no code implementations ICLR 2019 Abhishek Das, Théophile Gervet, Joshua Romoff, Dhruv Batra, Devi Parikh, Michael Rabbat, Joelle Pineau

We propose a targeted communication architecture for multi-agent reinforcement learning, where agents learn both what messages to send and whom to address them to while performing cooperative tasks in partially-observable environments.

Multi-agent Reinforcement Learning
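
The targeting mechanism is soft attention over message signatures; a sketch with assumed shapes (senders emit key/value pairs, receivers weight the values by key-query match):

```python
import numpy as np

def targeted_messages(queries, keys, values):
    """queries: (n_recv, d), keys: (n_send, d), values: (n_send, d_msg).
    Senders influence *whom* they address through the keys they emit."""
    scores = queries @ keys.T / np.sqrt(queries.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over senders
    return attn @ values                         # aggregated message per agent
```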

Learning graphs from data: A signal representation perspective

no code implementations3 Jun 2018 Xiaowen Dong, Dorina Thanou, Michael Rabbat, Pascal Frossard

The construction of a meaningful graph topology plays a crucial role in the effective representation, processing, analysis and visualization of structured data.

Graph Learning

Efficient Large-Scale Similarity Search Using Matrix Factorization

no code implementations CVPR 2016 Ahmet Iscen, Michael Rabbat, Teddy Furon

Experiments with standard image search benchmarks, including the Yahoo100M dataset comprising 100 million images, show that our method gives comparable (and sometimes superior) accuracy compared to exhaustive search while requiring only 10% of the vector operations and memory.

Dictionary Learning · Dimensionality Reduction +1

Memory vectors for similarity search in high-dimensional spaces

no code implementations10 Dec 2014 Ahmet Iscen, Teddy Furon, Vincent Gripon, Michael Rabbat, Hervé Jégou

We study an indexing architecture to store and search in a database of high-dimensional vectors from the perspective of statistical signal processing and decision theory.

Image Retrieval
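
The core construction in its simplest, sum-aggregated form; the group size and score threshold are assumptions, and the paper's pseudo-inverse variant is not shown:

```python
import numpy as np

def build_memory_vectors(X, group_size=10):
    """Each memory vector is the sum of one group of unit-normalized
    database vectors; queries are screened against memories first."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    groups = [X[i:i + group_size] for i in range(0, len(X), group_size)]
    return [g.sum(axis=0) for g in groups], groups

def candidate_groups(q, memories, groups, threshold=0.5):
    """A large inner product m^T q suggests the group may hold a match;
    only those groups are then searched exhaustively."""
    return [g for m, g in zip(memories, groups) if m @ q > threshold]
```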

Combating Corrupt Messages in Sparse Clustered Associative Memories

no code implementations27 Sep 2014 Zhe Yao, Vincent Gripon, Michael Rabbat

In this paper we analyze and extend the neural network based associative memory proposed by Gripon and Berrou.

Storing sequences in binary tournament-based neural networks

no code implementations1 Sep 2014 Xiaoran Jiang, Vincent Gripon, Claude Berrou, Michael Rabbat

An extension to a recently introduced architecture of clique-based neural networks is presented.

Improving Sparse Associative Memories by Escaping from Bogus Fixed Points

no code implementations27 Aug 2013 Zhe Yao, Vincent Gripon, Michael Rabbat

The latter outperforms the former in retrieval rate by a large margin.
