no code implementations • 27 Apr 2022 • Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael Rabbat
Our FedShuffle recipe comprises four simple-yet-powerful ingredients: 1) local shuffling of the data, 2) adjustment of the local learning rates, 3) update weighting, and 4) momentum variance reduction (Cutkosky and Orabona, 2019).
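A minimal sketch of how the four ingredients could fit together on a toy quadratic objective; the function name, the learning-rate rule, and the simplified momentum term are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fedshuffle_round(x, clients, server_lr, v, beta=0.9):
    # Hypothetical sketch of a FedShuffle-style round, not the authors' code.
    total = sum(len(data) for data in clients)
    agg = np.zeros_like(x)
    for data in clients:
        data = list(data)
        rng.shuffle(data)                     # 1) local shuffling: one pass in random order
        local_lr = 0.5 / len(data)            # 2) learning-rate adjustment (illustrative rule)
        y = x.copy()
        for z in data:
            y -= local_lr * 2.0 * (y - z)     # SGD step on the per-sample loss (y - z)^2
        agg += (len(data) / total) * (x - y)  # 3) weight each client's update by its data share
    v = agg + (1.0 - beta) * (v - agg)        # 4) simplified stand-in for the MVR momentum term
    return x - server_lr * v, v
```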
1 code implementation • 14 Apr 2022 • Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas
We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations.
Tasks: Self-Supervised Learning, Semi-Supervised Image Classification
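A minimal sketch of an MSN-style objective for the entry above: cross-entropy between prototype assignments of a masked and an unmasked view. The encoder, patch masking, and the paper's regularization are omitted, and the temperatures are assumptions:

```python
import numpy as np

def softmax(logits, tau):
    z = logits / tau
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msn_loss(z_masked, z_full, prototypes, tau_anchor=0.1, tau_target=0.025):
    # Anchor assignments come from the masked view; sharper targets from the unmasked one.
    p_anchor = softmax(z_masked @ prototypes.T, tau_anchor)
    p_target = softmax(z_full @ prototypes.T, tau_target)
    return float(np.mean(-np.sum(p_target * np.log(p_anchor + 1e-12), axis=-1)))
```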
no code implementations • 8 Apr 2022 • Krishna Pillutla, Kshitiz Malik, Abdelrahman Mohamed, Michael Rabbat, Maziar Sanjabi, Lin Xiao
We consider two federated learning algorithms for training partially personalized models, where the shared and personal parameters are updated either simultaneously or alternately on the devices.
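The two update schedules can be sketched as follows; `grad_u` and `grad_v` are assumed stochastic-gradient oracles for the shared and personal parameter blocks, and the single learning rate is an illustrative simplification:

```python
def simultaneous_step(u, v, grad_u, grad_v, batch, lr=0.1):
    # Both parameter blocks step from the same iterate.
    return u - lr * grad_u(u, v, batch), v - lr * grad_v(u, v, batch)

def alternating_step(u, v, grad_u, grad_v, batch, lr=0.1):
    # Personal block first, then the shared block at the updated v.
    v = v - lr * grad_v(u, v, batch)
    return u - lr * grad_u(u, v, batch), v
```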
no code implementations • 22 Jun 2021 • Robert M. Gower, Aaron Defazio, Michael Rabbat
MOTAPS can be seen as a variant of the Stochastic Polyak (SP) method, which likewise uses loss values to adjust the stepsize.
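For reference, a sketch of the SP update, where the per-sample optimal value `f_i_star` is assumed known (zero under interpolation):

```python
import numpy as np

def sp_step(x, f_i, grad_i, f_i_star=0.0):
    # The sampled loss value itself scales the step, Polyak-style.
    g = grad_i(x)
    gamma = (f_i(x) - f_i_star) / (np.dot(g, g) + 1e-12)
    return x - gamma * g
```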
no code implementations • 11 Jun 2021 • John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael Rabbat, Mani Malek, Dzmitry Huba
On the other hand, asynchronous aggregation of client updates in FL (i.e., asynchronous FL) alleviates the scalability issue.
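One way to picture asynchronous aggregation: the server folds in each client delta as it arrives, rather than waiting on a synchronous round. The staleness-discount rule below is an assumption for illustration:

```python
def async_apply(x, delta, staleness, server_lr=1.0):
    # Apply a client's update immediately; older (staler) updates count for less.
    return x - server_lr * delta / (1.0 + staleness) ** 0.5
```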
4 code implementations • ICCV 2021 • Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Armand Joulin, Nicolas Ballas, Michael Rabbat
This paper proposes a novel method of learning by predicting view assignments with support samples (PAWS).
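A compact sketch of the PAWS idea: each view is classified by a soft nearest-neighbour vote over the labeled support samples, and the two views are trained to agree, with sharpened targets. The temperatures and sharpening constant are assumptions:

```python
import numpy as np

def snn(z, support, labels_onehot, tau=0.1):
    # Soft nearest-neighbour classifier over the labeled support set.
    s = z @ support.T / tau
    s -= s.max(axis=-1, keepdims=True)
    p = np.exp(s)
    p /= p.sum(axis=-1, keepdims=True)
    return p @ labels_onehot

def paws_loss(z1, z2, support, labels_onehot, T=0.25):
    p1 = snn(z1, support, labels_onehot)
    p2 = snn(z2, support, labels_onehot)
    sharpen = lambda p: p ** (1 / T) / (p ** (1 / T)).sum(axis=-1, keepdims=True)
    ce = lambda t, p: float(-np.sum(t * np.log(p + 1e-12), axis=-1).mean())
    return 0.5 * (ce(sharpen(p2), p1) + ce(sharpen(p1), p2))
```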
no code implementations • 6 Oct 2020 • Shagun Sodhani, Olivier Delalleau, Mahmoud Assran, Koustuv Sinha, Nicolas Ballas, Michael Rabbat
Surprisingly, we find that even at moderate batch sizes, models trained with codistillation can perform as well as models trained with synchronous data-parallel methods, despite using a much weaker synchronization mechanism.
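Codistillation in miniature: each replica minimizes its task loss plus a term pulling it toward a peer's predictions, replacing exact parameter synchronization. The mixing weight is an assumption:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def codistill_loss(logits, peer_logits, y_onehot, alpha=0.5):
    p, q = softmax(logits), softmax(peer_logits)
    ce = float(-np.sum(y_onehot * np.log(p + 1e-12), axis=-1).mean())  # task loss
    distill = float(-np.sum(q * np.log(p + 1e-12), axis=-1).mean())    # match the peer
    return (1 - alpha) * ce + alpha * distill
```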
no code implementations • 24 Jun 2020 • Mahmoud Assran, Arda Aytekin, Hamid Feyzmahdavian, Mikael Johansson, Michael Rabbat
Motivated by large-scale optimization problems arising in the context of machine learning, there have been several advances in the study of asynchronous parallel and distributed optimization methods during the past decade.
2 code implementations • 18 Jun 2020 • Mahmoud Assran, Nicolas Ballas, Lluis Castrejon, Michael Rabbat
We investigate a strategy for improving the efficiency of contrastive learning of visual representations by leveraging a small amount of supervised information during pre-training.
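A sketch of a supervised contrastive term of the kind described: embeddings of the same class act as positives against the rest of the batch. Treat this as an illustration of the idea rather than the paper's exact loss:

```python
import numpy as np

def sup_nce_loss(z, labels, tau=0.1):
    sims = np.exp(z @ z.T / tau)
    np.fill_diagonal(sims, 0.0)                   # exclude self-similarity
    losses = []
    for i in range(len(z)):
        pos = sims[i, labels == labels[i]].sum()  # same-class samples are positives
        losses.append(-np.log(pos / sims[i].sum() + 1e-12))
    return float(np.mean(losses))
```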
no code implementations • ICML 2020 • Mahmoud Assran, Michael Rabbat
We study Nesterov's accelerated gradient method with constant step-size and momentum parameters in the stochastic approximation setting (unbiased gradients with bounded variance) and the finite-sum setting (where randomness is due to sampling mini-batches).
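The update being analyzed, in code: a look-ahead gradient evaluation with constant step-size and momentum, where the gradient oracle is stochastic:

```python
def nag_step(x, v, stoch_grad, lr=0.01, beta=0.9):
    # Evaluate the (unbiased, bounded-variance) gradient at the look-ahead point.
    g = stoch_grad(x + beta * v)
    v = beta * v - lr * g
    return x + v, v
```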
1 code implementation • 6 Jan 2020 • Florian Knoll, Tullie Murrell, Anuroop Sriram, Nafissa Yakubova, Jure Zbontar, Michael Rabbat, Aaron Defazio, Matthew J. Muckley, Daniel K. Sodickson, C. Lawrence Zitnick, Michael P. Recht
Conclusion: The challenge led to new developments in machine learning for image reconstruction, provided insight into the current state of the art in the field, and highlighted remaining hurdles for clinical adoption.
1 code implementation • ICLR 2020 • Jianyu Wang, Vinayak Tantia, Nicolas Ballas, Michael Rabbat
We provide theoretical convergence guarantees showing that SlowMo converges to a stationary point of smooth non-convex losses.
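A simplified picture of the SlowMo outer loop: after the base optimizer runs on each worker, the workers are averaged and a slow momentum step is taken. The inner learning-rate scaling from the paper is omitted here:

```python
import numpy as np

def slowmo_outer_step(x_workers, x_slow, u, alpha=1.0, beta=0.7):
    x_avg = np.mean(x_workers, axis=0)  # average the workers' parameters
    u = beta * u + (x_slow - x_avg)     # slow momentum on the averaged pseudo-gradient
    x_slow = x_slow - alpha * u
    return x_slow, u
```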
1 code implementation • NeurIPS 2019 • Mahmoud Assran, Joshua Romoff, Nicolas Ballas, Joelle Pineau, Michael Rabbat
We show that we can run several loosely coupled GALA agents in parallel on a single GPU and achieve significantly higher hardware utilization and frame-rates than vanilla A2C at comparable power draws.
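The loose coupling comes from gossip: each learner periodically mixes parameters with its graph neighbours instead of synchronizing globally. A one-round sketch with an assumed half-and-half mixing weight:

```python
import numpy as np

def gossip_round(params, neighbors):
    # params: list of per-agent parameter arrays; neighbors[i]: indices of agent i's peers.
    return [0.5 * params[i] + 0.5 * np.mean([params[j] for j in neighbors[i]], axis=0)
            for i in range(len(params))]
```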
1 code implementation • 28 Nov 2018 • Yingxue Zhang, Michael Rabbat
Graph convolutional neural networks (Graph-CNNs) extend traditional CNNs to handle data that is supported on a graph.
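For concreteness, one standard graph-convolution layer with symmetric normalization, the generic building block such architectures extend (shown for orientation, not as this paper's specific contribution):

```python
import numpy as np

def graph_conv(X, A, W):
    # X: node features, A: adjacency matrix, W: learned weight matrix.
    A_hat = A + np.eye(A.shape[0])                         # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)  # ReLU
```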
1 code implementation • ICLR 2019 • Mahmoud Assran, Nicolas Loizou, Nicolas Ballas, Michael Rabbat
Distributed data-parallel algorithms aim to accelerate the training of deep neural networks by parallelizing the computation of large mini-batch gradient updates across multiple nodes.
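A single-step sketch of the push-sum-based approach studied here (Stochastic Gradient Push): local SGD followed by push-sum gossip, with nodes stacked as rows and `P` an assumed column-stochastic mixing matrix:

```python
import numpy as np

def sgp_step(X, w, G, P, lr=0.1):
    # X: per-node parameters (rows), G: per-node stochastic gradients, w: push-sum weights.
    X = X - lr * G               # local SGD step at every node
    X, w = P @ X, P @ w          # push-sum mixing of parameters and weights
    return X, w, X / w[:, None]  # de-biased parameter estimates
```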
8 code implementations • 21 Nov 2018 • Jure Zbontar, Florian Knoll, Anuroop Sriram, Tullie Murrell, Zhengnan Huang, Matthew J. Muckley, Aaron Defazio, Ruben Stern, Patricia Johnson, Mary Bruno, Marc Parente, Krzysztof J. Geras, Joe Katsnelson, Hersh Chandarana, Zizhao Zhang, Michal Drozdzal, Adriana Romero, Michael Rabbat, Pascal Vincent, Nafissa Yakubova, James Pinkerton, Duo Wang, Erich Owens, C. Lawrence Zitnick, Michael P. Recht, Daniel K. Sodickson, Yvonne W. Lui
Accelerating Magnetic Resonance Imaging (MRI) by taking fewer measurements has the potential to reduce medical costs, minimize stress to patients and make MRI possible in applications where it is currently prohibitively slow or expensive.
no code implementations • 31 Oct 2018 • Nicolas Loizou, Michael Rabbat, Peter Richtárik
In this work we present novel provably accelerated gossip algorithms for solving the average consensus problem.
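A common acceleration mechanism in this family mixes in the previous iterate, heavy-ball style; this sketch illustrates the general idea rather than the paper's specific schemes, and `beta` must be tuned to the graph's spectral gap:

```python
import numpy as np

def accelerated_gossip(x, W, beta, iters=30):
    # x: per-node values, W: symmetric doubly-stochastic gossip matrix.
    x_prev = x.copy()
    for _ in range(iters):
        x, x_prev = (1 + beta) * (W @ x) - beta * x_prev, x
    return x  # each entry approaches the network-wide average
```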
no code implementations • ICLR 2019 • Abhishek Das, Théophile Gervet, Joshua Romoff, Dhruv Batra, Devi Parikh, Michael Rabbat, Joelle Pineau
We propose a targeted communication architecture for multi-agent reinforcement learning, where agents learn both what messages to send and whom to address them to while performing cooperative tasks in partially-observable environments.
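Learning "whom to address" can be realized with attention: senders emit keys and values, receivers emit queries, and each receiver softly selects which incoming messages to attend to. A minimal sketch with assumed shapes:

```python
import numpy as np

def targeted_messages(keys, values, queries):
    # keys/queries: (agents, d), values: (agents, msg_dim).
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    scores -= scores.max(axis=-1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ values  # aggregated incoming message per agent
```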
no code implementations • 3 Jun 2018 • Xiaowen Dong, Dorina Thanou, Michael Rabbat, Pascal Frossard
The construction of a meaningful graph topology plays a crucial role in the effective representation, processing, analysis and visualization of structured data.
no code implementations • CVPR 2016 • Ahmet Iscen, Michael Rabbat, Teddy Furon
Experiments with standard image search benchmarks, including the Yahoo100M dataset comprising 100 million images, show that our method achieves accuracy comparable (and sometimes superior) to exhaustive search while requiring only 10% of the vector operations and memory.
no code implementations • 10 Dec 2014 • Ahmet Iscen, Teddy Furon, Vincent Gripon, Michael Rabbat, Hervé Jégou
We study an indexing architecture to store and search in a database of high-dimensional vectors from the perspective of statistical signal processing and decision theory.
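The group-testing flavour of such an architecture, sketched with plain-sum "memory vectors" (more refined constructions are possible): score whole groups with one dot product each, then search exhaustively only inside the promising groups. The group size and probe count are assumptions:

```python
import numpy as np

def build_memories(X, m=10):
    # One memory vector summarizes each block of m database vectors.
    return np.add.reduceat(X, np.arange(0, len(X), m), axis=0)

def search(q, memories, X, m=10, probe=2):
    best = np.argsort(-(memories @ q))[:probe]  # test groups first
    cands = np.concatenate([np.arange(g * m, min((g + 1) * m, len(X))) for g in best])
    return cands[np.argmax(X[cands] @ q)]       # exhaustive check within winning groups
```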
no code implementations • 27 Sep 2014 • Zhe Yao, Vincent Gripon, Michael Rabbat
In this paper we analyze and extend the neural network based associative memory proposed by Gripon and Berrou.
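For orientation, a toy version of the Gripon-Berrou memory: a message activates one neuron per cluster and is stored as a clique, and retrieval iterates scoring with winner-take-all per cluster. The constants and the memory term are illustrative:

```python
import numpy as np

def store(W, msg, l):
    idx = [c * l + s for c, s in enumerate(msg)]  # one active neuron per cluster
    for i in idx:
        for j in idx:
            if i != j:
                W[i, j] = 1.0                     # connect the clique
    return W

def retrieve(W, partial, c, l, iters=4):
    act = np.zeros(c * l)
    for k, s in enumerate(partial):
        if s is not None:                         # known symbols start active
            act[k * l + s] = 1.0
    for _ in range(iters):
        score = W @ act + c * act                 # memory term keeps known neurons on
        act = np.zeros(c * l)
        for k in range(c):
            act[k * l + int(np.argmax(score[k * l:(k + 1) * l]))] = 1.0
    return [int(np.argmax(act[k * l:(k + 1) * l])) for k in range(c)]
```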
no code implementations • 1 Sep 2014 • Xiaoran Jiang, Vincent Gripon, Claude Berrou, Michael Rabbat
An extension to a recently introduced architecture of clique-based neural networks is presented.
no code implementations • 27 Aug 2013 • Zhe Yao, Vincent Gripon, Michael Rabbat
The latter outperforms the former in retrieval rate by a wide margin.