1 code implementation • NeurIPS 2023 • Adel Nabli, Eugene Belilovsky, Edouard Oyallon
Distributed training of Deep Learning models has been critical to many recent successes in the field.
1 code implementation • 26 Jul 2022 • Adel Nabli, Edouard Oyallon
This work introduces DADAO: the first decentralized, accelerated, asynchronous, primal, first-order algorithm to minimize a sum of $L$-smooth and $\mu$-strongly convex functions distributed over a given network of size $n$.
no code implementations • 11 Apr 2022 • Robin Algayres, Adel Nabli, Benoit Sagot, Emmanuel Dupoux
We introduce a simple neural encoder architecture that can be trained with an unsupervised contrastive learning objective that draws its positive samples from a data-augmented k-Nearest Neighbors search.
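The idea of mining positives via augmented nearest-neighbor search can be sketched minimally as follows. This is a hypothetical illustration, not the paper's method: `augment`, the Gaussian jitter, and the brute-force kNN are all stand-in assumptions, and the embeddings are random stand-ins for encoder outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, noise=0.05):
    # Hypothetical augmentation: small Gaussian jitter on the features.
    return x + noise * rng.standard_normal(x.shape)

def knn_positives(embeddings, k=3):
    # For each embedding, indices of its k nearest neighbors (self excluded);
    # these neighbors serve as the positive samples.
    d = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def info_nce(anchor, positive, negatives, tau=0.1):
    # Standard InfoNCE: pull the positive toward the anchor, push negatives away.
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

# Toy usage: random vectors stand in for encoder outputs.
X = rng.standard_normal((32, 16))
emb = augment(X)
pos = knn_positives(emb, k=3)
loss = info_nce(emb[0], emb[pos[0, 0]], emb[5:10])
```

In a real training loop the encoder would be updated to minimize this loss, so that kNN positives drift toward semantically similar items.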
1 code implementation • NeurIPS 2020 • Adel Nabli, Margarida Carvalho
Our framework is based on a simple curriculum: if an agent can estimate the value of instances with budgets up to $B$, then instances with budget $B+1$ can be solved in polynomial time, regardless of the direction of the optimization, by checking the value of every possible afterstate.
Combinatorial Optimization • Multi-agent Reinforcement Learning
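The curriculum described above can be illustrated on a toy problem. This is a hedged sketch under an assumed setting (a budget-$B$ instance means "pick $B$ items", and the budget-$0$ value is trivially zero), not the paper's actual environment: each budget-$b$ instance is solved by scoring every afterstate with the already-known budget-$(b-1)$ value, in either optimization direction.

```python
from functools import lru_cache

def curriculum_value(rewards, budget, maximize=True):
    # Toy instantiation: state = tuple of remaining item indices,
    # budget = number of picks left. V(., 0) = 0 is the curriculum's base case.
    opt = max if maximize else min

    @lru_cache(maxsize=None)
    def V(remaining, b):
        if b == 0 or not remaining:
            return 0.0
        # Budget b: enumerate every afterstate (one item taken) and score it
        # with the budget-(b-1) value function learned at the previous stage.
        return opt(
            rewards[i] + V(tuple(x for x in remaining if x != i), b - 1)
            for i in remaining
        )

    return V(tuple(range(len(rewards))), budget)

# Maximizing picks the two largest rewards; minimizing picks the two smallest.
best = curriculum_value((3.0, 1.0, 2.0), 2)                   # 3.0 + 2.0
worst = curriculum_value((3.0, 1.0, 2.0), 2, maximize=False)  # 1.0 + 2.0
```

Note that only the direction operator changes between the two cases; the afterstate enumeration itself is identical, which is the point of the curriculum.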