no code implementations • 18 Sep 2023 • Pedro Valdeira, Yuejie Chi, Cláudia Soares, João Xavier
Communication efficiency is a major challenge in federated learning (FL).
no code implementations • 2 Jul 2023 • João Domingos, João Xavier
To study the VaR formulation, we start by designing a majorizer set that contains any solution of a general percentile problem.
no code implementations • 14 Apr 2023 • João Domingos, João Xavier
While regime (a) covers classical results from large deviation theory, regime (b) is new in distributed detection and arises from the ARMA setup.
no code implementations • 24 Jan 2022 • Pedro Valdeira, Cláudia Soares, João Xavier
Expectation Maximization (EM) is the standard method to learn Gaussian mixtures.
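To make the baseline concrete, here is a minimal one-dimensional EM sketch for a k-component Gaussian mixture (a generic textbook implementation, not the paper's method; the quantile-based initialization is an illustrative choice):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Illustrative EM for a k-component 1-D Gaussian mixture."""
    n = len(x)
    # Initialize weights, means (spread via quantiles), and variances.
    pi = np.full(k, 1.0 / k)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, np.var(x))
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Two well-separated clusters; EM should recover means near 0 and 5.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])
pi, mu, var = em_gmm_1d(data, k=2)
```

Each iteration monotonically increases the data log-likelihood, which is why EM is the standard fitting procedure for mixtures.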
1 code implementation • 7 Sep 2021 • Manuel Madeira, Renato Negrinho, João Xavier, Pedro M. Q. Aguiar
First-order methods for stochastic optimization have undeniable relevance, in part due to their pivotal role in machine learning.
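The prototypical first-order stochastic method is SGD; a minimal sketch on a noisy least-squares problem (a generic illustration under an assumed quadratic loss, not the paper's algorithm):

```python
import numpy as np

# Minimize f(w) = E[(x·w - y)^2] from streaming noisy samples via SGD.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])  # assumed ground-truth parameter

def sample():
    x = rng.normal(size=2)
    y = x @ w_true + 0.01 * rng.normal()  # noisy linear observation
    return x, y

w = np.zeros(2)
for t in range(1, 5001):
    x, y = sample()
    grad = 2 * (x @ w - y) * x          # stochastic gradient of the squared loss
    w -= (0.1 / np.sqrt(t)) * grad      # diminishing step size O(1/sqrt(t))
```

Only one sample's gradient is used per step, which is what makes such methods scale to large machine-learning workloads.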
no code implementations • 23 Apr 2020 • Inês Almeida, João Xavier
StochaLM can be interpreted as a dual block coordinate ascent method whose block components are selected using the state of an ergodic Markov chain.
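The selection mechanism can be illustrated on a toy problem: coordinate-wise exact minimization of a strongly convex quadratic, where the active coordinate at each step is drawn from an ergodic Markov chain rather than uniformly at random. This is a hypothetical sketch of the mechanism only, not the StochaLM algorithm; `A`, `b`, and the transition matrix `P` are assumed for illustration:

```python
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite quadratic
b = np.array([1.0, 1.0])
P = np.array([[0.3, 0.7], [0.6, 0.4]])   # ergodic transition matrix over coordinates

rng = np.random.default_rng(0)
x = np.zeros(2)
i = 0
for _ in range(200):
    # Exact minimization of f(x) = 0.5 x^T A x - b^T x along coordinate i.
    x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    # Next coordinate is the next state of the Markov chain.
    i = rng.choice(2, p=P[i])
```

Because the chain is ergodic, every coordinate is updated infinitely often, so the iterates converge to the minimizer `A⁻¹b` just as under i.i.d. block selection.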
no code implementations • 26 Mar 2018 • Inês Almeida, João Xavier
We prove that DJAM converges with probability one to the solution, provided that the personal loss functions are strongly convex and have Lipschitz gradient.