no code implementations • 19 Jun 2023 • Timothée Mathieu, Riccardo Della Vecchia, Alena Shilova, Matheus Medeiros Centa, Hector Kohler, Odalric-Ambrym Maillard, Philippe Preux
When comparing several RL algorithms, a major question is how many executions must be run, and how we can ensure that the results of such a comparison are theoretically sound.
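As a rough illustration of why the number of runs matters, the snippet below compares the mean returns of two algorithms over independent seeded runs with a Welch-style confidence interval. This is a generic statistical sketch under assumed simulated data, not the procedure developed in the paper.

```python
import numpy as np

def compare_algorithms(returns_a, returns_b, z=1.96):
    """Welch-style 95% confidence interval for the difference in mean
    return between two RL algorithms over independent runs (seeds).
    Hypothetical helper for illustration only."""
    a = np.asarray(returns_a, dtype=float)
    b = np.asarray(returns_b, dtype=float)
    diff = a.mean() - b.mean()
    # standard error of the difference, assuming independent runs
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    lo, hi = diff - z * se, diff + z * se
    # inconclusive if the interval straddles zero
    return diff, (lo, hi), not (lo <= 0.0 <= hi)

rng = np.random.default_rng(0)
runs_a = rng.normal(200.0, 10.0, size=30)  # simulated scores, algorithm A
runs_b = rng.normal(180.0, 10.0, size=30)  # simulated scores, algorithm B
diff, ci, significant = compare_algorithms(runs_a, runs_b)
```

With too few runs the interval widens and the comparison becomes inconclusive, which is exactly the failure mode a theoretically sound protocol must control.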
no code implementations • 18 Feb 2023 • Riccardo Della Vecchia, Debabrota Basu
Endogeneity, i.e. the dependence between noise and covariates, is a common phenomenon in real data, arising from omitted variables, strategic behaviours, measurement errors, etc.
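A minimal sketch of the problem: when noise enters both the covariate and the outcome, ordinary least squares is biased, while an instrumental-variable estimator recovers the true coefficient. The data-generating process and instrument below are hypothetical, chosen only to exhibit the bias.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# Hypothetical endogenous setup: the noise u enters both x and y.
z = rng.normal(size=n)             # instrument: correlated with x, not with u
u = rng.normal(size=n)             # confounding noise
x = z + u + rng.normal(size=n)     # endogenous covariate
y = 2.0 * x + u                    # true coefficient is 2.0

beta_ols = (x @ y) / (x @ x)       # biased, since E[x * u] != 0
beta_iv = (z @ y) / (z @ x)        # IV estimator: Cov(z, y) / Cov(z, x)
```

Here `beta_ols` overshoots the true value 2.0 because x and u are correlated, while `beta_iv` is consistent because the instrument z is uncorrelated with u.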
no code implementations • 16 Oct 2022 • Riccardo Della Vecchia, Alena Shilova, Philippe Preux, Riad Akrour
Compared to these learning frameworks, one of the major difficulties of RL is the absence of i.i.d.
no code implementations • 9 Jun 2021 • Nicolò Cesa-Bianchi, Tommaso R. Cesari, Riccardo Della Vecchia
We study the interplay between feedback and communication in a cooperative online learning setting where a network of agents solves a task in which the learners' feedback is determined by an arbitrary graph.
no code implementations • 23 Feb 2021 • Maximilian Mordig, Riccardo Della Vecchia, Nicolò Cesa-Bianchi, Bernhard Schölkopf
Our setting is motivated by a PhD market of students, advisors, and co-advisors, and can be generalized to supply chain networks viewed as $n$-sided markets.
Computer Science and Game Theory · Theoretical Economics · Combinatorics
no code implementations • 28 Jan 2021 • Maximilian Mordig, Riccardo Della Vecchia
In this work we summarize the procedure that, in its final step, matches students to advisors in the ELLIS 2020 PhD program.
Computer Science and Game Theory · Theoretical Economics · Combinatorics
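A standard building block for such student–advisor matching markets is Gale–Shapley deferred acceptance, sketched below for the one-to-one case. This is a generic illustration of stable matching with hypothetical preference lists, not the exact ELLIS 2020 procedure.

```python
def deferred_acceptance(student_prefs, advisor_prefs):
    """Student-proposing Gale-Shapley deferred acceptance.
    Assumes complete preference lists and equal numbers of
    students and advisors; illustrative sketch only."""
    # rank[a][s] = position of student s in advisor a's preference list
    rank = {a: {s: i for i, s in enumerate(p)}
            for a, p in advisor_prefs.items()}
    free = list(student_prefs)                # students not yet matched
    next_choice = {s: 0 for s in student_prefs}
    match = {}                                # advisor -> student
    while free:
        s = free.pop()
        a = student_prefs[s][next_choice[s]]  # propose to next advisor
        next_choice[s] += 1
        if a not in match:
            match[a] = s
        elif rank[a][s] < rank[a][match[a]]:  # advisor prefers s: swap
            free.append(match[a])
            match[a] = s
        else:
            free.append(s)                    # rejected, tries next advisor
    return {s: a for a, s in match.items()}

# hypothetical toy instance
students = {"s1": ["a1", "a2"], "s2": ["a1", "a2"]}
advisors = {"a1": ["s2", "s1"], "a2": ["s1", "s2"]}
matching = deferred_acceptance(students, advisors)
```

The resulting matching is stable: no student and advisor both prefer each other to their assigned partners.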
no code implementations • 5 Oct 2020 • Riccardo Della Vecchia, Tommaso Cesari
Furthermore, we prove that this is only $\sqrt{k \log k}$-away from the best achievable rate and that Coop-FTPL has a state-of-the-art $T^{3/2}$ worst-case computational complexity.
no code implementations • 15 Nov 2019 • Carlo Baldassi, Riccardo Della Vecchia, Carlo Lucibello, Riccardo Zecchina
The geometrical features of the (non-convex) loss landscape of neural network models are crucial in ensuring successful optimization and, most importantly, the capability to generalize well.