no code implementations • 12 Dec 2023 • Khaled Eldowa, Andrea Paudice
Finally, we support our theory with illustrative experiments that compare the behavior of the average of the iterates with that of the last iterate in heavy-tailed noise regimes.
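As a toy illustration of that comparison (not the paper's experimental setup; the quadratic objective, Pareto tail index, and step sizes below are arbitrary choices), one can run SGD under symmetrized heavy-tailed noise and track both the last iterate and the running average:

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_noise(size, alpha=1.5):
    # Symmetrized Pareto noise: zero mean, infinite variance for alpha <= 2.
    signs = rng.choice([-1.0, 1.0], size=size)
    return signs * rng.pareto(alpha, size=size)

# Minimize f(x) = x^2 / 2 with noisy gradients g_t = x_t + noise_t.
T, eta0 = 10_000, 0.5
x, avg = 5.0, 0.0
for t in range(1, T + 1):
    g = x + pareto_noise(1)[0]
    x -= eta0 / np.sqrt(t) * g      # last iterate after this step
    avg += (x - avg) / t            # running average of the iterates

print(f"last iterate: {x:.3f}  average of iterates: {avg:.3f}")
```

Under heavy tails, single runs of the last iterate tend to be far more erratic across seeds than the average, which is the kind of behavior such experiments probe.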
no code implementations • 13 Jul 2023 • Roberto Colomboni, Emmanuel Esposito, Andrea Paudice
The fat-shattering dimension characterizes the uniform convergence property of real-valued functions.
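For context, the standard definition: a finite set $\{x_1, \dots, x_n\}$ is $\gamma$-shattered by a class $\mathcal{F}$ of real-valued functions if there exist witnesses $r_1, \dots, r_n \in \mathbb{R}$ such that for every sign pattern $\sigma \in \{-1, +1\}^n$ some $f \in \mathcal{F}$ satisfies $\sigma_i\,(f(x_i) - r_i) \ge \gamma$ for all $i$; the $\gamma$-fat-shattering dimension of $\mathcal{F}$ is the size of the largest such set.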
no code implementations • 8 Sep 2022 • Marco Bressan, Nicolò Cesa-Bianchi, Silvio Lattanzi, Andrea Paudice, Maximilian Thiessen
In this work we show that, by carefully combining the two types of queries, a binary classifier can be learned in time $\operatorname{poly}(n+m)$ using only $O(m^2 \log n)$ label queries and $O\big(m \log \frac{m}{\gamma}\big)$ seed queries; the result extends to $k$-class classifiers at the price of a $k! k^2$ multiplicative overhead.
no code implementations • 2 Sep 2022 • François Bachoc, Tommaso Cesari, Roberto Colomboni, Andrea Paudice
We analyze the cumulative regret of the Dyadic Search algorithm of Bachoc et al. [2022].
no code implementations • 17 Aug 2022 • Daniela A. Parletta, Andrea Paudice, Massimiliano Pontil, Saverio Salzo
In this work we study high-probability bounds for stochastic subgradient methods under heavy-tailed noise.
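Gradient clipping is one standard device for taming heavy-tailed noise; the sketch below runs a clipped stochastic subgradient method on a toy nonsmooth objective. It is only an illustration under that assumption, not necessarily the method analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def clip(g, tau):
    # Rescale the subgradient so its Euclidean norm is at most tau.
    norm = np.linalg.norm(g)
    return g if norm <= tau else (tau / norm) * g

# Minimize the nonsmooth f(x) = ||x||_1 under heavy-tailed gradient noise.
d, T, eta, tau = 10, 5_000, 0.01, 5.0
x = np.ones(d)
for _ in range(T):
    subgrad = np.sign(x)                    # a subgradient of ||x||_1 at x
    noise = rng.standard_t(df=2.1, size=d)  # Student-t noise: heavy tails
    x -= eta * clip(subgrad + noise, tau)

print("distance to the optimum (the origin):", np.linalg.norm(x))
```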
no code implementations • 13 Aug 2022 • François Bachoc, Tommaso Cesari, Roberto Colomboni, Andrea Paudice
This paper studies a natural generalization of the problem of minimizing a univariate convex function $f$ by querying its values sequentially.
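In this query model, classical ternary search is the simplest baseline: each round spends two value queries and discards a third of the interval, which convexity makes safe. A minimal sketch of that baseline (not the paper's algorithm):

```python
def trisect_min(f, lo, hi, queries=60):
    # Ternary search: query f at two interior points per round and
    # discard a third of the interval; convexity of f guarantees the
    # minimizer is never discarded.
    for _ in range(queries // 2):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) <= f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

print(trisect_min(lambda x: (x - 1.3) ** 2, -10, 10))  # ~1.3
```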
no code implementations • NeurIPS 2021 • Marco Bressan, Nicolò Cesa-Bianchi, Silvio Lattanzi, Andrea Paudice
We study an active cluster recovery problem where, given a set of $n$ points and an oracle answering queries like "are these two points in the same cluster?", the goal is to recover all clusters exactly using as few queries as possible.
no code implementations • NeurIPS 2021 • Nicolò Cesa-Bianchi, Pierre Laforgue, Andrea Paudice, Massimiliano Pontil
We introduce and analyze MT-OMD, a multitask generalization of Online Mirror Descent (OMD) which operates by sharing updates between tasks.
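A toy way to see the effect of sharing updates across tasks: run independent online gradient descent per task, then shrink every task's iterate toward the across-task mean after each round. This is only a caricature of the idea; MT-OMD itself shares information inside the mirror descent update, and the coupling strength `lam` below is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(2)

# Related task optima: a shared component plus task-specific offsets.
K, d, T, eta, lam = 4, 5, 1_000, 0.05, 0.3
targets = rng.normal(size=(K, d)) + rng.normal(size=d)
X = np.zeros((K, d))
for _ in range(T):
    grads = X - targets                       # gradient of 0.5*||x_k - target_k||^2
    X -= eta * grads                          # per-task gradient step
    X = (1 - lam) * X + lam * X.mean(axis=0)  # sharing step: pull toward the mean

print("avg distance to task optima:", np.linalg.norm(X - targets, axis=1).mean())
```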
no code implementations • 31 Jan 2021 • Marco Bressan, Nicolò Cesa-Bianchi, Silvio Lattanzi, Andrea Paudice
Previous results show that clusters in Euclidean spaces that are convex and separated with a margin can be reconstructed exactly using only $O(\log n)$ same-cluster queries, where $n$ is the number of input points.
no code implementations • 14 Dec 2020 • Andreas Maurer, Daniela A. Parletta, Andrea Paudice, Massimiliano Pontil
Designing learning algorithms that are resistant to perturbations of the underlying data distribution is a problem of wide practical and theoretical importance.
no code implementations • NeurIPS 2020 • Marco Bressan, Nicolò Cesa-Bianchi, Silvio Lattanzi, Andrea Paudice
Given a finite set of input points, and an oracle revealing whether any two points lie in the same cluster, our goal is to recover all clusters exactly using as few queries as possible.
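For intuition about this query model, here is the naive baseline that recovers all clusters exactly with $O(kn)$ same-cluster queries, by comparing each point against one representative per discovered cluster (the results above show that margin conditions allow exponentially fewer queries):

```python
def recover_clusters(points, same_cluster):
    # Naive exact recovery with a same-cluster oracle: compare each point
    # against one representative of every cluster found so far.
    clusters = []                      # list of lists of point indices
    for i in range(len(points)):
        for c in clusters:
            if same_cluster(points[c[0]], points[i]):
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Toy oracle: ground-truth labels hidden behind the query interface.
labels = [0, 1, 0, 2, 1, 0]
pts = list(range(len(labels)))
print(recover_clusters(pts, lambda a, b: labels[a] == labels[b]))
# -> [[0, 2, 5], [1, 4], [3]]
```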
1 code implementation • NeurIPS 2019 • Marco Bressan, Nicolò Cesa-Bianchi, Andrea Paudice, Fabio Vitale
In this work we investigate correlation clustering as an active learning problem: each similarity score can be learned by making a query, and the goal is to minimise both the disagreements and the total number of queries.
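The classic KwikCluster pivot rule is a natural starting point in this setting: each similarity score is learned only when queried. Below is a sketch under the simplifying assumption of a binary similarity oracle; the paper's algorithms additionally trade off the number of queries against the disagreements incurred:

```python
import random

def kwik_cluster(nodes, query, seed=0):
    # KwikCluster pivot rule: pick a random pivot, query its similarity
    # with every remaining node, cluster the positives with it, recurse.
    # Each call to query(u, v) counts as one similarity query.
    rnd = random.Random(seed)
    nodes = list(nodes)
    clusters = []
    while nodes:
        pivot = nodes.pop(rnd.randrange(len(nodes)))
        cluster = [pivot] + [v for v in nodes if query(pivot, v)]
        nodes = [v for v in nodes if v not in cluster]
        clusters.append(cluster)
    return clusters

# Toy similarity oracle: two hidden groups.
group = {v: v % 2 for v in range(8)}
print(kwik_cluster(range(8), lambda u, v: group[u] == group[v]))
```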
no code implementations • 2 Mar 2018 • Andrea Paudice, Luis Muñoz-González, Emil C. Lupu
Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points.
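The simplest instance is a uniformly random flip of a fixed fraction of binary labels; adversarial variants instead choose which labels to flip so as to maximize damage. A minimal sketch of the random version:

```python
import numpy as np

def flip_labels(y, fraction, rng):
    # Random label-flipping poisoning on binary labels in {0, 1}:
    # the attacker controls `fraction` of the training labels.
    y = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y[idx] = 1 - y[idx]
    return y

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=20)
print(y)
print(flip_labels(y, 0.2, rng))
```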
1 code implementation • 8 Feb 2018 • Andrea Paudice, Luis Muñoz-González, Andras Gyorgy, Emil C. Lupu
We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered when crafting the attack.
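This observation suggests outlier-based sanitization of the training set. The sketch below flags, within each class, points unusually far from the class centroid; it is one simple instance of the idea, with an arbitrary threshold, not necessarily the paper's detector:

```python
import numpy as np

def centroid_filter(X, y, quantile=0.9):
    # Distance-based sanitization: within each class, flag points whose
    # distance to the class centroid exceeds the given quantile.
    keep = np.ones(len(X), dtype=bool)
    for c in np.unique(y):
        mask = y == c
        d = np.linalg.norm(X[mask] - X[mask].mean(axis=0), axis=1)
        keep[np.where(mask)[0][d > np.quantile(d, quantile)]] = False
    return keep

rng = np.random.default_rng(4)
# 50 genuine points near the origin plus 5 far-away poisoning points.
X = np.vstack([rng.normal(size=(50, 2)), rng.normal(6.0, 1.0, size=(5, 2))])
y = np.zeros(55, dtype=int)
print("flagged indices:", np.where(~centroid_filter(X, y))[0])
```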
no code implementations • 29 Aug 2017 • Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process.
no code implementations • 22 Jun 2016 • Luis Muñoz-González, Daniele Sgandurra, Andrea Paudice, Emil C. Lupu
We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, showing the advantages of approximate inference in scaling to larger attack graphs.
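For the message-passing mechanics, here is a minimal sum-product loopy BP on a toy binary pairwise MRF. Bayesian attack graphs are directed models with their own factorization, so this sketch only illustrates the iteration, not the paper's setup:

```python
import numpy as np

def loopy_bp(unary, edges, pairwise, iters=50):
    # Sum-product loopy belief propagation on a pairwise MRF with binary
    # variables: unary[i] is a length-2 potential, pairwise[(i, j)] a 2x2
    # potential for each undirected edge (i, j) in edges.
    msgs = {(i, j): np.ones(2) for (a, b) in edges for (i, j) in [(a, b), (b, a)]}
    nbrs = {}
    for a, b in edges:
        nbrs.setdefault(a, []).append(b)
        nbrs.setdefault(b, []).append(a)
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            # Product of incoming messages to i, excluding the one from j.
            pre = unary[i] * np.prod([msgs[(k, i)] for k in nbrs[i] if k != j], axis=0)
            psi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
            m = pre @ psi                 # sum over x_i of pre(x_i) * psi(x_i, x_j)
            new[(i, j)] = m / m.sum()     # normalize for numerical stability
        msgs = new
    beliefs = {}
    for i in unary:
        b = unary[i] * np.prod([msgs[(k, i)] for k in nbrs[i]], axis=0)
        beliefs[i] = b / b.sum()
    return beliefs

# Toy graph with a cycle: three nodes, attractive pairwise potentials.
unary = {0: np.array([0.9, 0.1]), 1: np.array([0.5, 0.5]), 2: np.array([0.5, 0.5])}
edges = [(0, 1), (1, 2), (0, 2)]
pairwise = {e: np.array([[0.8, 0.2], [0.2, 0.8]]) for e in edges}
print(loopy_bp(unary, edges, pairwise))
```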