Search Results for author: Kevin Scaman

Found 17 papers, 3 papers with code

Tight High Probability Bounds for Linear Stochastic Approximation with Fixed Stepsize

no code implementations NeurIPS 2021 Alain Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov, Kevin Scaman, Hoi-To Wai

This family of methods arises in many machine learning tasks and is used to obtain approximate solutions of a linear system $\bar{A}\theta = \bar{b}$ for which $\bar{A}$ and $\bar{b}$ can only be accessed through random estimates $\{({\bf A}_n, {\bf b}_n): n \in \mathbb{N}^*\}$.
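The fixed-stepsize iteration the abstract refers to can be sketched in a few lines (a minimal illustration, not the paper's analysis; the dimension, noise level, and stepsize below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A_bar = 2.0 * np.eye(d)                  # true system matrix (positive definite)
b_bar = np.ones(d)
theta_star = np.linalg.solve(A_bar, b_bar)

theta = np.zeros(d)
alpha = 0.05                             # fixed stepsize
for n in range(20000):
    A_n = A_bar + 0.1 * rng.standard_normal((d, d))   # random estimate of A
    b_n = b_bar + 0.1 * rng.standard_normal(d)        # random estimate of b
    theta = theta + alpha * (b_n - A_n @ theta)       # LSA update

# with a fixed stepsize the iterates settle in a noise ball around theta_star
error = np.linalg.norm(theta - theta_star)
```

With a fixed stepsize the error does not vanish: the iterates concentrate in a ball around $\theta^\star = \bar{A}^{-1}\bar{b}$, and the paper bounds the radius of that ball with high probability.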

Lipschitz Normalization for Self-Attention Layers with Application to Graph Neural Networks

1 code implementation 8 Mar 2021 George Dasoulas, Kevin Scaman, Aladin Virmaux

To address this issue, we derive a theoretical analysis of the Lipschitz continuity of attention modules and introduce LipschitzNorm, a simple and parameter-free normalization for self-attention mechanisms that enforces Lipschitz continuity of the model.

Deep Attention, Graph Attention, +1
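To make the idea of a parameter-free, norm-based rescaling of attention concrete, here is a hypothetical NumPy sketch; the rescaling below (dividing scores by the largest query and key norms) is our own stand-in for illustration and is not the paper's exact LipschitzNorm formula:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def normalized_attention(Q, K, V, eps=1e-9):
    """Illustrative parameter-free normalization of dot-product attention.

    Dividing the scores by the largest query/key norms bounds them in
    [-1, 1] regardless of the input scale (a stand-in for the paper's
    LipschitzNorm, whose exact formula differs).
    """
    scale = max(np.linalg.norm(Q, axis=-1).max() *
                np.linalg.norm(K, axis=-1).max(), eps)
    scores = (Q @ K.T) / scale           # bounded in [-1, 1]
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 5, 4))
out = normalized_attention(Q, K, V)      # shape (5, 4)
```

Note that the output is invariant to rescaling the queries and keys, which is what keeps the attention map from blowing up on large inputs.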

Ego-based Entropy Measures for Structural Representations on Graphs

no code implementations 17 Feb 2021 George Dasoulas, Giannis Nikolentzos, Kevin Scaman, Aladin Virmaux, Michalis Vazirgiannis

Machine learning on graph-structured data has attracted high research interest due to the emergence of Graph Neural Networks (GNNs).

Graph Classification

Robustness Analysis of Non-Convex Stochastic Gradient Descent using Biased Expectations

no code implementations NeurIPS 2020 Kevin Scaman, Cedric Malherbe

In the case of sub-Gaussian and centered noise, we prove that, with probability $1-\delta$, the number of iterations to reach a precision $\varepsilon$ for the squared gradient norm is $O(\varepsilon^{-2}\ln(1/\delta))$.

A Simple and Efficient Smoothing Method for Faster Optimization and Local Exploration

no code implementations NeurIPS 2020 Kevin Scaman, Ludovic Dos Santos, Merwan Barlier, Igor Colin

This novel smoothing method is then used to improve first-order non-smooth optimization (both convex and non-convex) by allowing for a local exploration of the search space.
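A generic randomized-smoothing gradient estimator illustrates the kind of local exploration the abstract describes (the paper's smoothing method differs in its details; the smoothing radius `gamma` and sample count below are arbitrary):

```python
import numpy as np

def smoothed_grad(f, x, gamma=0.1, n_samples=2000, rng=None):
    """Monte-Carlo gradient of the Gaussian smoothing
    f_gamma(x) = E[f(x + gamma * u)], u ~ N(0, I), via the
    baseline-subtracted score-function estimator."""
    rng = rng or np.random.default_rng(0)
    u = rng.standard_normal((n_samples, x.shape[0]))
    df = np.array([f(x + gamma * ui) - f(x) for ui in u])
    return (df[:, None] * u).mean(axis=0) / gamma

# non-smooth objective: f(x) = ||x||_1, whose subgradient is sign(x)
f = lambda x: np.abs(x).sum()
x = np.array([1.0, -2.0, 0.0])
g = smoothed_grad(f, x)   # roughly [1, -1, 0]
```

The smoothed surrogate is differentiable even where `f` is not, which is what allows first-order methods to be applied to non-smooth objectives.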

Ego-based Entropy Measures for Structural Representations

no code implementations1 Mar 2020 George Dasoulas, Giannis Nikolentzos, Kevin Scaman, Aladin Virmaux, Michalis Vazirgiannis

Moreover, on graph classification tasks, we suggest the utilization of the generated structural embeddings for the transformation of an attributed graph structure into a set of augmented node attributes.

General Classification, Graph Classification

Coloring graph neural networks for node disambiguation

no code implementations 12 Dec 2019 George Dasoulas, Ludovic Dos Santos, Kevin Scaman, Aladin Virmaux

In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs).

Graph Classification
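The gist of such a coloring scheme can be sketched as follows: append a random one-hot "color" to each node's features so that otherwise indistinguishable nodes become (with high probability) distinguishable to message passing. This is only an illustration; the paper's scheme samples and aggregates colorings more carefully:

```python
import numpy as np

def add_colors(node_features, n_colors=4, rng=None):
    """Append a random one-hot color to each node's feature vector.

    Identical nodes then differ in their color channel with high
    probability, letting an MPNN disambiguate them (illustrative
    sketch, not the paper's exact procedure)."""
    rng = rng or np.random.default_rng(0)
    n = node_features.shape[0]
    colors = np.eye(n_colors)[rng.integers(0, n_colors, size=n)]
    return np.concatenate([node_features, colors], axis=1)

X = np.zeros((6, 3))      # six nodes with identical features
Xc = add_colors(X)        # shape (6, 3 + 4)
```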

Theoretical Limits of Pipeline Parallel Optimization and Application to Distributed Deep Learning

no code implementations NeurIPS 2019 Igor Colin, Ludovic Dos Santos, Kevin Scaman

For smooth convex and non-convex objective functions, we provide matching lower and upper complexity bounds and show that a naive pipeline parallelization of Nesterov's accelerated gradient descent is optimal.

Optimal Algorithms for Non-Smooth Distributed Optimization in Networks

no code implementations NeurIPS 2018 Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié

Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.

Optimization and Control

Lipschitz regularity of deep neural networks: analysis and efficient estimation

1 code implementation NeurIPS 2018 Kevin Scaman, Aladin Virmaux

First, we show that, even for two-layer neural networks, the exact computation of this quantity is NP-hard and state-of-the-art methods may significantly overestimate it.
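Since the exact constant is intractable, upper bounds are used in practice. The classical naive bound, which the paper's estimators improve upon, is the product of the layers' spectral norms (valid for 1-Lipschitz activations such as ReLU or tanh):

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Product of layer spectral norms: a simple (often loose) upper
    bound on the Lipschitz constant of an MLP with 1-Lipschitz
    activations.  The paper develops tighter estimators; this is the
    classical naive bound it compares against."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
bound = lipschitz_upper_bound(Ws)
```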

KONG: Kernels for ordered-neighborhood graphs

1 code implementation NeurIPS 2018 Moez Draief, Konstantin Kutzkov, Kevin Scaman, Milan Vojnovic

We present novel graph kernels for graphs with node and edge labels that have ordered neighborhoods, i.e., when neighbor nodes follow an order.

A Spectral Method for Activity Shaping in Continuous-Time Information Cascades

no code implementations 15 Sep 2017 Kevin Scaman, Argyris Kalogeratos, Luca Corinzia, Nicolas Vayatis

The Information Cascades Model captures the dynamical properties of user activity in a social network.

Optimal algorithms for smooth and strongly convex distributed optimization in networks

no code implementations ICML 2017 Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié

For centralized (i.e., master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$, where $\kappa_g$ is the condition number of the (global) function to optimize, $\Delta$ is the diameter of the network, and $\tau$ (resp. $1$) is the time needed to communicate values between two neighbors (resp. perform local computations).

Distributed Optimization
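The building block of the centralized scheme above is Nesterov's accelerated gradient descent for smooth, strongly convex functions; a minimal single-machine sketch (constant-momentum variant, with an arbitrary quadratic test problem) looks like this:

```python
import numpy as np

def nesterov(grad, x0, L, mu, n_iters):
    """Nesterov's accelerated gradient for an L-smooth, mu-strongly
    convex objective (constant-momentum variant).  The paper shows
    that distributing this method over a network is optimal."""
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # momentum
    x, y = x0.copy(), x0.copy()
    for _ in range(n_iters):
        x_next = y - grad(y) / L      # gradient step from the lookahead point
        y = x_next + beta * (x_next - x)
        x = x_next
    return x

# quadratic f(x) = 0.5 x^T A x - b^T x with A = diag(1, 10)
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x = nesterov(lambda z: A @ z - b, np.zeros(2), L=10.0, mu=1.0, n_iters=100)
# x is close to the minimizer A^{-1} b = [1.0, 0.1]
```

The $\sqrt{\kappa_g}$ factor in the paper's rate comes precisely from this acceleration, as opposed to the $\kappa_g$ dependence of plain gradient descent.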

Multivariate Hawkes Processes for Large-scale Inference

no code implementations 26 Feb 2016 Rémi Lemonnier, Kevin Scaman, Argyris Kalogeratos

In this paper, we present a framework for fitting multivariate Hawkes processes for large-scale problems both in the number of events in the observed history $n$ and the number of event types $d$ (i.e., dimensions).
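For context, a multivariate Hawkes process is specified by its conditional intensity; a small sketch with exponential kernels follows (illustrative only — the paper is about fitting such models at scale, and the parameter values below are made up):

```python
import numpy as np

def hawkes_intensity(t, mu, alpha, beta, history):
    """Conditional intensity of a multivariate Hawkes process with
    exponential kernels:
        lambda_i(t) = mu_i + sum_{(s, j): s < t} alpha[i, j] * beta * exp(-beta * (t - s))
    where mu are base rates and alpha[i, j] is the excitation of
    type i by past events of type j."""
    lam = mu.copy()
    for s, j in history:
        if s < t:
            lam += alpha[:, j] * beta * np.exp(-beta * (t - s))
    return lam

mu = np.array([0.1, 0.2])
alpha = np.array([[0.3, 0.1],
                  [0.0, 0.4]])       # alpha[i, j]: how much type j excites type i
history = [(0.5, 0), (1.0, 1)]        # (event time, event type)
lam = hawkes_intensity(1.5, mu, alpha, beta=2.0, history=history)
```

Each past event raises the intensity of the types it excites, with the effect decaying exponentially at rate `beta`; fitting means recovering `mu` and `alpha` from an observed event history.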

Anytime Influence Bounds and the Explosive Behavior of Continuous-Time Diffusion Networks

no code implementations NeurIPS 2015 Kevin Scaman, Rémi Lemonnier, Nicolas Vayatis

Using this concept, we prove tight non-asymptotic bounds for the influence of a set of nodes, and we also provide an in-depth analysis of the critical time after which the contagion becomes super-critical.

Epidemiology
