no code implementations • 2 Dec 2022 • Pascal Mattia Esser, Satyaki Mukherjee, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar
The central question in representation learning is what constitutes a good or meaningful representation.
no code implementations • 17 Jun 2022 • Pascal Mattia Esser, Frank Nielsen
A common way to learn and analyze statistical models is to consider operations in the model parameter space.
no code implementations • NeurIPS 2021 • Pascal Mattia Esser, Leena Chennuru Vankadara, Debarghya Ghoshdastidar
While the VC dimension yields only trivial generalisation error bounds in this setting as well, we show that transductive Rademacher complexity can explain the generalisation properties of graph convolutional networks for stochastic block models.
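For context, a standard definition of transductive Rademacher complexity from the learning-theory literature is sketched below; this is the common form for m labelled and u unlabelled points, and the symbols m, u, and H are generic placeholders, not notation taken from this abstract (the paper may use a variant).

```latex
% Transductive Rademacher complexity (standard form; illustrative,
% not necessarily the paper's exact definition).
\mathfrak{R}_{m+u}(\mathcal{H})
  = \left(\frac{1}{m} + \frac{1}{u}\right)
    \mathbb{E}_{\boldsymbol{\sigma}}
    \left[\, \sup_{h \in \mathcal{H}} \sum_{i=1}^{m+u} \sigma_i \, h(x_i) \right],
\qquad
\sigma_i =
\begin{cases}
  +1 & \text{with probability } p,\\
  -1 & \text{with probability } p,\\
  \phantom{+}0 & \text{with probability } 1 - 2p,
\end{cases}
```

where p is commonly set to mu/(m+u)^2; unlike the inductive Rademacher complexity, the Rademacher variables can be zero, reflecting that only part of the (fixed) sample is labelled.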
no code implementations • 7 Dec 2021 • Pascal Mattia Esser, Frank Nielsen
We empirically show that using (natural) gradient descent on the smooth manifold approximation, instead of on the singular space, avoids the attractor behavior and thereby improves convergence speed during learning.
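As a rough illustration of the natural gradient idea mentioned here, consider the following minimal sketch; the function name `natural_gradient_step`, the damping term, and the learning rate are illustrative assumptions, and the paper's smooth approximation of the singular parameter space is not reproduced.

```python
import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=0.1, damping=1e-4):
    """One natural gradient step: theta <- theta - lr * F(theta)^{-1} grad.

    Illustrative sketch only. `fisher` is the Fisher information matrix
    at `theta`, damped with a small multiple of the identity so the
    linear solve stays well-posed near singular regions of the metric.
    """
    f = fisher + damping * np.eye(theta.shape[0])
    return theta - lr * np.linalg.solve(f, grad)

# Toy usage with a nearly singular Fisher matrix (hypothetical values):
theta = np.array([0.5, -0.3])
grad = np.array([0.2, 0.1])
fisher = np.array([[1.0, 0.1], [0.1, 0.01]])
theta = natural_gradient_step(theta, grad, fisher)
```

Damping the Fisher matrix before solving is a standard numerical safeguard precisely where the metric degenerates, which is the regime the singular-space discussion above is concerned with.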
no code implementations • NeurIPS 2020 • Michaël Perrot, Pascal Mattia Esser, Debarghya Ghoshdastidar
The goal of clustering is to group similar objects into meaningful partitions.