Search Results for author: Debarghya Ghoshdastidar

Found 19 papers, 3 papers with code

Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks

no code implementations NeurIPS 2021 Pascal Esser, Leena Chennuru Vankadara, Debarghya Ghoshdastidar

While VC-dimension does result in trivial generalisation error bounds in this setting as well, we show that transductive Rademacher complexity can explain the generalisation properties of graph convolutional networks for stochastic block models.

Learning Theory Node Classification

Causal Forecasting: Generalization Bounds for Autoregressive Models

no code implementations 18 Nov 2021 Leena Chennuru Vankadara, Philipp Michael Faller, Lenon Minorics, Debarghya Ghoshdastidar, Dominik Janzing

Here, we study the problem of *causal generalization* -- generalizing from the observational to interventional distributions -- in forecasting.

Learning Theory Time Series

Recovery Guarantees for Kernel-based Clustering under Non-parametric Mixture Models

no code implementations 18 Oct 2021 Leena Chennuru Vankadara, Sebastian Bordt, Ulrike Von Luxburg, Debarghya Ghoshdastidar

Despite the ubiquity of kernel-based clustering, surprisingly few statistical guarantees exist beyond settings that consider strong structural assumptions on the data generation process.

New Insights into Graph Convolutional Networks using Neural Tangent Kernels

no code implementations 8 Oct 2021 Mahalakshmi Sabanayagam, Pascal Esser, Debarghya Ghoshdastidar

This paper focuses on semi-supervised learning on graphs, and explains the above observations through the lens of Neural Tangent Kernels (NTKs).

Graphon based Clustering and Testing of Networks: Algorithms and Theory

1 code implementation 6 Oct 2021 Mahalakshmi Sabanayagam, Leena Chennuru Vankadara, Debarghya Ghoshdastidar

Using the proposed graph distance, we present two clustering algorithms and show that they achieve state-of-the-art results.

Classification Graph Classification +2

On the optimality of kernels for high-dimensional clustering

no code implementations 1 Dec 2019 Leena Chennuru Vankadara, Debarghya Ghoshdastidar

This is the first work that provides such optimality guarantees for kernel k-means as well as its convex relaxation.
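As an illustrative toy (not the algorithm analysed in the paper, nor its convex relaxation), kernel k-means can be sketched as Lloyd-style alternation operating purely on a kernel matrix, using the standard feature-space distance expansion:

```python
import numpy as np

def kernel_kmeans(K, k, init=None, n_iter=50, seed=0):
    """Lloyd-style kernel k-means that only touches the Gram matrix K."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = np.asarray(init).copy() if init is not None else rng.integers(0, k, n)
    for _ in range(n_iter):
        # Squared distance of point i to the mean of cluster c in feature space:
        # K_ii - (2/|c|) * sum_{j in c} K_ij + (1/|c|^2) * sum_{j,l in c} K_jl
        dist = np.zeros((n, k))
        for c in range(k):
            idx = labels == c
            m = idx.sum()
            if m == 0:
                dist[:, c] = np.inf  # empty cluster: never assign to it
                continue
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].sum(axis=1) / m
                          + K[np.ix_(idx, idx)].sum() / m**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Usage sketch: an RBF kernel on two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
labels = kernel_kmeans(K, k=2)
```

The `init` parameter and function name are inventions for this sketch; the point is only that the objective needs no explicit feature map, which is what makes kernel-level guarantees meaningful.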

Practical methods for graph two-sample testing

1 code implementation NeurIPS 2018 Debarghya Ghoshdastidar, Ulrike Von Luxburg

Hypothesis testing for graphs has been an important tool in applied research fields for more than two decades, and still remains a challenging problem as one often needs to draw inference from few replicates of large graphs.

Learning Theory Two-sample testing

Foundations of Comparison-Based Hierarchical Clustering

1 code implementation NeurIPS 2019 Debarghya Ghoshdastidar, Michaël Perrot, Ulrike Von Luxburg

We address the classical problem of hierarchical clustering, but in a framework where one does not have access to a representation of the objects or their pairwise similarities.

Two-sample Hypothesis Testing for Inhomogeneous Random Graphs

no code implementations 4 Jul 2017 Debarghya Ghoshdastidar, Maurilio Gutzeit, Alexandra Carpentier, Ulrike Von Luxburg

Given a population of $m$ graphs from each model, we derive minimax separation rates for the problem of testing $P=Q$ against $d(P, Q)>\rho$.

Two-sample testing
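The setting above, $m$ graphs per model on a common vertex set, admits a simple plug-in sketch (an assumption-laden illustration, not the minimax-optimal test from the paper): compare empirical edge-probability matrices in Frobenius norm and calibrate by permutation:

```python
import numpy as np

def frobenius_stat(sample_a, sample_b):
    """Frobenius distance between empirical edge-probability matrices."""
    return np.linalg.norm(sample_a.mean(axis=0) - sample_b.mean(axis=0))

def permutation_test(sample_a, sample_b, n_perm=200, seed=0):
    """Permutation p-value for H0: both samples come from the same model.

    Each sample is an (m, n, n) stack of adjacency matrices on n shared nodes.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([sample_a, sample_b])
    m = len(sample_a)
    observed = frobenius_stat(sample_a, sample_b)
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        null.append(frobenius_stat(pooled[idx[:m]], pooled[idx[m:]]))
    return observed, float((np.array(null) >= observed).mean())
```

Function names are hypothetical; the Frobenius norm stands in for the distance $d(P, Q)$, whose choice is exactly what drives the separation rates studied in the paper.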

Two-Sample Tests for Large Random Graphs Using Network Statistics

no code implementations 17 May 2017 Debarghya Ghoshdastidar, Maurilio Gutzeit, Alexandra Carpentier, Ulrike Von Luxburg

We consider a two-sample hypothesis testing problem, where the distributions are defined on the space of undirected graphs, and one has access to only one observation from each model.

Two-sample testing

Comparison Based Nearest Neighbor Search

no code implementations 5 Apr 2017 Siavash Haghiri, Debarghya Ghoshdastidar, Ulrike Von Luxburg

We consider machine learning in a comparison-based setting where we are given a set of points in a metric space, but we have no access to the actual distances between the points.
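The comparison-based setting can be made concrete with a toy sketch (names and the linear-scan strategy are illustrative assumptions, not the tree-based method of the paper): the learner sees no distances, only answers to triplet queries "is the query closer to point i or point j":

```python
import numpy as np

def comparison_oracle(X):
    """Simulated triplet oracle. In the comparison-based setting this answer
    would come from a human or an external system; the distances stay hidden."""
    def closer(q, i, j):
        # True iff q is at least as close to X[i] as to X[j].
        return np.linalg.norm(q - X[i]) <= np.linalg.norm(q - X[j])
    return closer

def nn_by_comparisons(closer, q, n):
    """Find the nearest neighbour of q among n points using only
    n - 1 triplet comparisons (a naive linear scan)."""
    best = 0
    for i in range(1, n):
        if closer(q, i, best):
            best = i
    return best
```

The linear scan costs n - 1 comparisons per query; the appeal of tree-structured approaches like the one in the paper is getting this down to roughly logarithmic query complexity.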

Uniform Hypergraph Partitioning: Provable Tensor Methods and Sampling Techniques

no code implementations 21 Feb 2016 Debarghya Ghoshdastidar, Ambedkar Dukkipati

This work is motivated by two issues that arise when a hypergraph partitioning approach is used to tackle computer vision problems: (i) The uniform hypergraphs constructed for higher-order learning contain all edges, but most have negligible weights.

hypergraph partitioning Stochastic Block Model

Consistency of Spectral Partitioning of Uniform Hypergraphs under Planted Partition Model

no code implementations NeurIPS 2014 Debarghya Ghoshdastidar, Ambedkar Dukkipati

Spectral graph partitioning methods have received significant attention from both practitioners and theorists in computer science.

graph partitioning
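For the ordinary-graph case that this hypergraph work generalises, spectral partitioning reduces to a few lines (a standard textbook sketch, not the hypergraph tensor method of the paper): split by the sign of the Fiedler vector of the graph Laplacian:

```python
import numpy as np

def spectral_bipartition(A):
    """Bi-partition a connected graph by the sign pattern of the Fiedler
    vector, i.e. the eigenvector of the second-smallest eigenvalue of the
    unnormalised Laplacian L = D - A."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    _, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler >= 0).astype(int)
```

Under a planted partition model the hope, made rigorous by consistency results such as this paper's, is that the sign pattern recovers the planted communities; the eigenvector's global sign is arbitrary, so only the induced split is meaningful.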

Spectral Clustering with Jensen-type kernels and their multi-point extensions

no code implementations CVPR 2014 Debarghya Ghoshdastidar, Ambedkar Dukkipati, Ajay P. Adsul, Aparna S. Vijayan

Motivated by multi-distribution divergences, which originate in information theory, we propose a notion of 'multi-point' kernels, and study their applications.

Semantic Segmentation
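As a two-point toy in this spirit (an illustration only; the paper's contribution is the multi-point generalisation, which this sketch does not implement), one can build a kernel on histograms from the Jensen-Shannon divergence:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (in nats) between two discrete
    distributions; symmetric and bounded by log 2."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def jensen_kernel(P):
    """Gram matrix K[i, j] = exp(-JSD(P[i], P[j])) over rows of P,
    each row being a histogram."""
    n = len(P)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.exp(-js_divergence(P[i], P[j]))
    return K
```

Function names are made up for this sketch; plugged into a spectral clustering pipeline, such a divergence-based Gram matrix is the kind of object the paper studies.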

Generative Maximum Entropy Learning for Multiclass Classification

no code implementations 3 May 2012 Ambedkar Dukkipati, Gaurav Pandey, Debarghya Ghoshdastidar, Paramita Koley, D. M. V. Satya Sriram

In this paper, we introduce a maximum entropy classification method with feature selection for large dimensional data such as text datasets that is generative in nature.

Classification Feature Selection +1

On Power-law Kernels, corresponding Reproducing Kernel Hilbert Space and Applications

no code implementations 9 Apr 2012 Debarghya Ghoshdastidar, Ambedkar Dukkipati

Motivated by the importance of power-law distributions in statistical modeling, we propose the notion of power-law kernels to investigate power laws in learning problems.

General Classification
