Search Results for author: Debarghya Ghoshdastidar

Found 29 papers, 6 papers with code

Causal Forecasting: Generalization Bounds for Autoregressive Models

1 code implementation • 18 Nov 2021 • Leena Chennuru Vankadara, Philipp Michael Faller, Michaela Hardt, Lenon Minorics, Debarghya Ghoshdastidar, Dominik Janzing

Under causal sufficiency, the problem of causal generalization amounts to learning under covariate shifts, albeit with additional structure (restriction to interventional distributions under the VAR model).

Tasks: Learning Theory, Time Series +1
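
For context, the VAR (vector autoregressive) model named in the snippet has the standard form (a minimal sketch in our own notation; the lag order $p$ and Gaussian noise are illustrative assumptions, not taken from the paper):

\[
X_t = \sum_{i=1}^{p} A_i X_{t-i} + \varepsilon_t, \qquad \varepsilon_t \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \Sigma).
\]

Interventions on past coordinates propagate through the coefficient matrices $A_i$, which is the additional structure that restricts the covariate shifts to interventional distributions.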

Foundations of Comparison-Based Hierarchical Clustering

1 code implementation • NeurIPS 2019 • Debarghya Ghoshdastidar, Michaël Perrot, Ulrike Von Luxburg

We address the classical problem of hierarchical clustering, but in a framework where one does not have access to a representation of the objects or their pairwise similarities.

Tasks: Clustering

Practical methods for graph two-sample testing

1 code implementation • NeurIPS 2018 • Debarghya Ghoshdastidar, Ulrike Von Luxburg

Hypothesis testing for graphs has been an important tool in applied research fields for more than two decades, and still remains a challenging problem as one often needs to draw inference from few replicates of large graphs.

Tasks: Learning Theory, Open-Ended Question Answering +2
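
As a purely illustrative sketch, not one of the tests proposed in the paper: given two samples of graphs on a common vertex set, a simple permutation test could compare the mean adjacency matrices. The choice of statistic and all names here are our own.

```python
import numpy as np

def mean_adjacency_gap(sample_a, sample_b):
    """Frobenius distance between the mean adjacency matrices of two samples."""
    return np.linalg.norm(np.mean(sample_a, axis=0) - np.mean(sample_b, axis=0))

def graph_permutation_test(sample_a, sample_b, n_perm=1000, seed=0):
    """Two-sample test for graphs given as (m, n, n) arrays of adjacency matrices."""
    rng = np.random.default_rng(seed)
    observed = mean_adjacency_gap(sample_a, sample_b)
    pooled = np.concatenate([sample_a, sample_b], axis=0)
    m = len(sample_a)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if mean_adjacency_gap(pooled[idx[:m]], pooled[idx[m:]]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)  # permutation p-value
```

Note that such a naive statistic needs several replicates per model; the one-observation regime listed further down requires different tools.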

Analysis of Convolutions, Non-linearity and Depth in Graph Neural Networks using Neural Tangent Kernel

1 code implementation • 18 Oct 2022 • Mahalakshmi Sabanayagam, Pascal Esser, Debarghya Ghoshdastidar

The fundamental principle of Graph Neural Networks (GNNs) is to exploit the structural information of the data by aggregating the neighboring nodes using a `graph convolution' in conjunction with a suitable choice for the network architecture, such as depth and activation functions.

Tasks: Node Classification, Stochastic Block Model
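
To make the quoted 'graph convolution' concrete, here is a minimal, generic GCN-style layer (a common formulation given for context, not code from the paper; the symmetric normalization and ReLU are assumed choices):

```python
import numpy as np

def graph_convolution(adjacency, features, weights):
    """One GCN-style layer H' = ReLU(S H W), where S is the symmetrically
    normalized adjacency matrix with self-loops."""
    n = adjacency.shape[0]
    a_hat = adjacency + np.eye(n)                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # D^{-1/2}
    s = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return np.maximum(s @ features @ weights, 0.0)  # aggregate, transform, ReLU
```

Depth then corresponds to stacking such layers, which is one of the design axes the paper analyzes through the neural tangent kernel.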

A Revenue Function for Comparison-Based Hierarchical Clustering

1 code implementation • 29 Nov 2022 • Aishik Mandal, Michaël Perrot, Debarghya Ghoshdastidar

Comparison-based learning addresses the problem of learning when, instead of explicit features or pairwise similarities, one only has access to comparisons of the form: \emph{Object $A$ is more similar to $B$ than to $C$.}

Tasks: Clustering, Open-Ended Question Answering
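
A minimal sketch of the comparison oracle underlying this setting (our own illustration; a comparison-based learner sees only the boolean answers, never the underlying distances):

```python
import numpy as np

def triplet_oracle(distances, a, b, c):
    """Answers: is object a more similar to b than to c?"""
    return distances[a, b] < distances[a, c]

# Example with a hidden metric: 4 points on a line at positions 0, 1, 2, 5.
pos = np.array([0.0, 1.0, 2.0, 5.0])
dist = np.abs(pos[:, None] - pos[None, :])
print(triplet_oracle(dist, 1, 0, 3))  # True: point 1 is closer to 0 than to 3
```

Both this paper and the comparison-based hierarchical clustering paper above operate on exactly such comparison answers.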

Two-sample Hypothesis Testing for Inhomogeneous Random Graphs

no code implementations • 4 Jul 2017 • Debarghya Ghoshdastidar, Maurilio Gutzeit, Alexandra Carpentier, Ulrike Von Luxburg

Given a population of $m$ graphs from each model, we derive minimax separation rates for the problem of testing $P=Q$ against $d(P, Q)>\rho$.

Tasks: Two-sample testing, Vocal Bursts Valence Prediction
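
For readers unfamiliar with the terminology: in the standard minimax testing formalism (our paraphrase, with $\phi$ denoting a test), the risk against separation $\rho$ is

\[
R(\phi, \rho) = \sup_{P = Q} \mathbb{E}[\phi] + \sup_{d(P, Q) > \rho} \mathbb{E}[1 - \phi],
\]

and the minimax separation rate is the smallest $\rho$ at which some test keeps this risk below a prescribed level.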

Two-Sample Tests for Large Random Graphs Using Network Statistics

no code implementations • 17 May 2017 • Debarghya Ghoshdastidar, Maurilio Gutzeit, Alexandra Carpentier, Ulrike Von Luxburg

We consider a two-sample hypothesis testing problem, where the distributions are defined on the space of undirected graphs, and one has access to only one observation from each model.

Tasks: Two-sample testing, Vocal Bursts Valence Prediction

Uniform Hypergraph Partitioning: Provable Tensor Methods and Sampling Techniques

no code implementations • 21 Feb 2016 • Debarghya Ghoshdastidar, Ambedkar Dukkipati

This work is motivated by two issues that arise when a hypergraph partitioning approach is used to tackle computer vision problems: (i) The uniform hypergraphs constructed for higher-order learning contain all edges, but most have negligible weights.

Tasks: Clustering, hypergraph partitioning +1
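
To see why sampling matters here, consider a toy sketch (ours, not the paper's algorithm): a 3-uniform similarity hypergraph on $n$ points has $\binom{n}{3}$ weighted edges, so one evaluates the weight function only on a random subset of triples.

```python
import numpy as np

def sample_hyperedges(n_points, n_samples, weight_fn, seed=0):
    """Evaluate a user-supplied triple-weight function on random triples
    instead of all C(n, 3) of them (most weights are negligible anyway)."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        i, j, k = rng.choice(n_points, size=3, replace=False)
        samples.append(((i, j, k), weight_fn(i, j, k)))
    return samples

# Example: weight of a triple of 1-D points decays with the spread of the triple.
pts = np.linspace(0.0, 1.0, 100)
w = lambda i, j, k: np.exp(-np.ptp(pts[[i, j, k]]))
edges = sample_hyperedges(len(pts), n_samples=200, weight_fn=w)
```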

Comparison Based Nearest Neighbor Search

no code implementations • 5 Apr 2017 • Siavash Haghiri, Debarghya Ghoshdastidar, Ulrike Von Luxburg

We consider machine learning in a comparison-based setting where we are given a set of points in a metric space, but we have no access to the actual distances between the points.

Spectral Clustering with Jensen-type kernels and their multi-point extensions

no code implementations • CVPR 2014 • Debarghya Ghoshdastidar, Ambedkar Dukkipati, Ajay P. Adsul, Aparna S. Vijayan

Motivated by multi-distribution divergences, which originate in information theory, we propose a notion of `multi-point' kernels, and study their applications.

Tasks: Clustering, Image Segmentation +2
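
The 'multi-point' idea can be illustrated with the Jensen-Shannon divergence, which extends from two distributions to $m$ of them (a standard construction stated for context; we do not reproduce the paper's exact kernel definitions):

\[
\mathrm{JS}(p_1, \dots, p_m) = H\Big(\frac{1}{m}\sum_{i=1}^m p_i\Big) - \frac{1}{m}\sum_{i=1}^m H(p_i),
\]

where $H$ is the Shannon entropy: a two-point kernel compares pairs of objects, while a multi-point kernel scores a whole group at once.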

Generative Maximum Entropy Learning for Multiclass Classification

no code implementations • 3 May 2012 • Ambedkar Dukkipati, Gaurav Pandey, Debarghya Ghoshdastidar, Paramita Koley, D. M. V. Satya Sriram

In this paper, we introduce a generative maximum entropy classification method with feature selection for high-dimensional data such as text datasets.

Tasks: Binary Classification, Classification +2
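
For background, a maximum entropy model subject to feature-expectation constraints takes the classical exponential-family form (a textbook fact we add for context, not a result of this paper):

\[
\max_{p} H(p) \quad \text{s.t.} \quad \mathbb{E}_p[f_i(X)] = \hat{\mu}_i \;\;\forall i \;\;\Longrightarrow\;\; p(x) \propto \exp\Big(\sum_i \lambda_i f_i(x)\Big).
\]

In a generative approach one would fit such a model per class and classify via Bayes' rule; feature selection then amounts to restricting which features $f_i$ enter the model.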

On Power-law Kernels, corresponding Reproducing Kernel Hilbert Space and Applications

no code implementations • 9 Apr 2012 • Debarghya Ghoshdastidar, Ambedkar Dukkipati

Motivated by the importance of power-law distributions in statistical modeling, in this paper we propose the notion of power-law kernels to investigate power laws in learning problems.

Tasks: General Classification, regression
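
As an assumed illustration of what a power-law kernel can look like, here is a member of the inverse-multiquadric family (our example, not necessarily the paper's definition):

\[
k(x, y) = \big(1 + \|x - y\|^2\big)^{-\beta}, \qquad \beta > 0,
\]

which decays polynomially rather than exponentially in $\|x - y\|$ and is positive definite, so it induces a reproducing kernel Hilbert space.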

Consistency of Spectral Partitioning of Uniform Hypergraphs under Planted Partition Model

no code implementations • NeurIPS 2014 • Debarghya Ghoshdastidar, Ambedkar Dukkipati

Spectral graph partitioning methods have received significant attention from both practitioners and theorists in computer science.

Tasks: graph partitioning

On the optimality of kernels for high-dimensional clustering

no code implementations • 1 Dec 2019 • Leena Chennuru Vankadara, Debarghya Ghoshdastidar

This is the first work that provides such optimality guarantees for the kernel k-means as well as its convex relaxation.

Tasks: Clustering, Vocal Bursts Intensity Prediction

New Insights into Graph Convolutional Networks using Neural Tangent Kernels

no code implementations • 8 Oct 2021 • Mahalakshmi Sabanayagam, Pascal Esser, Debarghya Ghoshdastidar

This paper focuses on semi-supervised learning on graphs, and explains empirical observations about graph convolutional networks through the lens of Neural Tangent Kernels (NTKs).

Recovery Guarantees for Kernel-based Clustering under Non-parametric Mixture Models

no code implementations • 18 Oct 2021 • Leena Chennuru Vankadara, Sebastian Bordt, Ulrike Von Luxburg, Debarghya Ghoshdastidar

Despite the ubiquity of kernel-based clustering, surprisingly few statistical guarantees exist beyond settings that consider strong structural assumptions on the data generation process.

Tasks: Clustering

Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks

no code implementations • NeurIPS 2021 • Pascal Mattia Esser, Leena Chennuru Vankadara, Debarghya Ghoshdastidar

While VC Dimension does result in trivial generalisation error bounds in this setting as well, we show that transductive Rademacher complexity can explain the generalisation properties of graph convolutional networks for stochastic block models.

Tasks: Learning Theory, Node Classification

Interpolation and Regularization for Causal Learning

no code implementations • 18 Feb 2022 • Leena Chennuru Vankadara, Luca Rendsburg, Ulrike Von Luxburg, Debarghya Ghoshdastidar

If the confounding strength is negative, causal learning requires weaker regularization than statistical learning, interpolators can be optimal, and the optimal regularization can even be negative.
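
To make 'negative regularization' concrete, recall the ridge estimator in standard notation (used here only to fix the sign convention of the regularizer, not as the paper's exact setup):

\[
\hat{\beta}_\lambda = (X^\top X + \lambda I)^{-1} X^\top y,
\]

where statistical learning normally takes $\lambda \ge 0$; the snippet's claim is that for negative confounding strength the causally optimal $\lambda$ can itself be negative, as long as $X^\top X + \lambda I$ stays invertible.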

A Consistent Estimator for Confounding Strength

no code implementations • 3 Nov 2022 • Luca Rendsburg, Leena Chennuru Vankadara, Debarghya Ghoshdastidar, Ulrike Von Luxburg

Regression on observational data can fail to capture a causal relationship in the presence of unobserved confounding.

Tasks: regression

Wasserstein Projection Pursuit of Non-Gaussian Signals

no code implementations • 24 Feb 2023 • Satyaki Mukherjee, Soumendu Sundar Mukherjee, Debarghya Ghoshdastidar

We consider the general dimensionality reduction problem of locating, in a high-dimensional data cloud, a $k$-dimensional non-Gaussian subspace of interesting features.

Tasks: Dimensionality Reduction
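
A toy version of projection pursuit with a Wasserstein non-Gaussianity index, entirely our sketch: one-dimensional random projections are scored against a Gaussian reference with SciPy's 1-D Wasserstein distance. The paper's actual procedure and guarantees are more involved.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def non_gaussianity(projected, rng):
    """W1 distance between the standardized projection and a Gaussian sample."""
    z = (projected - projected.mean()) / projected.std()
    return wasserstein_distance(z, rng.standard_normal(len(z)))

def projection_pursuit(data, n_candidates=500, seed=1):
    """Random search for the unit direction whose projection looks least Gaussian."""
    rng = np.random.default_rng(seed)
    best_dir, best_score = None, -np.inf
    for _ in range(n_candidates):
        u = rng.standard_normal(data.shape[1])
        u /= np.linalg.norm(u)
        score = non_gaussianity(data @ u, rng)
        if score > best_score:
            best_dir, best_score = u, score
    return best_dir, best_score
```

Locating a full $k$-dimensional subspace, rather than one direction at a time, is what the paper addresses.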

Fast Adaptive Test-Time Defense with Robust Features

no code implementations • 21 Jul 2023 • Anurag Singh, Mahalakshmi Sabanayagam, Krikamol Muandet, Debarghya Ghoshdastidar

Adaptive test-time defenses are used to improve the robustness of deep neural networks to adversarial examples.

Explaining Kernel Clustering via Decision Trees

no code implementations • 15 Feb 2024 • Maximilian Fleissner, Leena Chennuru Vankadara, Debarghya Ghoshdastidar

Despite the growing popularity of explainable and interpretable machine learning, there is still surprisingly limited work on inherently interpretable clustering methods.

Tasks: Clustering, Interpretable Machine Learning
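
One generic way to realize 'explaining clustering with a tree' (our illustration with scikit-learn, not the construction analyzed in the paper, and with plain k-means standing in for kernel clustering): fit the clustering, then fit a shallow decision tree to predict the cluster labels from the raw features.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, labels)
print(export_text(tree))  # axis-aligned rules approximating the clustering
```

The interesting question, of the kind the paper studies, is when such a small tree can match the original clustering with provably low approximation error.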

On the Stability of Gradient Descent for Large Learning Rate

no code implementations • 20 Feb 2024 • Alexandru Crăciun, Debarghya Ghoshdastidar

There is currently significant interest in understanding the Edge of Stability (EoS) phenomenon observed in neural network training: the loss decreases non-monotonically over epochs, while the sharpness of the loss (the spectral norm of the Hessian) progressively approaches and stabilizes around 2/(learning rate).
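
The 2/(learning rate) threshold comes from the classical stability analysis of gradient descent on a quadratic, which we recall for context (standard material, not the paper's contribution): for $L(\theta) = \frac{1}{2}\theta^\top H \theta$ and learning rate $\eta$, the update is

\[
\theta_{t+1} = \theta_t - \eta H \theta_t = (I - \eta H)\,\theta_t,
\]

which contracts iff $|1 - \eta\lambda| < 1$ for every eigenvalue $\lambda$ of $H$, i.e. iff the sharpness satisfies $\lambda_{\max}(H) < 2/\eta$. EoS is the regime where training hovers at this boundary rather than diverging.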

When can we Approximate Wide Contrastive Models with Neural Tangent Kernels and Principal Component Analysis?

no code implementations • 13 Mar 2024 • Gautham Govind Anil, Pascal Esser, Debarghya Ghoshdastidar

We provide the first convergence results of NTK for contrastive losses, and present a nuanced picture: NTK of wide networks remains almost constant for cosine similarity based contrastive losses, but not for losses based on dot product similarity.

Tasks: Contrastive Learning
