Search Results for author: Gautam Dasarathy

Found 37 papers, 13 papers with code

Advanced Tutorial: Label-Efficient Two-Sample Tests

no code implementations 7 Jan 2025 Weizhi Li, Visar Berisha, Gautam Dasarathy

This tutorial extends active learning concepts to two-sample testing within this label-costly setting while maintaining statistical validity and high testing power.

Active Learning Two-sample testing

Structure Learning in Gaussian Graphical Models from Glauber Dynamics

no code implementations 24 Dec 2024 Vignesh Tirukkonda, Anirudh Rayas, Gautam Dasarathy

Glauber dynamics, also called the Gibbs sampler, is a Markov chain that sequentially updates the variables of the underlying model based on the statistics of the remaining model.

Model Selection
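For context on the sampler this paper observes, here is a minimal Glauber-dynamics (Gibbs) update for a zero-mean Gaussian graphical model with precision matrix theta. This is a generic textbook sketch, not the structure-learning algorithm of the paper; the chain-graph example at the bottom is illustrative only.

```python
import numpy as np

def glauber_step(x, theta, rng):
    """One Glauber-dynamics (Gibbs) update for a zero-mean Gaussian with
    precision matrix theta: resample a uniformly chosen coordinate from its
    conditional distribution given all the other coordinates."""
    i = rng.integers(len(x))
    # conditional of x_i | x_{-i}: mean -(1/theta_ii) * sum_{j != i} theta_ij x_j,
    # variance 1/theta_ii
    s = theta[i] @ x - theta[i, i] * x[i]
    x[i] = -s / theta[i, i] + rng.standard_normal() / np.sqrt(theta[i, i])
    return x

# Chain graph on 3 variables; a long run of the chain reproduces the
# model covariance inv(theta).
theta = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
rng = np.random.default_rng(0)
x = np.zeros(3)
samples = []
for t in range(110_000):
    glauber_step(x, theta, rng)
    if t >= 10_000:              # discard burn-in
        samples.append(x.copy())
emp_cov = np.cov(np.array(samples).T)   # approx. np.linalg.inv(theta)
```

The structure-learning question in the paper runs in the other direction: observe such a trajectory and recover the sparsity pattern of theta.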

Learning Networks from Wide-Sense Stationary Stochastic Processes

no code implementations 4 Dec 2024 Anirudh Rayas, Jiajun Cheng, Rajasekhar Anguluri, Deepjyoti Deka, Gautam Dasarathy

Under a novel mutual incoherence condition and certain sufficient conditions on $(n, p, d)$, we show that the ML estimate recovers the sparsity pattern of $L^\ast$ with high probability, where $d$ is the maximum degree of the graph underlying $L^{\ast}$.

Communication-Efficient Federated Learning over Wireless Channels via Gradient Sketching

no code implementations 30 Oct 2024 Vineet Sunil Gattani, Junshan Zhang, Gautam Dasarathy

Large-scale federated learning (FL) over wireless multiple access channels (MACs) has emerged as a crucial learning paradigm with a wide range of applications.

Federated Learning

Unraveling overoptimism and publication bias in ML-driven science

no code implementations 23 May 2024 Pouria Saidi, Gautam Dasarathy, Visar Berisha

Validity concerns are underscored by findings of an inverse relationship between sample size and reported accuracy in published ML models, contrasting with the theory of learning curves, where accuracy should improve or remain stable as sample size increases.

Active Sequential Two-Sample Testing

no code implementations 30 Jan 2023 Weizhi Li, Prad Kadambi, Pouria Saidi, Karthikeyan Natesan Ramamurthy, Gautam Dasarathy, Visar Berisha

The classification model is adaptively updated and used to predict where the (unlabelled) features depend strongly on the labels; labeling these "high-dependency" features increases the power of the proposed testing framework.

Two-sample testing valid +1

Differential Analysis for Networks Obeying Conservation Laws

no code implementations 30 Jan 2023 Anirudh Rayas, Rajasekhar Anguluri, Jiajun Cheng, Gautam Dasarathy

Given the dynamic nature of the systems under consideration, an equally important task is estimating the change in the structure of the network from data -- the so-called differential network analysis problem.

Robust Model Selection of Gaussian Graphical Models

no code implementations 10 Nov 2022 Abrar Zahin, Rajasekhar Anguluri, Lalitha Sankar, Oliver Kosut, Gautam Dasarathy

We first characterize the equivalence class up to which general graphs can be recovered in the presence of noise.

Model Selection

Transmission Line Parameter Estimation Under Non-Gaussian Measurement Noise

no code implementations 28 Aug 2022 Antos Cheeramban Varghese, Anamitra Pal, Gautam Dasarathy

The use of phasor measurement unit (PMU) data for transmission line parameter estimation (TLPE) is well-documented.

parameter estimation

Controllability of Coarsely Measured Networked Linear Dynamical Systems (Extended Version)

no code implementations 21 Jun 2022 Nafiseh Ghoroghchian, Rajasekhar Anguluri, Gautam Dasarathy, Stark C. Draper

We consider the controllability of large-scale linear networked dynamical systems when complete knowledge of network structure is unavailable and knowledge is limited to coarse summaries.

Community Detection Stochastic Block Model

Learning the Structure of Large Networked Systems Obeying Conservation Laws

1 code implementation 14 Jun 2022 Anirudh Rayas, Rajasekhar Anguluri, Gautam Dasarathy

Many networked systems such as electric networks, the brain, and social networks of opinion dynamics are known to obey conservation laws.

A Machine Learning Framework for Event Identification via Modal Analysis of PMU Data

no code implementations 14 Feb 2022 Nima T. Bazargani, Gautam Dasarathy, Lalitha Sankar, Oliver Kosut

Using the obtained subset of features, we investigate the performance of two well-known classification models, namely, logistic regression (LR) and support vector machines (SVM) to identify generation loss and line trip events in two datasets.

feature selection

A label-efficient two-sample test

1 code implementation 17 Nov 2021 Weizhi Li, Gautam Dasarathy, Karthikeyan Natesan Ramamurthy, Visar Berisha

Two-sample tests evaluate whether two samples are realizations of the same distribution (the null hypothesis) or two different distributions (the alternative hypothesis).

Two-sample testing
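For context, a minimal fully labeled two-sample test via permutations: under the null hypothesis, the group labels are exchangeable, so the observed statistic is compared against its distribution over relabelings. This is a standard baseline sketch, not the authors' label-efficient procedure, which aims for similar power with far fewer label queries.

```python
import numpy as np

def permutation_two_sample_test(x, y, n_perms=2000, seed=0):
    """Permutation two-sample test on the absolute difference of means.
    Returns a p-value: small values suggest x and y come from
    different distributions."""
    rng = np.random.default_rng(seed)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)                      # relabel under the null
        stat = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        count += stat >= observed
    return (count + 1) / (n_perms + 1)           # add-one correction
```

Any other test statistic (e.g. a classifier-based one, as in the papers above) can be plugged in place of the difference of means.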

Quantifying the Controllability of Coarsely Characterized Networked Dynamical Systems

no code implementations 29 Sep 2021 Nafiseh Ghoroghchian, Rajasekhar Anguluri, Gautam Dasarathy, Stark Draper

In contrast, in this paper the controllability aspects of the coarse system are derived from coarse summaries without knowledge of the fine-scale structure.

Community Detection Stochastic Block Model

Maximizing and Satisficing in Multi-armed Bandits with Graph Information

1 code implementation 2 Aug 2021 Parth K. Thaker, Mohit Malu, Nikhil Rao, Gautam Dasarathy

In this paper, we consider the pure exploration problem in stochastic multi-armed bandits where the similarities between the arms are captured by a graph and the rewards may be represented as a smooth signal on this graph.

Decision Making Multi-Armed Bandits

State and Topology Estimation for Unobservable Distribution Systems using Deep Neural Networks

1 code implementation 15 Apr 2021 Behrouz Azimian, Reetam Sen Biswas, Shiva Moshtagh, Anamitra Pal, Lang Tong, Gautam Dasarathy

Time-synchronized state estimation for reconfigurable distribution networks is challenging because of limited real-time observability.

Graph Community Detection from Coarse Measurements: Recovery Conditions for the Coarsened Weighted Stochastic Block Model

1 code implementation 25 Feb 2021 Nafiseh Ghoroghchian, Gautam Dasarathy, Stark C. Draper

Our objective is to develop conditions on the graph structure, the quantity, and properties of measurements, under which we can recover the community organization in this coarse graph.

Community Detection Stochastic Block Model

Finding the Homology of Decision Boundaries with Active Learning

1 code implementation NeurIPS 2020 Weizhi Li, Gautam Dasarathy, Karthikeyan Natesan Ramamurthy, Visar Berisha

We theoretically analyze the proposed framework and show that the query complexity of our active learning algorithm depends naturally on the intrinsic complexity of the underlying manifold.

Active Learning Meta-Learning +2

On the alpha-loss Landscape in the Logistic Model

no code implementations 22 Jun 2020 Tyler Sypherd, Mario Diaz, Lalitha Sankar, Gautam Dasarathy

We analyze the optimization landscape of a recently introduced tunable class of loss functions called $\alpha$-loss, $\alpha \in (0,\infty]$, in the logistic model.

On the Sample Complexity and Optimization Landscape for Quadratic Feasibility Problems

no code implementations 4 Feb 2020 Parth Thaker, Gautam Dasarathy, Angelia Nedić

We consider the problem of recovering a complex vector $\mathbf{x}\in \mathbb{C}^n$ from $m$ quadratic measurements $\{\langle A_i\mathbf{x}, \mathbf{x}\rangle\}_{i=1}^m$.

Retrieval
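To make the measurement model concrete, here is a toy real-valued instance: m random symmetric matrices $A_i$, measurements $y_i = \langle A_i\mathbf{x}, \mathbf{x}\rangle = \mathbf{x}^\top A_i \mathbf{x}$, and a plain gradient-descent solve from an initialization near the truth (which is identifiable only up to sign). This is an illustrative sketch under simplifying assumptions; the paper analyzes the complex-valued problem and when recovery is possible, not this particular solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 50
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                       # ground-truth signal, unit norm
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2           # symmetric measurement matrices
y = np.einsum('i,mij,j->m', x, A, x)         # y_i = x^T A_i x

# minimize f(z) = (1/4m) * sum_i (z^T A_i z - y_i)^2 by gradient descent;
# grad f(z) = (1/m) * sum_i (z^T A_i z - y_i) A_i z  for symmetric A_i
z = x + 0.05 * rng.standard_normal(n)        # initialize near the truth
for _ in range(5000):
    r = np.einsum('i,mij,j->m', z, A, z) - y
    grad = np.einsum('m,mij,j->i', r, A, z) / m
    z -= 0.05 * grad

err = min(np.linalg.norm(z - x), np.linalg.norm(z + x))  # sign ambiguity
```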

Regularization via Structural Label Smoothing

no code implementations 7 Jan 2020 Weizhi Li, Gautam Dasarathy, Visar Berisha

Regularization is an effective way to promote the generalization performance of machine learning models.

A Tunable Loss Function for Robust Classification: Calibration, Landscape, and Generalization

1 code implementation 5 Jun 2019 Tyler Sypherd, Mario Diaz, John Kevin Cava, Gautam Dasarathy, Peter Kairouz, Lalitha Sankar

We introduce a tunable loss function called $\alpha$-loss, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$), for the machine learning setting of classification.

Classification General Classification +1
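The stated interpolation can be checked numerically. Below is a direct transcription of the $\alpha$-loss of the probability $p$ assigned to the true class; the margin forms quoted in the abstract follow by substituting the logistic probability $p = \sigma(y f(x))$.

```python
import numpy as np

def alpha_loss(p, alpha):
    """alpha-loss of the probability p assigned to the true class:
    (alpha/(alpha-1)) * (1 - p^((alpha-1)/alpha)) for alpha not in {1, inf},
    with the log-loss at alpha=1 and 1-p at alpha=inf."""
    if alpha == 1:
        return -np.log(p)
    if np.isinf(alpha):
        return 1.0 - p
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

# alpha = 1/2 gives 1/p - 1, which equals the exponential loss exp(-y f(x))
# when p = sigmoid(y f(x)) in the logistic model
```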

Thresholding Graph Bandits with GrAPL

1 code implementation 22 May 2019 Daniel LeJeune, Gautam Dasarathy, Richard G. Baraniuk

The main goal is to efficiently identify a subset of arms in a multi-armed bandit problem whose means are above a specified threshold.

Decision Making
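A naive baseline makes the problem statement concrete: pull every arm the same number of times and threshold the empirical means. GrAPL's contribution, per the abstract, is to exploit the similarity graph to identify the same set with far fewer pulls; the snippet below is only the graph-free strawman, with illustrative Gaussian rewards.

```python
import numpy as np

def threshold_bandit_uniform(means, tau, pulls_per_arm=2000, seed=0):
    """Uniform-sampling baseline for the thresholding bandit problem:
    pull each arm pulls_per_arm times (unit-variance Gaussian rewards)
    and return the indices whose empirical mean exceeds tau."""
    rng = np.random.default_rng(seed)
    rewards = means[:, None] + rng.standard_normal((len(means), pulls_per_arm))
    return np.flatnonzero(rewards.mean(axis=1) > tau)
```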

IdeoTrace: A Framework for Ideology Tracing with a Case Study on the 2016 U.S. Presidential Election

no code implementations 21 May 2019 Indu Manickam, Andrew S. Lan, Gautam Dasarathy, Richard G. Baraniuk

We apply this framework to the last two months of the election period for a group of 47508 Twitter users and demonstrate that both liberal and conservative users became more polarized over time.

Ultra Large-Scale Feature Selection using Count-Sketches

1 code implementation ICML 2018 Amirali Aghazadeh, Ryan Spring, Daniel LeJeune, Gautam Dasarathy, Anshumali Shrivastava, Richard G. Baraniuk

We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.

BIG-bench Machine Learning feature selection
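The data structure behind this approach is the count-sketch, which maintains approximate counts for a huge key space in sublinear memory so the largest (heavy-hitter) feature weights can be tracked. Below is a minimal illustrative version; the multiplicative hash functions and table sizes here are chosen for brevity and are not the ones used by MISSION.

```python
import numpy as np

class CountSketch:
    """Minimal count-sketch: each key is hashed to one bucket per row with a
    random sign; queries take the median of the per-row signed estimates."""

    def __init__(self, n_rows=5, n_cols=2048, seed=0):
        rng = np.random.default_rng(seed)
        self.table = np.zeros((n_rows, n_cols))
        # per-row multiplicative hash seeds (illustrative, not production hashing)
        self.bucket_a = rng.integers(1, 2**31 - 1, n_rows)
        self.sign_a = rng.integers(1, 2**31 - 1, n_rows)
        self.p = 2**31 - 1
        self.n_cols = n_cols

    def _bucket(self, r, key):
        return (self.bucket_a[r] * key) % self.p % self.n_cols

    def _sign(self, r, key):
        return 1 if (self.sign_a[r] * key) % self.p % 2 else -1

    def update(self, key, value):
        for r in range(self.table.shape[0]):
            self.table[r, self._bucket(r, key)] += self._sign(r, key) * value

    def query(self, key):
        ests = [self._sign(r, key) * self.table[r, self._bucket(r, key)]
                for r in range(self.table.shape[0])]
        return float(np.median(ests))
```

Heavy keys survive the compression: colliding light keys contribute zero-mean noise in each row, and the median across rows suppresses it.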

MISSION: Ultra Large-Scale Feature Selection using Count-Sketches

1 code implementation 12 Jun 2018 Amirali Aghazadeh, Ryan Spring, Daniel LeJeune, Gautam Dasarathy, Anshumali Shrivastava, Richard G. Baraniuk

We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.

BIG-bench Machine Learning feature selection

Coalescent-based species tree estimation: a stochastic Farris transform

no code implementations 13 Jul 2017 Gautam Dasarathy, Elchanan Mossel, Robert Nowak, Sebastien Roch

As a corollary, we also obtain a new identifiability result of independent interest: for any species tree with $n \geq 3$ species, the rooted species tree can be identified from the distribution of its unrooted weighted gene trees even in the absence of a molecular clock.

DeepCodec: Adaptive Sensing and Recovery via Deep Convolutional Neural Networks

no code implementations 11 Jul 2017 Ali Mousavi, Gautam Dasarathy, Richard G. Baraniuk

In this paper we develop a novel computational sensing framework for sensing and recovering structured signals.

Compressive Sensing

Multi-fidelity Bayesian Optimisation with Continuous Approximations

no code implementations ICML 2017 Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, Barnabas Poczos

Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design.

Bayesian Optimisation

The Multi-fidelity Multi-armed Bandit

no code implementations NeurIPS 2016 Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, Barnabás Póczos

We study a variant of the classical stochastic $K$-armed bandit where observing the outcome of each arm is expensive, but cheap approximations to this outcome are available.

Active Learning Algorithms for Graphical Model Selection

no code implementations 1 Feb 2016 Gautam Dasarathy, Aarti Singh, Maria-Florina Balcan, Jong Hyuk Park

The problem of learning the structure of a high dimensional graphical model from data has received considerable attention in recent years.

Active Learning model +1

Data Requirement for Phylogenetic Inference from Multiple Loci: A New Distance Method

no code implementations 28 Apr 2014 Gautam Dasarathy, Robert Nowak, Sebastien Roch

We consider the problem of estimating the evolutionary history of a set of species (phylogeny or species tree) from several genes.
