Search Results for author: Santosh Vempala

Found 16 papers, 6 papers with code

Learning with Plasticity Rules: Generalization and Robustness

no code implementations · 1 Jan 2021 · Rares C Cristian, Max Dabagia, Christos Papadimitriou, Santosh Vempala

Here we hypothesize that (a) Brains employ synaptic plasticity rules that serve as proxies for GD; (b) These rules themselves can be learned by GD on the rule parameters; and (c) This process may be a missing ingredient for the development of ANNs that generalize well and are robust to adversarial perturbations.

Socially Fair k-Means Clustering

2 code implementations · 17 Jun 2020 · Mehrdad Ghadiri, Samira Samadi, Santosh Vempala

We show that the popular k-means clustering algorithm (Lloyd's heuristic), used for a variety of scientific data, can result in outcomes that are unfavorable to subgroups of data (e.g., demographic groups).

Clustering
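
The baseline the abstract refers to can be sketched as follows. This is a hedged illustration, not the paper's fair variant: `lloyd_kmeans` and `subgroup_cost` are assumed helper names, and the plain Lloyd's heuristic shown here minimizes total cost, whereas the socially fair version minimizes the maximum subgroup cost.

```python
import numpy as np

def lloyd_kmeans(X, k, iters=50, seed=0):
    # Plain Lloyd's heuristic: alternate nearest-center assignment
    # and center re-estimation.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each point to its nearest center
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers, labels

def subgroup_cost(X, centers, labels, mask):
    # average squared distance to the assigned center within one subgroup
    return ((X[mask] - centers[labels[mask]]) ** 2).sum(-1).mean()
```

Comparing `subgroup_cost` on two demographic masks is one way to surface the disparity the abstract describes.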

Robustly Clustering a Mixture of Gaussians

no code implementations · 26 Nov 2019 · He Jia, Santosh Vempala

We give an efficient algorithm for robustly clustering a mixture of two arbitrary Gaussians, a central open problem in the theory of computationally efficient robust estimation, assuming only that the means of the component Gaussians are well-separated or their covariances are well-separated.

Clustering Position
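
For intuition only, here is a sketch of the easy, non-robust case with well-separated means: projecting onto the top principal direction and thresholding at the median already separates two such Gaussians. The function name is illustrative; the paper's contribution is the much harder setting where an adversary corrupts a fraction of the samples.

```python
import numpy as np

def split_two_gaussians(X):
    # Center the data, then project onto the top principal direction
    # (top right singular vector = top eigenvector of the covariance).
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[0]
    # threshold at the median to split into two clusters
    return proj > np.median(proj)
```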

Biologically Plausible Neural Networks via Evolutionary Dynamics and Dopaminergic Plasticity

no code implementations · NeurIPS Workshop Neuro_AI 2019 · Sruthi Gorantla, Anand Louis, Christos H. Papadimitriou, Santosh Vempala, Naganand Yadati

Artificial neural networks (ANNs) lack biological plausibility, chiefly because backpropagation requires a variant of plasticity (precise changes of the synaptic weights informed by neural events that occur downstream in the neural circuit) that is profoundly incompatible with the current understanding of the animal brain.

Multi-Criteria Dimensionality Reduction with Applications to Fairness

2 code implementations · NeurIPS 2019 · Uthaipon Tantipongpipat, Samira Samadi, Mohit Singh, Jamie Morgenstern, Santosh Vempala

Our main result is an exact polynomial-time algorithm for the two-criterion dimensionality reduction problem when the two criteria are increasing concave functions.

Dimensionality Reduction Fairness
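
Under one assumed reading of the abstract, the two criteria being traded off can be illustrated as the fraction of variance each of two groups retains under a shared rank-d projection. Vanilla PCA (used below) maximizes total variance and can leave one group with much less; the paper's exact algorithm balances two increasing concave criteria of this kind. Function and variable names are illustrative.

```python
import numpy as np

def retained_variance_per_group(X_a, X_b, d):
    # Fit ordinary PCA on the pooled, centered data.
    X = np.vstack([X_a, X_b])
    mu = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:d].T  # rank-d PCA basis (orthonormal columns)

    def retained(G):
        # fraction of a group's variance captured by the projection
        Gc = G - mu
        return ((Gc @ P) ** 2).sum() / (Gc ** 2).sum()

    return retained(X_a), retained(X_b)
```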

Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds

no code implementations · 7 May 2018 · Santosh Vempala, John Wilmes

We give an agnostic learning guarantee for GD: starting from a randomly initialized network, it converges in mean squared loss to the minimum error (in $2$-norm) of the best approximation of the target function using a polynomial of degree at most $k$.
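
A minimal sketch of the setting, with simplifying assumptions: a one-hidden-layer ReLU network with random first-layer weights, where gradient descent on the mean squared loss trains only the output layer. The paper analyzes GD from random initialization more generally; the width, step size, and function name here are arbitrary illustrative choices.

```python
import numpy as np

def gd_one_hidden_layer(X, y, width=64, lr=0.01, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    # random first-layer weights, held fixed in this simplified sketch
    W = rng.normal(size=(X.shape[1], width)) / np.sqrt(X.shape[1])
    H = np.maximum(X @ W, 0.0)  # hidden ReLU activations
    a = np.zeros(width)         # output weights, trained by GD
    for _ in range(steps):
        residual = H @ a - y
        a -= lr * (H.T @ residual) / len(y)  # gradient of the MSE
    return W, a, np.mean((H @ a - y) ** 2)
```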

On the Complexity of Learning Neural Networks

no code implementations · NeurIPS 2017 · Le Song, Santosh Vempala, John Wilmes, Bo Xie

Moreover, this hard family of functions is realizable with a small (sublinear in dimension) number of activation units in the single hidden layer.

Chi-squared Amplification: Identifying Hidden Hubs

no code implementations · 12 Aug 2016 · Ravi Kannan, Santosh Vempala

We give a polynomial-time algorithm to identify all the hidden hubs with high probability for $k \ge n^{0.5-\delta}$ for some $\delta > 0$, when $\sigma_1^2 > 2\sigma_0^2$.
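
A naive illustration, not the paper's algorithm: hub rows, whose entries have inflated variance $\sigma_1 > \sigma_0$, can be flagged by the chi-squared row statistic (the sum of squared entries) when there are many of them. The paper's chi-squared amplification is what pushes detection down to much smaller $k$. The function name and threshold are assumptions.

```python
import numpy as np

def flag_hub_rows(A, sigma0, z_threshold=6.0):
    n = A.shape[1]
    # Under the null, each row statistic is sigma0^2 times a chi^2_n variable,
    # with mean n*sigma0^2 and standard deviation sigma0^2 * sqrt(2n).
    stat = (A ** 2).sum(axis=1)
    z = (stat - n * sigma0 ** 2) / (sigma0 ** 2 * np.sqrt(2 * n))
    # flag rows whose statistic sits far above the null distribution
    return z > z_threshold
```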

Agnostic Estimation of Mean and Covariance

2 code implementations · 24 Apr 2016 · Kevin A. Lai, Anup B. Rao, Santosh Vempala

We consider the problem of estimating the mean and covariance of a distribution from iid samples in $\mathbb{R}^n$, in the presence of an $\eta$ fraction of malicious noise; this is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type.
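
A baseline sketch, not the paper's estimator: with an $\eta$ fraction of malicious samples, the empirical mean can be dragged arbitrarily far, while the coordinate-wise median stays near the truth; the median's error still grows with the dimension, which is the weakness the paper's agnostic estimators address. The function name is illustrative.

```python
import numpy as np

def empirical_vs_median_mean(samples):
    # empirical mean (fragile under malicious noise) vs.
    # coordinate-wise median (a simple robust baseline)
    return samples.mean(axis=0), np.median(samples, axis=0)
```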

Cortical Computation via Iterative Constructions

no code implementations · 26 Feb 2016 · Christos Papadimitriou, Samantha Petti, Santosh Vempala

We study the rate of convergence, finding that while linear convergence to the correct function can be achieved for any threshold using a fixed set of primitives, for quadratic convergence, the size of the primitives must grow as the threshold approaches 0 or 1.

Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization

no code implementations · 30 Dec 2015 · Vitaly Feldman, Cristobal Guzman, Santosh Vempala

Stochastic convex optimization, where the objective is the expectation of a random convex function, is an important and widely used method with numerous applications in machine learning, statistics, operations research and other areas.

BIG-bench Machine Learning
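
The setting above can be illustrated with the textbook method rather than the paper's statistical-query algorithms: stochastic gradient descent on $\mathbb{E}_z[(w - z)^2]$, whose minimizer is the mean $\mathbb{E}[z]$. The function name and the $1/\sqrt{t}$ step-size schedule are arbitrary illustrative choices.

```python
import numpy as np

def sgd_expected_square_loss(samples, lr0=0.5):
    # Minimize E_z[(w - z)^2] by SGD with decaying step sizes.
    w = 0.0
    for t, z in enumerate(samples, start=1):
        grad = 2.0 * (w - z)           # stochastic gradient of (w - z)^2
        w -= (lr0 / np.sqrt(t)) * grad
    return w
```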

Efficient Representations for Life-Long Learning and Autoencoding

no code implementations · 6 Nov 2014 · Maria-Florina Balcan, Avrim Blum, Santosh Vempala

Specifically, we consider the problem of learning many different target functions over time, that share certain commonalities that are initially unknown to the learning algorithm.

Fourier PCA and Robust Tensor Decomposition

1 code implementation · 25 Jun 2013 · Navin Goyal, Santosh Vempala, Ying Xiao

Fourier PCA is Principal Component Analysis of a matrix obtained from higher order derivatives of the logarithm of the Fourier transform of a distribution. We make this method algorithmic by developing a tensor decomposition method for a pair of tensors sharing the same vectors in rank-$1$ decompositions.

Tensor Decomposition
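
Under an assumed reading of the abstract, the core object can be sketched as the Hessian of the log empirical characteristic function $\log \mathbb{E}[e^{i\langle t, x\rangle}]$, estimated here by central finite differences; Fourier PCA diagonalizes such matrices. The step size, sample size, and function names are illustrative.

```python
import numpy as np

def log_cf(X, t):
    # log of the empirical characteristic function at frequency t
    return np.log(np.mean(np.exp(1j * (X @ t))))

def hessian_log_cf(X, t, h=0.2):
    # central-finite-difference estimate of the Hessian of log_cf at t
    d = len(t)
    I = np.eye(d) * h
    H = np.zeros((d, d), dtype=complex)
    for i in range(d):
        for j in range(d):
            H[i, j] = (log_cf(X, t + I[i] + I[j]) - log_cf(X, t + I[i] - I[j])
                       - log_cf(X, t - I[i] + I[j]) + log_cf(X, t - I[i] - I[j])) / (4 * h * h)
    return H
```

For a standard Gaussian the log characteristic function is $-\|t\|^2/2$, so the estimated Hessian should be close to $-I$.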
