Search Results for author: Aditya Bhaskara

Found 23 papers, 3 papers with code

On Mergable Coresets for Polytope Distance

no code implementations8 Nov 2023 Benwei Shi, Aditya Bhaskara, Wai Ming Tai, Jeff M. Phillips

We show that a constant-size constant-error coreset for polytope distance is simple to maintain under merges of coresets.

Online Learning and Bandits with Queried Hints

no code implementations4 Nov 2022 Aditya Bhaskara, Sreenivas Gollapudi, Sungjin Im, Kostas Kollias, Kamesh Munagala

For stochastic MAB, we also consider a stronger model where a probe reveals the reward values of the probed arms, and show that in this case, $k=3$ probes suffice to achieve parameter-independent constant regret, $O(n^2)$.
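As a toy illustration of this probe model (the arm count, probe policy, and reward distributions below are invented for illustration and are not the paper's algorithm): each round the learner probes k arms, observes their realized rewards, and plays the best probed arm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 10, 3, 2000                  # arms, probes per round, rounds
means = rng.uniform(0.2, 0.8, size=n)  # hidden Bernoulli reward means

total = 0.0
for _ in range(T):
    probed = rng.choice(n, size=k, replace=False)  # naive: probe uniformly
    rewards = rng.binomial(1, means[probed])       # probe reveals realized rewards
    total += rewards.max()                         # play the best probed arm
```

Even this naive uniform-probing policy already earns far more than the average arm, since each round it takes the max of k realized rewards; the paper's point is that a smarter choice of which k arms to probe achieves constant regret.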

Logarithmic Regret from Sublinear Hints

no code implementations NeurIPS 2021 Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit

We consider the online linear optimization problem, where at every step the algorithm plays a point $x_t$ in the unit ball, and suffers loss $\langle c_t, x_t\rangle$ for some cost vector $c_t$ that is then revealed to the algorithm.
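A minimal sketch of this setting, using plain projected online gradient descent as the player (a standard baseline, not the hint-based algorithm of the paper; the step-size schedule and the Gaussian cost stream are illustrative assumptions):

```python
import numpy as np

def project_ball(x):
    # project onto the unit Euclidean ball
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

rng = np.random.default_rng(1)
d, T = 5, 1000
costs = rng.normal(size=(T, d))        # c_t is revealed only after x_t is played

x = np.zeros(d)
loss = 0.0
for t in range(T):
    loss += costs[t] @ x               # suffer <c_t, x_t>
    eta = 1.0 / np.sqrt(t + 1)         # standard 1/sqrt(t) step size
    x = project_ball(x - eta * costs[t])

# compare against the best fixed point in the unit ball in hindsight
c_sum = costs.sum(axis=0)
best_loss = -np.linalg.norm(c_sum)     # achieved by -c_sum / ||c_sum||
regret = loss - best_loss
```

Against an adversarial cost sequence this baseline only guarantees O(sqrt(T)) regret; the paper shows how sublinearly many hints can improve this to logarithmic regret.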

Online MAP Inference of Determinantal Point Processes

no code implementations NeurIPS 2020 Aditya Bhaskara, Amin Karbasi, Silvio Lattanzi, Morteza Zadimoghaddam

In this paper, we provide an efficient approximation algorithm for finding the most likely configuration (MAP) of size $k$ for Determinantal Point Processes (DPP) in the online setting, where the data points arrive in an arbitrary order and the algorithm cannot discard the selected elements from its local memory.

Point Processes
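For intuition, the standard offline greedy baseline for size-k DPP MAP (repeatedly add the item that most increases the log-determinant of the kernel submatrix) can be sketched as follows; this is the batch heuristic, not the paper's online algorithm, and the kernel below is synthetic.

```python
import numpy as np

def greedy_dpp_map(L, k):
    # batch greedy: at each step add the item j maximizing log det(L_{S+j})
    n = L.shape[0]
    S = []
    for _ in range(k):
        best_j, best_val = -1, -np.inf
        for j in range(n):
            if j in S:
                continue
            sign, logdet = np.linalg.slogdet(L[np.ix_(S + [j], S + [j])])
            # sign <= 0 would mean a numerically non-PD submatrix; skip it
            if sign > 0 and logdet > best_val:
                best_j, best_val = j, logdet
        S.append(best_j)
    return S

rng = np.random.default_rng(2)
F = rng.normal(size=(8, 3))
L = F @ F.T + 0.1 * np.eye(8)   # a positive-definite similarity kernel
S = greedy_dpp_map(L, 3)
```

The first greedy pick is simply the item with the largest diagonal entry (largest "quality"); later picks trade quality against diversity through the determinant.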

Adaptive Probing Policies for Shortest Path Routing

no code implementations NeurIPS 2020 Aditya Bhaskara, Sreenivas Gollapudi, Kostas Kollias, Kamesh Munagala

Inspired by traffic routing applications, we consider the problem of finding the shortest path from a source $s$ to a destination $t$ in a graph, when the lengths of the edges are unknown.

Online Linear Optimization with Many Hints

no code implementations NeurIPS 2020 Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit

We study an online linear optimization (OLO) problem in which the learner is provided access to $K$ "hint" vectors in each round prior to making a decision.

Fair clustering via equitable group representations

no code implementations19 Jun 2020 Mohsen Abbasi, Aditya Bhaskara, Suresh Venkatasubramanian

A core principle in most clustering problems is that a cluster center should be representative of the cluster it represents, by being "close" to the points associated with it.

Clustering, Fairness

Online Learning with Imperfect Hints

no code implementations ICML 2020 Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit

We consider a variant of the classical online linear optimization problem in which at every step, the online player receives a "hint" vector before choosing the action for that round.
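A toy experiment showing why hints help (the hint model, noise level, and dimensions are illustrative assumptions, not the paper's algorithm): if the hint is positively correlated with the upcoming cost vector, simply playing the negated, normalized hint already earns negative loss on average every round.

```python
import numpy as np

rng = np.random.default_rng(3)
d, T = 5, 1000
loss = 0.0
for _ in range(T):
    c = rng.normal(size=d)              # the upcoming cost vector
    h = c + 0.5 * rng.normal(size=d)    # hint: noisy but correlated with c
    x = -h / np.linalg.norm(h)          # naively play the negated, normalized hint
    loss += c @ x                       # suffer <c_t, x_t>
```

The paper's question is what happens when hints can be imperfect or adversarial, where this naive hint-following strategy breaks down.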

On Distributed Averaging for Stochastic k-PCA

1 code implementation NeurIPS 2019 Aditya Bhaskara, Pruthuvi Maheshakya Wijewardena

The server performs an aggregation and computes the desired eigenvalues and vectors.
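A simplified sketch of this distribute-and-average pattern (the dimensions, node count, and the rank-k summary format below are assumptions for illustration, not the paper's estimator): each node ships a rank-k sketch of its local sample covariance, and the server averages the sketches before extracting eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(4)
d, k, s, m = 8, 2, 4, 500              # dimension, components, nodes, samples/node

# a ground-truth covariance with two dominant directions
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
Sigma = Q @ np.diag([10.0, 8.0] + [0.5] * (d - 2)) @ Q.T

summaries = []
for _ in range(s):
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=m)  # node's local data
    w, V = np.linalg.eigh(X.T @ X / m)                       # local sample covariance
    top = np.argsort(w)[-k:]
    summaries.append((V[:, top] * w[top]) @ V[:, top].T)     # rank-k sketch only

avg = sum(summaries) / s               # server aggregates the node summaries
w, V = np.linalg.eigh(avg)
est = V[:, np.argsort(w)[-k:]]         # estimated top-k eigenvectors
```

Each node communicates O(dk) numbers instead of its O(md) raw samples; with a large eigenvalue gap the averaged sketch recovers the top subspace well.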

Greedy Sampling for Approximate Clustering in the Presence of Outliers

1 code implementation NeurIPS 2019 Aditya Bhaskara, Sharvaree Vadgama, Hong Xu

On the one hand, they possess good theoretical approximation guarantees, and on the other, they are fast and easy to implement.

Clustering
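The classic greedy sampler in this family is Gonzalez's farthest-first traversal for k-center; a minimal sketch on synthetic blobs (without the outlier-handling modifications the paper studies):

```python
import numpy as np

def farthest_first(X, k, rng):
    # Gonzalez's greedy k-center: start from a random point, then repeatedly
    # add the point farthest from the centers chosen so far
    centers = [int(rng.integers(len(X)))]
    d = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))
        centers.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return centers, d   # d = each point's distance to its nearest center

rng = np.random.default_rng(7)
blobs = [rng.normal(loc=c, scale=0.5, size=(50, 2))
         for c in ([0, 0], [10, 0], [0, 10])]
X = np.vstack(blobs)
centers, dist = farthest_first(X, 3, rng)
```

Note the failure mode motivating the paper: a single far-away outlier would be the farthest point and hijack a center, which is why outlier-robust variants of such greedy sampling are needed.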

Approximate Guarantees for Dictionary Learning

no code implementations28 May 2019 Aditya Bhaskara, Wai Ming Tai

The problem is formalized as factorizing a matrix $X (d \times n)$ (whose columns are the signals) as $X = AY$, where $A$ has a prescribed number $m$ of columns (typically $m \ll n$), and $Y$ has columns that are $k$-sparse (typically $k \ll d$).

Dictionary Learning
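The generative model in this formalization is easy to simulate; a sketch with arbitrary illustrative dimensions (this builds an instance of the problem, not a recovery algorithm):

```python
import numpy as np

rng = np.random.default_rng(5)
d, m, n, k = 20, 30, 200, 3             # signal dim, atoms, samples, sparsity

A = rng.normal(size=(d, m))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary atoms (m > d: overcomplete)

Y = np.zeros((m, n))
for j in range(n):
    support = rng.choice(m, size=k, replace=False)
    Y[support, j] = rng.normal(size=k)  # each column is exactly k-sparse

X = A @ Y                               # the observed signal matrix
```

The task is then to recover (approximately) both the dictionary A and the sparse coefficients Y from X alone.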

Smoothed Analysis in Unsupervised Learning via Decoupling

no code implementations29 Nov 2018 Aditya Bhaskara, Aidao Chen, Aidan Perreault, Aravindan Vijayaraghavan

Smoothed analysis is a powerful paradigm in overcoming worst-case intractability in unsupervised learning and high-dimensional data analysis.

Distributed Clustering via LSH Based Data Partitioning

no code implementations ICML 2018 Aditya Bhaskara, Maheshakya Wijewardena

Given the importance of clustering in the analysis of large-scale data, distributed algorithms for formulations such as k-means, k-median, etc.

Clustering

On Binary Embedding using Circulant Matrices

no code implementations20 Nov 2015 Felix X. Yu, Aditya Bhaskara, Sanjiv Kumar, Yunchao Gong, Shih-Fu Chang

To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix.
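The key computational point is that multiplying by a circulant matrix is a circular convolution, so the projection costs O(d log d) via the FFT instead of O(d^2). A minimal sketch (sign binarization of the raw projection only; the paper additionally randomizes and learns the circulant vector):

```python
import numpy as np

def circulant_binary_embedding(X, r):
    # circ(r) @ x is the circular convolution of r and x, so every row of X
    # can be projected in O(d log d) via the FFT, then binarized by sign
    proj = np.fft.ifft(np.fft.fft(X, axis=1) * np.fft.fft(r), axis=1).real
    return np.sign(proj)

rng = np.random.default_rng(8)
r = rng.normal(size=8)             # first column of the circulant matrix
X = rng.normal(size=(5, 8))        # five 8-dimensional data points
codes = circulant_binary_embedding(X, r)
```

The FFT route also means the projection needs only O(d) storage for r, versus O(d^2) for an unstructured random projection matrix.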

Sparse Solutions to Nonnegative Linear Systems and Applications

no code implementations7 Jan 2015 Aditya Bhaskara, Ananda Theertha Suresh, Morteza Zadimoghaddam

For learning a mixture of $k$ axis-aligned Gaussians in $d$ dimensions, we give an algorithm that outputs a mixture of $O(k/\epsilon^3)$ Gaussians that is $\epsilon$-close in statistical distance to the true distribution, without any separation assumptions.

More Algorithms for Provable Dictionary Learning

no code implementations3 Jan 2014 Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma

In dictionary learning, also known as sparse coding, the algorithm is given samples of the form $y = Ax$ where $x\in \mathbb{R}^m$ is an unknown random sparse vector and $A$ is an unknown dictionary matrix in $\mathbb{R}^{n\times m}$ (usually $m > n$, which is the overcomplete case).

Dictionary Learning

Smoothed Analysis of Tensor Decompositions

no code implementations14 Nov 2013 Aditya Bhaskara, Moses Charikar, Ankur Moitra, Aravindan Vijayaraghavan

We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension).

Tensor Decomposition
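The classical undercomplete baseline these smoothed-analysis results build on is Jennrich's simultaneous-diagonalization algorithm, which recovers the factors from two random contractions of the tensor; a sketch on a synthetic rank-n tensor (sizes and seeds are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
A, B, C = (rng.normal(size=(n, n)) for _ in range(3))

# T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r], i.e. T = sum_r a_r ⊗ b_r ⊗ c_r
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# contract the third mode with two random vectors
x, y = rng.normal(size=n), rng.normal(size=n)
Mx = np.einsum('ijk,k->ij', T, x)     # = A diag(C^T x) B^T
My = np.einsum('ijk,k->ij', T, y)     # = A diag(C^T y) B^T

# eigenvectors of Mx My^{-1} are the columns of A, up to order and scale
w, V = np.linalg.eig(Mx @ np.linalg.inv(My))
A_hat = np.real(V)
```

Generic random x, y give distinct eigenvalue ratios, which is what makes the eigenvectors identifiable; the overcomplete regime (rank much larger than dimension) treated in these papers needs substantially more machinery.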

Provable Bounds for Learning Some Deep Representations

no code implementations23 Oct 2013 Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma

The analysis of the algorithm reveals interesting structure of neural networks with random edge weights.

Uniqueness of Tensor Decompositions with Applications to Polynomial Identifiability

no code implementations30 Apr 2013 Aditya Bhaskara, Moses Charikar, Aravindan Vijayaraghavan

We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: we prove that given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error.

Topic Models
