no code implementations • 8 Nov 2023 • Benwei Shi, Aditya Bhaskara, Wai Ming Tai, Jeff M. Phillips
We show that a constant-size constant-error coreset for polytope distance is simple to maintain under merges of coresets.
no code implementations • 9 Jun 2023 • Harvey Dam, Vinu Joseph, Aditya Bhaskara, Ganesh Gopalakrishnan, Saurav Muralidharan, Michael Garland
E.g., it has been shown that mismatches between the full and compressed models can be biased towards under-represented classes.
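A minimal sketch of how such mismatches can be surfaced (function name and toy labels below are hypothetical, not from the paper) is to compute the per-class disagreement rate between the full and compressed models' predictions:

```python
import numpy as np

def per_class_mismatch(y_true, pred_full, pred_compressed):
    """Fraction of examples in each class where the compressed model's
    prediction disagrees with the full model's (illustrative only)."""
    return {int(c): float(np.mean(pred_full[y_true == c] != pred_compressed[y_true == c]))
            for c in np.unique(y_true)}

# toy example: class 2 (under-represented) suffers all the mismatches
y_true    = np.array([0, 0, 0, 0, 1, 1, 1, 2, 2])
pred_full = np.array([0, 0, 0, 0, 1, 1, 1, 2, 2])
pred_comp = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1])
print(per_class_mismatch(y_true, pred_full, pred_comp))  # {0: 0.0, 1: 0.0, 2: 1.0}
```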
no code implementations • 4 Nov 2022 • Aditya Bhaskara, Sreenivas Gollapudi, Sungjin Im, Kostas Kollias, Kamesh Munagala
For stochastic MAB, we also consider a stronger model where a probe reveals the reward values of the probed arms, and show that in this case, $k=3$ probes suffice to achieve parameter-independent constant regret of $O(n^2)$.
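A toy simulation may help fix the stronger probe model; the uniform probing policy below is a naive placeholder for illustration, not the paper's strategy:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 10, 3, 10000
means = rng.uniform(size=n)                 # Bernoulli arm means

total = 0.0
for _ in range(T):
    rewards = rng.binomial(1, means)                # realized rewards this round
    probed = rng.choice(n, size=k, replace=False)   # naively probe k random arms
    total += rewards[probed[np.argmax(rewards[probed])]]  # pull best probed arm
print(total / T, means.max())               # average reward vs. best arm's mean
```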
no code implementations • NeurIPS 2021 • Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit
We consider the online linear optimization problem, where at every step the algorithm plays a point $x_t$ in the unit ball, and suffers loss $\langle c_t, x_t\rangle$ for some cost vector $c_t$ that is then revealed to the algorithm.
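For context, a minimal projected online gradient descent baseline for this setting looks as follows (a standard textbook strategy, not the algorithm of the paper):

```python
import numpy as np

def ogd_unit_ball(costs, eta=0.1):
    """Projected online gradient descent: play x_t in the unit ball,
    suffer <c_t, x_t>, then step against the revealed cost vector."""
    x = np.zeros(costs.shape[1])
    total = 0.0
    for c in costs:
        total += float(c @ x)          # loss <c_t, x_t>
        x = x - eta * c                # gradient step
        nrm = np.linalg.norm(x)
        if nrm > 1.0:                  # project back onto the unit ball
            x /= nrm
    return total

rng = np.random.default_rng(0)
print(ogd_unit_ball(rng.normal(size=(1000, 5))))
```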
1 code implementation • 3 Dec 2020 • Vinu Joseph, Shoaib Ahmed Siddiqui, Aditya Bhaskara, Ganesh Gopalakrishnan, Saurav Muralidharan, Michael Garland, Sheraz Ahmed, Andreas Dengel
With the rise in edge-computing devices, there has been an increasing demand to deploy energy and resource-efficient models.
no code implementations • NeurIPS 2020 • Aditya Bhaskara, Amin Karbasi, Silvio Lattanzi, Morteza Zadimoghaddam
In this paper, we provide an efficient approximation algorithm for finding the maximum a posteriori (MAP) configuration of size $k$ for Determinantal Point Processes (DPPs) in the online setting, where the data points arrive in an arbitrary order and the algorithm cannot discard the selected elements from its local memory.
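For reference, the standard offline greedy baseline for DPP MAP inference (repeatedly adding the item that most increases $\log\det$ of the kernel submatrix) can be sketched as follows; the paper's contribution is the online counterpart, which this sketch does not capture:

```python
import numpy as np

def greedy_dpp_map(L, k):
    """Offline greedy for DPP MAP: grow S by the item maximizing
    log det(L_S). Illustrative baseline, not the online algorithm."""
    S = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(L.shape[0]):
            if i in S:
                continue
            sign, val = np.linalg.slogdet(L[np.ix_(S + [i], S + [i])])
            if sign > 0 and val > best_val:
                best, best_val = i, val
        S.append(best)
    return S

rng = np.random.default_rng(0)
V = rng.normal(size=(20, 5))
print(greedy_dpp_map(V @ V.T + 1e-6 * np.eye(20), 4))   # PSD kernel, k = 4
```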
no code implementations • NeurIPS 2020 • Aditya Bhaskara, Sreenivas Gollapudi, Kostas Kollias, Kamesh Munagala
Inspired by traffic routing applications, we consider the problem of finding the shortest path from a source $s$ to a destination $t$ in a graph, when the lengths of the edges are unknown.
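The feedback loop in this setting can be illustrated with a toy "optimistic estimates" heuristic: route on (under)estimated lengths, then update estimates only for edges actually traversed. This is a sketch of the model (graph and policy made up for illustration, using networkx), not the paper's algorithm:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.complete_graph(8)
for u, v in G.edges:
    G.edges[u, v]['true'] = rng.uniform(1, 10)   # unknown to the algorithm
    G.edges[u, v]['est'] = 1.0                   # optimistic initial estimate

s, t = 0, 7
for _ in range(20):
    path = nx.shortest_path(G, s, t, weight='est')   # route on current estimates
    for u, v in zip(path, path[1:]):                 # observe traversed edges
        G.edges[u, v]['est'] = G.edges[u, v]['true']
print(path, sum(G.edges[u, v]['true'] for u, v in zip(path, path[1:])))
```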
no code implementations • NeurIPS 2020 • Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit
We study an online linear optimization (OLO) problem in which the learner is provided access to $K$ "hint" vectors in each round prior to making a decision.
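One natural baseline for the $K$-hint setting is to treat the hints as experts and blend them with multiplicative weights; the sketch below illustrates the interaction protocol and is not the paper's algorithm:

```python
import numpy as np

def mw_over_hints(costs, hints, eta=0.5):
    """Blend the K hints with multiplicative weights: play the projected,
    weighted negative-hint direction, then reweight each hint by how well
    it aligned with the revealed cost. Illustrative baseline only."""
    T, K, _ = hints.shape
    w, total = np.ones(K), 0.0
    for t in range(T):
        x = -(w / w.sum()) @ hints[t]             # follow the blended hints
        nrm = np.linalg.norm(x)
        if nrm > 1.0:
            x /= nrm                              # stay inside the unit ball
        total += float(costs[t] @ x)
        w *= np.exp(eta * (hints[t] @ costs[t]))  # reward hints aligned with c_t
        w /= w.max()                              # numerical stability
    return total

rng = np.random.default_rng(0)
costs = rng.normal(size=(200, 5))
hints = rng.normal(size=(200, 3, 5))
hints[:, 0] = costs / np.linalg.norm(costs, axis=1, keepdims=True)  # one good hint
print(mw_over_hints(costs, hints))
```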
no code implementations • 19 Jun 2020 • Mohsen Abbasi, Aditya Bhaskara, Suresh Venkatasubramanian
A core principle in most clustering problems is that a cluster center should be representative of its cluster, by being "close" to the points associated with it.
no code implementations • ICML 2020 • Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit
We consider a variant of the classical online linear optimization problem in which at every step, the online player receives a "hint" vector before choosing the action for that round.
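In the single-hint setting, the simplest conceivable strategy is to play the unit vector opposite the hint; when hints correlate with the upcoming cost, every round's loss is negative. A toy sketch, with the correlation model made up for illustration:

```python
import numpy as np

def follow_the_hint(costs, hints):
    """Play x_t = -h_t / ||h_t||; suffer loss <c_t, x_t>. Naive baseline."""
    return sum(float(c @ (-h / np.linalg.norm(h))) for c, h in zip(costs, hints))

rng = np.random.default_rng(0)
costs = rng.normal(size=(100, 4))
hints = costs + 0.5 * rng.normal(size=(100, 4))   # hints correlated with costs
print(follow_the_hint(costs, hints))              # total loss is negative
```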
1 code implementation • NeurIPS 2019 • Aditya Bhaskara, Pruthuvi Maheshakya Wijewardena
The server performs an aggregation and computes the desired eigenvalues and eigenvectors.
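As a generic illustration of this aggregate-then-decompose pattern (not the paper's communication-efficient sketching scheme), each machine can send a small covariance summary that the server sums and eigendecomposes:

```python
import numpy as np

rng = np.random.default_rng(0)
parts = [rng.normal(size=(100, 6)) for _ in range(4)]   # data split over machines

summaries = [X.T @ X for X in parts]        # each machine sends a d x d summary

cov = sum(summaries) / sum(len(X) for X in parts)   # server-side aggregation
eigvals, eigvecs = np.linalg.eigh(cov)
print(eigvals[::-1][:3])                    # top eigenvalues (descending)
```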
1 code implementation • NeurIPS 2019 • Aditya Bhaskara, Sharvaree Vadgama, Hong Xu
On the one hand, they possess good theoretical approximation guarantees, and on the other, they are fast and easy to implement.
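A classic instance of such a greedy algorithm is Gonzalez's farthest-first traversal for $k$-center, shown below for illustration (the paper studies greedy methods of this flavor; this sketch is not necessarily its exact variant):

```python
import numpy as np

def farthest_first(X, k, seed=0):
    """Gonzalez's greedy: repeatedly pick the point farthest from the
    chosen centers; a 2-approximation for k-center."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]
    dist = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        i = int(np.argmax(dist))                    # farthest remaining point
        centers.append(i)
        dist = np.minimum(dist, np.linalg.norm(X - X[i], axis=1))
    return centers

rng = np.random.default_rng(1)
print(farthest_first(rng.normal(size=(200, 2)), 5))
```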
no code implementations • 28 May 2019 • Aditya Bhaskara, Wai Ming Tai
The problem is formalized as factorizing a matrix $X$ (of size $d \times n$, whose columns are the signals) as $X = AY$, where $A$ has a prescribed number $m$ of columns (typically $m \ll n$), and $Y$ has columns that are $k$-sparse (typically $k \ll d$).
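An end-to-end sketch of this factorization using scikit-learn (which uses the transposed rows-as-samples convention) makes the shapes concrete; the solver here is a generic one, not the paper's method:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

d, n, m, k = 20, 200, 30, 3        # signal dim, #signals, dict size, sparsity
rng = np.random.default_rng(0)

A_true = rng.normal(size=(d, m))
Y_true = np.zeros((m, n))
for j in range(n):                 # each column of Y is k-sparse
    support = rng.choice(m, size=k, replace=False)
    Y_true[support, j] = rng.normal(size=k)
X = A_true @ Y_true                # X (d x n) = A Y, as in the text

dl = DictionaryLearning(n_components=m, transform_algorithm='omp',
                        transform_n_nonzero_coefs=k, random_state=0)
Y_hat = dl.fit_transform(X.T).T    # sparse codes, back in (m x n) convention
A_hat = dl.components_.T           # learned dictionary, (d x m)
print(np.linalg.norm(X - A_hat @ Y_hat) / np.linalg.norm(X))  # relative error
```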
no code implementations • 29 Nov 2018 • Aditya Bhaskara, Aidao Chen, Aidan Perreault, Aravindan Vijayaraghavan
Smoothed analysis is a powerful paradigm in overcoming worst-case intractability in unsupervised learning and high-dimensional data analysis.
no code implementations • ICML 2018 • Aditya Bhaskara, Maheshakya Wijewardena
Given the importance of clustering in the analysis of large-scale data, distributed algorithms for formulations such as k-means, k-median, etc. have been extensively studied.
no code implementations • NeurIPS 2016 • Aditya Bhaskara, Mehrdad Ghadiri, Vahab Mirrokni, Ola Svensson
We first study the approximation quality of the algorithm by comparing it with the LP objective.
no code implementations • 20 Nov 2015 • Felix X. Yu, Aditya Bhaskara, Sanjiv Kumar, Yunchao Gong, Shih-Fu Chang
To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix.
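The key computational point of CBE is that multiplying by a circulant matrix is a circular convolution, so the projection can be applied in $O(d \log d)$ time with the FFT. A minimal sketch of the encoding step follows (random coordinate sign flips used to decorrelate the bits are omitted):

```python
import numpy as np
from scipy.linalg import circulant

def cbe_encode(x, r):
    """Binary code sign(R x) for circulant R = circ(r), computed in
    O(d log d) via the FFT (circulant multiply equals circular convolution)."""
    proj = np.fft.ifft(np.fft.fft(r) * np.fft.fft(x)).real
    return (proj >= 0).astype(np.uint8)

rng = np.random.default_rng(0)
d = 16
x, r = rng.normal(size=d), rng.normal(size=d)

# sanity check against the explicit circulant matrix
assert np.array_equal(cbe_encode(x, r), (circulant(r) @ x >= 0).astype(np.uint8))
print(cbe_encode(x, r))
```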
no code implementations • 7 Jan 2015 • Aditya Bhaskara, Ananda Theertha Suresh, Morteza Zadimoghaddam
For learning a mixture of $k$ axis-aligned Gaussians in $d$ dimensions, we give an algorithm that outputs a mixture of $O(k/\epsilon^3)$ Gaussians that is $\epsilon$-close in statistical distance to the true distribution, without any separation assumptions.
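For contrast with the guarantee above, the standard EM baseline for axis-aligned (diagonal-covariance) mixtures is a one-liner with scikit-learn; this is only a familiar reference point, not the paper's algorithm:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
k, d, n = 3, 2, 3000
means = rng.normal(scale=5.0, size=(k, d))
stds = rng.uniform(0.5, 1.5, size=(k, d))       # axis-aligned: diagonal covariance
z = rng.integers(k, size=n)
X = means[z] + stds[z] * rng.normal(size=(n, d))

gm = GaussianMixture(n_components=k, covariance_type='diag', random_state=0).fit(X)
print(np.round(gm.means_, 2))                   # estimated component means
```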
no code implementations • NeurIPS 2014 • Mohammadhossein Bateni, Aditya Bhaskara, Silvio Lattanzi, Vahab Mirrokni
Large-scale clustering of data points in metric spaces is an important problem in mining big data sets.
no code implementations • 3 Jan 2014 • Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma
In dictionary learning, also known as sparse coding, the algorithm is given samples of the form $y = Ax$ where $x\in \mathbb{R}^m$ is an unknown random sparse vector and $A$ is an unknown dictionary matrix in $\mathbb{R}^{n\times m}$ (usually $m > n$, which is the overcomplete case).
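Sampling from this generative model is straightforward and makes the overcomplete regime concrete; dimensions below follow the paper's notation ($m > n$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 30, 60, 4                       # m > n: overcomplete dictionary
A = rng.normal(size=(n, m))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary columns

def sample():
    """One sample y = A x with x a random k-sparse vector."""
    x = np.zeros(m)
    support = rng.choice(m, size=k, replace=False)
    x[support] = rng.normal(size=k)
    return A @ x, x

y, x = sample()
print(y.shape, np.count_nonzero(x))       # (30,) 4
```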
no code implementations • 14 Nov 2013 • Aditya Bhaskara, Moses Charikar, Ankur Moitra, Aravindan Vijayaraghavan
We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension).
no code implementations • 23 Oct 2013 • Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma
The analysis of the algorithm reveals interesting structure of neural networks with random edge weights.
no code implementations • 30 Apr 2013 • Aditya Bhaskara, Moses Charikar, Aravindan Vijayaraghavan
We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: we prove that given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error.
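The exact (noise-free) version of this uniqueness phenomenon is witnessed by Jennrich's simultaneous-diagonalization algorithm, sketched below; the paper's contribution is making recovery robust to inverse-polynomial error, which this sketch does not handle:

```python
import numpy as np

def jennrich(T, r, seed=0):
    """Recover the A-factor of T = sum_i a_i ⊗ b_i ⊗ c_i via two random
    tensor contractions and an eigendecomposition (exact-case sketch)."""
    rng = np.random.default_rng(seed)
    x, y = rng.normal(size=T.shape[2]), rng.normal(size=T.shape[2])
    M1 = np.einsum('ijk,k->ij', T, x)       # = A diag(C^T x) B^T
    M2 = np.einsum('ijk,k->ij', T, y)       # = A diag(C^T y) B^T
    vals, vecs = np.linalg.eig(M1 @ np.linalg.pinv(M2))
    top = np.argsort(-np.abs(vals))[:r]     # eigenvectors of the nonzero
    return vecs[:, top].real                # eigenvalues are columns of A

rng = np.random.default_rng(1)
r = 3
A, B, C = (rng.normal(size=(5, r)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
A_hat = jennrich(T, r)
# each recovered column aligns with some true column up to sign and scale
cos = (A / np.linalg.norm(A, axis=0)).T @ (A_hat / np.linalg.norm(A_hat, axis=0))
print(np.round(np.abs(cos), 2))             # a permutation-like 0/1 pattern
```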