no code implementations • 17 Feb 2025 • Yu Xia, Subhojyoti Mukherjee, Zhouhang Xie, Junda Wu, Xintong Li, Ryan Aponte, Hanjia Lyu, Joe Barrow, Hongjie Chen, Franck Dernoncourt, Branislav Kveton, Tong Yu, Ruiyi Zhang, Jiuxiang Gu, Nesreen K. Ahmed, Yu Wang, Xiang Chen, Hanieh Deilamsalehy, Sungchul Kim, Zhengmian Hu, Yue Zhao, Nedim Lipka, Seunghyun Yoon, Ting-Hao Kenneth Huang, Zichao Wang, Puneet Mathur, Soumyabrata Pal, Koyel Mukherjee, Zhehao Zhang, Namyong Park, Thien Huu Nguyen, Jiebo Luo, Ryan A. Rossi, Julian McAuley
Active Learning (AL) has been a powerful paradigm for improving model efficiency and performance by selecting the most informative data points for labeling and training.
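For concreteness, a minimal pool-based selection step of the kind this survey covers, using margin-based uncertainty sampling; the function names and batching scheme are illustrative, not from the paper:

```python
import numpy as np

def uncertainty_sample(predict_proba, X_pool, batch_size=10):
    """Pick the pool points the current model is least sure about.

    predict_proba: callable mapping an (n, d) array to (n, k) class
    probabilities; X_pool: unlabeled candidate points.
    """
    proba = predict_proba(X_pool)
    # Margin uncertainty: a small gap between the top two classes means
    # the model is nearly indifferent, so the label is informative.
    sorted_p = np.sort(proba, axis=1)
    margins = sorted_p[:, -1] - sorted_p[:, -2]
    return np.argsort(margins)[:batch_size]  # indices to send for labeling
```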
no code implementations • 7 Dec 2024 • Soumya Suvra Ghosal, Soumyabrata Pal, Koyel Mukherjee, Dinesh Manocha
Large Language Models (LLMs) have recently demonstrated impressive few-shot learning capabilities through in-context learning (ICL).
no code implementations • 26 Oct 2024 • Adit Jain, Soumyabrata Pal, Sunav Choudhary, Ramasuri Narayanam, Vikram Krishnamurthy
This paper considers the problem of annotating datapoints using an expert with only a few annotation rounds in a label-scarce setting.
no code implementations • 26 Oct 2024 • Aniket Das, Dheeraj Nagaraj, Soumyabrata Pal, Arun Suggala, Prateek Varshney
We consider the problem of high-dimensional heavy-tailed statistical estimation in the streaming setting, which is much harder than the traditional batch setting due to memory constraints.
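A common primitive in this line of work is clipping incoming samples before averaging them, which keeps memory at $O(d)$ while taming heavy tails. The sketch below is a generic clipped streaming mean estimator written under that assumption, not the authors' algorithm:

```python
import numpy as np

def clipped_streaming_mean(stream, clip_radius=5.0):
    """O(d)-memory mean estimate that is robust to heavy tails: each
    incoming sample is clipped to a ball of radius `clip_radius` around
    the running estimate before being averaged in."""
    est, t = None, 0
    for x in stream:
        x = np.asarray(x, dtype=float)
        if est is None:
            est, t = x.copy(), 1
            continue
        diff = x - est
        norm = np.linalg.norm(diff)
        if norm > clip_radius:          # truncate heavy-tailed outliers
            diff *= clip_radius / norm
        t += 1
        est += diff / t                 # streaming (online) average update
    return est
```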
no code implementations • 16 Oct 2024 • Akriti Jain, Saransh Sharma, Koyel Mukherjee, Soumyabrata Pal
To address both limitations, we propose FiRST, an algorithm that reduces inference latency by using layer-specific routers to adaptively select a subset of transformer layers for each input sequence: the prompt (during the prefill stage) decides which layers will be skipped during decoding.
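A hypothetical sketch of this layer-routing idea; the module names, gating rule, and pooling below are assumptions, not the FiRST implementation:

```python
import torch
import torch.nn as nn

class LayerRouter(nn.Module):
    """Hypothetical per-layer router: reads a pooled prompt
    representation and emits a keep/skip decision for its layer."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, 1)

    def forward(self, prompt_hidden):            # (batch, seq, hidden)
        pooled = prompt_hidden.mean(dim=1)       # summarize the prompt
        return torch.sigmoid(self.gate(pooled)) > 0.5  # keep this layer?

def forward_with_skips(layers, routers, hidden):
    # Decisions are made once from the prompt (prefill) and then reused
    # for every decoding step, mirroring the description above.
    keep = [r(hidden).any() for r in routers]
    for layer, k in zip(layers, keep):
        if k:
            hidden = layer(hidden)
    return hidden
```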
no code implementations • 11 Aug 2024 • Dheeraj Baby, Soumyabrata Pal
In the regime where $M, N \gg T$, we propose two distinct computationally efficient algorithms for recommending items to users and analyze them under the benign "hott items" assumption. 1) First, for $S = 1$, under additional incoherence/smoothness assumptions on $R$, we propose the phased algorithm PhasedClusterElim.
no code implementations • 17 Jan 2023 • Soumyabrata Pal, Arun Sai Suggala, Karthikeyan Shanmugam, Prateek Jain
Instead, we propose LATTICE (Latent bAndiTs via maTrIx ComplEtion), which exploits the latent cluster structure to achieve the minimax optimal regret of $\widetilde{O}(\sqrt{(M+N)T})$ when the number of clusters is $\widetilde{O}(1)$.
no code implementations • 29 Oct 2022 • Namiko Matsumoto, Arya Mazumdar, Soumyabrata Pal
A universal measurement matrix for one-bit compressed sensing (1bCS) refers to a single set of measurements that works for all sparse signals.
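A small worked example of the 1bCS measurement model, where only the signs of the linear measurements are observed; the dimensions and the Gaussian measurement matrix are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 20, 12, 2                    # signal length, measurements, sparsity

x = np.zeros(n)                        # a k-sparse signal
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n))        # one fixed (candidate universal) matrix
y = np.sign(A @ x)                     # 1-bit measurements: only signs survive
print(y)
```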
no code implementations • 7 Oct 2022 • Soumyabrata Pal, Prateek Varshney, Prateek Jain, Abhradeep Guha Thakurta, Gagan Madan, Gaurav Aggarwal, Pradeep Shenoy, Gaurav Srivastava
We then study the framework in the linear setting, where the problem reduces to that of estimating the sum of a rank-$r$ and a $k$-column sparse matrix using a small number of linear measurements.
no code implementations • 8 Sep 2022 • Prateek Jain, Soumyabrata Pal
In each round, the algorithm recommends one item per user, for which it gets a (noisy) reward sampled from a low-rank user-item preference matrix.
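A minimal simulation of this interaction protocol, with a placeholder random policy standing in for the actual bandit algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, r, T = 50, 40, 3, 100            # users, items, rank, rounds

U = rng.standard_normal((M, r))
V = rng.standard_normal((N, r))
P = U @ V.T                            # hidden low-rank preference matrix

for t in range(T):
    items = rng.integers(0, N, size=M)  # placeholder policy: random arms
    # One noisy reward per user, sampled around the true preference.
    rewards = P[np.arange(M), items] + rng.standard_normal(M)
```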
no code implementations • 22 Jun 2022 • Sainyam Galhotra, Arya Mazumdar, Soumyabrata Pal, Barna Saha
We show that a simple triangle-counting algorithm to detect communities in the geometric block model is near-optimal.
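A sketch of the triangle-counting idea under one simple reading: intra-community edges in a geometric graph close many more triangles than inter-community ones, so pruning low-triangle edges and taking connected components recovers the communities. The threshold and pruning rule here are assumptions, not the paper's exact procedure:

```python
import numpy as np

def triangle_count_communities(adj, threshold):
    """Keep an edge only if its endpoints share at least `threshold`
    common neighbors, then read communities off the connected
    components of the pruned graph."""
    n = adj.shape[0]
    common = adj @ adj                  # common[u, v] = # shared neighbors
    kept = adj * (common >= threshold)

    labels = -np.ones(n, dtype=int)
    visited = np.zeros(n, dtype=bool)
    c = 0
    for s in range(n):
        if visited[s]:
            continue
        stack = [s]
        while stack:                    # DFS over the pruned graph
            u = stack.pop()
            if visited[u]:
                continue
            visited[u] = True
            labels[u] = c
            stack.extend(np.flatnonzero(kept[u]))
        c += 1
    return labels
```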
no code implementations • 26 May 2022 • Avishek Ghosh, Arya Mazumdar, Soumyabrata Pal, Rajat Sen
In this paper, we show that a version of the popular alternating minimization (AM) algorithm finds the best-fit lines in a dataset even when a realizable model is not assumed, under some regularity conditions on the dataset and the initial points, and thereby provides a solution to the ERM problem.
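A minimal sketch of such an alternating minimization loop; the random initialization and fixed iteration count are simplified assumptions:

```python
import numpy as np

def am_best_fit_lines(X, y, k=2, iters=50, seed=0):
    """Alternating minimization for k best-fit lines: alternately assign
    each point to its best current line, then refit each line by least
    squares on its assigned points."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    betas = rng.standard_normal((k, d))          # initial lines
    for _ in range(iters):
        resid = (X @ betas.T - y[:, None]) ** 2  # (n, k) squared residuals
        assign = resid.argmin(axis=1)            # best line per point
        for j in range(k):
            idx = assign == j
            if idx.any():                        # refit line j
                betas[j], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return betas, assign
```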
no code implementations • 24 Feb 2022 • Arya Mazumdar, Soumyabrata Pal
Sparsity of parameter vectors is a natural constraint in a variety of settings, and support recovery is a major step towards parameter estimation.
no code implementations • 2 Oct 2021 • Wasim Huleihel, Arya Mazumdar, Soumyabrata Pal
Under the alternative, there is a subgraph on $k$ vertices with edge probability $p>q$.
no code implementations • 2 Sep 2021 • Sami Davies, Arya Mazumdar, Soumyabrata Pal, Cyrus Rashtchian
Mixtures of high dimensional Gaussian distributions have been studied extensively in statistics and learning theory.
no code implementations • 19 Jul 2021 • Arya Mazumdar, Soumyabrata Pal
With universality, it is known that $\tilde{\Theta}(k^2)$ 1bCS measurements are necessary and sufficient for support recovery (where $k$ denotes the sparsity).
no code implementations • NeurIPS 2021 • Venkata Gandikota, Arya Mazumdar, Soumyabrata Pal
In this work, we study the number of measurements sufficient for recovering the supports of all the component vectors in a mixture in both these models.
1 code implementation • NeurIPS 2021 • Wasim Huleihel, Arya Mazumdar, Soumyabrata Pal
In particular, we provide algorithms for fuzzy clustering in this setting that ask $O(\mathsf{poly}(k)\log n)$ similarity queries and run in polynomial time, where $n$ is the number of items.
no code implementations • 29 Jan 2021 • Wasim Huleihel, Soumyabrata Pal, Ofer Shayevitz
One of the main surprising observations in our experiments is that our algorithm outperforms other static algorithms even when preferences do not change over time.
no code implementations • NeurIPS 2020 • Venkata Gandikota, Arya Mazumdar, Soumyabrata Pal
We study the hitherto unexplored problem of upper-bounding the query complexity of recovering all the hyperplanes, especially in the case where the hyperplanes are sparse.
no code implementations • ICML 2020 • Arya Mazumdar, Soumyabrata Pal
Mixture of linear regressions is a popular learning theoretic model that is used widely to represent heterogeneous data.
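The generative model, for concreteness: each sample draws a hidden component uniformly at random and responds linearly through that component's parameter vector. A minimal sampler, with all names illustrative:

```python
import numpy as np

def sample_mlr(betas, n, noise=0.1, seed=0):
    """Generate from a mixture of linear regressions: each sample draws
    a hidden component z uniformly, then y = <x, beta_z> + noise."""
    rng = np.random.default_rng(seed)
    k, d = betas.shape
    X = rng.standard_normal((n, d))
    z = rng.integers(0, k, size=n)              # latent component labels
    y = np.einsum("nd,nd->n", X, betas[z]) + noise * rng.standard_normal(n)
    return X, y, z
```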
no code implementations • 19 Jan 2020 • Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, Soumyabrata Pal
Our second approach uses algebraic and combinatorial tools and applies to binomial mixtures with shared trial parameter $N$ and differing success parameters, as well as to mixtures of geometric distributions.
no code implementations • NeurIPS 2019 • Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, Soumyabrata Pal
Our techniques are quite different from those in the previous work: for the noiseless case, we rely on a property of sparse polynomials, and for the noisy case, we provide new connections to learning Gaussian mixtures and use ideas from the theory of ...
no code implementations • 30 Oct 2019 • Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, Soumyabrata Pal
In the problem of learning mixtures of linear regressions, the goal is to learn a collection of signal vectors from a sequence of (possibly noisy) linear measurements, where each measurement is evaluated on an unknown signal drawn uniformly from this collection.
no code implementations • NeurIPS 2019 • Wasim Huleihel, Arya Mazumdar, Muriel Médard, Soumyabrata Pal
In this paper, we look at the more practical scenario of overlapping clusters, and provide upper bounds (with algorithms) on the sufficient number of queries.
no code implementations • 31 Mar 2019 • Arya Mazumdar, Soumyabrata Pal
In this paper, we show that a recently popular model of semi-supervised clustering is equivalent to locally encodable source coding.
no code implementations • 29 Jun 2018 • Raj Kumar Maity, Arya Mazumdar, Soumyabrata Pal
Recently, Ermon et al. (2013) pioneered a way to practically compute approximations to large-scale counting or discrete integration problems using random hashes.
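The core trick, in a deliberately brute-force toy version: adding random XOR (parity) constraints halves the solution set in expectation, so the number of constraints a set survives estimates its log-size. This sketch enumerates solutions explicitly and is only meant to illustrate the hashing idea, not to scale:

```python
import numpy as np

def hash_count_estimate(solutions, n_bits, trials=30, seed=0):
    """Estimate |S| as roughly 2^m, where m is the largest number of
    random parity constraints that S still satisfies. `solutions` is an
    explicit list of n_bits-long 0/1 arrays so everything is checkable."""
    rng = np.random.default_rng(seed)
    S = np.array(solutions)
    estimates = []
    for _ in range(trials):
        m, survivors = 0, S
        while len(survivors) > 0 and m < n_bits:
            a = rng.integers(0, 2, size=n_bits)   # random parity constraint
            b = rng.integers(0, 2)
            survivors = survivors[(survivors @ a) % 2 == b]
            if len(survivors) == 0:
                break
            m += 1                                # survived one more hash
        estimates.append(2 ** m)
    return float(np.median(estimates))
```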
no code implementations • 12 Apr 2018 • Sainyam Galhotra, Arya Mazumdar, Soumyabrata Pal, Barna Saha
Our next contribution is in using the connectivity of random annulus graphs to provide necessary and sufficient conditions for efficient recovery of communities for {\em the geometric block model} (GBM).
no code implementations • NeurIPS 2017 • Arya Mazumdar, Soumyabrata Pal
In this paper, we show that a recently popular model of semi-supervised clustering is equivalent to locally encodable source coding.
no code implementations • 16 Sep 2017 • Sainyam Galhotra, Arya Mazumdar, Soumyabrata Pal, Barna Saha
To capture the inherent geometric features of many community detection problems, we propose to use a new random graph model of communities that we call a Geometric Block Model.
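A toy sampler for a two-community version of such a model, with latent positions on the unit circle and connection radii standing in for the intra/inter-community parameters (all specifics here are illustrative assumptions):

```python
import numpy as np

def sample_gbm(n, r_in, r_out, seed=0):
    """Sample a two-community geometric block model: each vertex gets a
    uniform latent point on the circle; vertices link when their angular
    distance is below r_in inside a community and r_out across. Taking
    r_in > r_out plays the role of p > q in the stochastic block model."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, size=n)      # latent positions
    comm = rng.integers(0, 2, size=n)              # community labels
    dist = np.abs(theta[:, None] - theta[None, :])
    dist = np.minimum(dist, 2 * np.pi - dist)      # wrap-around distance
    radius = np.where(comm[:, None] == comm[None, :], r_in, r_out)
    adj = (dist <= radius).astype(int)
    np.fill_diagonal(adj, 0)
    return adj, comm
```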