no code implementations • 10 Dec 2024 • Nikos Tsikouras, Constantine Caramanis, Christos Tzamos
A first answer is no: as we show, the distance-preserving objective of Johnson-Lindenstrauss (JL) has a non-convex landscape over the space of projection matrices, with many bad stationary points.
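For context, a minimal numpy sketch of the classical JL guarantee this objective relates to: a random Gaussian projection (not an optimized one) already preserves pairwise distances up to small distortion. All sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 1000, 200                  # points, ambient dim, target dim
X = rng.normal(size=(n, d))

# Classical JL map: a random Gaussian projection scaled by 1/sqrt(k).
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P

def pairwise_dists(Z):
    diff = Z[:, None, :] - Z[None, :, :]
    D = np.sqrt((diff ** 2).sum(-1))
    return D[np.triu_indices(len(Z), k=1)]

# All pairwise distance ratios concentrate around 1.
ratio = pairwise_dists(Y) / pairwise_dists(X)
print(f"distance ratios in [{ratio.min():.3f}, {ratio.max():.3f}]")
```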
no code implementations • 14 Oct 2024 • Litu Rout, Yujia Chen, Nataniel Ruiz, Constantine Caramanis, Sanjay Shakkottai, Wen-Sheng Chu
Although Diffusion Models (DMs) have recently dominated the field of generative modeling for images, their inversion presents faithfulness and editability challenges due to nonlinearities in drift and diffusion.
no code implementations • 3 Jun 2024 • Jeongyeol Kwon, Shie Mannor, Constantine Caramanis, Yonathan Efroni
Our result builds off a new perspective on the role of off-policy evaluation guarantees and coverage coefficients in LMDPs, a perspective that has been overlooked in the context of exploration in partially observed environments.
1 code implementation • 27 May 2024 • Litu Rout, Yujia Chen, Nataniel Ruiz, Abhishek Kumar, Constantine Caramanis, Sanjay Shakkottai, Wen-Sheng Chu
Existing training-free approaches exhibit difficulties in (a) style extraction from reference images in the absence of additional style or content text descriptions, (b) unwanted content leakage from reference style images, and (c) effective composition of style and content.
no code implementations • CVPR 2024 • Litu Rout, Yujia Chen, Abhishek Kumar, Constantine Caramanis, Sanjay Shakkottai, Wen-Sheng Chu
To the best of our knowledge, this is the first work to offer an efficient second-order approximation for solving inverse problems using latent diffusion and for editing real-world images with corruptions.
no code implementations • 11 Oct 2023 • Jeongyeol Kwon, Yonathan Efroni, Shie Mannor, Constantine Caramanis
In such an environment, the latent information remains fixed throughout each episode, since the identity of the user does not change during an interaction.
no code implementations • 8 Oct 2023 • Constantine Caramanis, Dimitris Fotakis, Alkis Kalavasis, Vasilis Kontonis, Christos Tzamos
Deep Neural Networks and Reinforcement Learning methods have empirically shown great promise in tackling challenging combinatorial problems.
1 code implementation • NeurIPS 2023 • Litu Rout, Negin Raoof, Giannis Daras, Constantine Caramanis, Alexandros G. Dimakis, Sanjay Shakkottai
We present the first framework to solve linear inverse problems leveraging pre-trained latent diffusion models.
no code implementations • 15 Jun 2023 • Alexia Atsidakou, Branislav Kveton, Sumeet Katariya, Constantine Caramanis, Sujay Sanghavi
In a multi-armed bandit, we obtain $O(c_\Delta \log n)$ and $O(c_h \log^2 n)$ upper bounds for an upper confidence bound algorithm, where $c_h$ and $c_\Delta$ are constants depending on the prior distribution and the gaps of bandit instances sampled from it, respectively.
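For reference, a minimal sketch of the kind of upper confidence bound algorithm such guarantees describe. This is the standard frequentist UCB1 index on Bernoulli arms, not the paper's Bayesian analysis; the prior-dependent constants $c_h, c_\Delta$ are not computed here.

```python
import numpy as np

def ucb1(means, n_rounds=10000, seed=0):
    """Play a Bernoulli bandit with the classical UCB1 index."""
    rng = np.random.default_rng(seed)
    K = len(means)
    counts = np.zeros(K)
    sums = np.zeros(K)
    regret = 0.0
    for t in range(1, n_rounds + 1):
        if t <= K:                       # play each arm once to initialize
            arm = t - 1
        else:                            # empirical mean + confidence bonus
            index = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(index))
        reward = rng.random() < means[arm]
        counts[arm] += 1
        sums[arm] += reward
        regret += max(means) - means[arm]
    return regret

print(ucb1([0.5, 0.6, 0.7]))  # pseudo-regret grows as O(log n)
```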
no code implementations • 22 Apr 2023 • Dhiraj Murthy, Constantine Caramanis, Koustav Rudra
Individuals involved in gang-related activity use mainstream social media including Facebook and Twitter to express taunts and threats as well as grief and memorializing.
no code implementations • 13 Feb 2023 • Matthew Faw, Litu Rout, Constantine Caramanis, Sanjay Shakkottai
Despite this richness, an emerging line of work achieves the $\widetilde{\mathcal{O}}(\frac{1}{\sqrt{T}})$ rate of convergence when the noise of the stochastic gradients is deterministically and uniformly bounded.
no code implementations • 2 Feb 2023 • Litu Rout, Advait Parulekar, Constantine Caramanis, Sanjay Shakkottai
To the best of our knowledge, this is the first linear convergence result for a diffusion-based image inpainting algorithm.
no code implementations • 5 Oct 2022 • Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor
We consider episodic reinforcement learning in reward-mixing Markov decision processes (RMMDPs): at the beginning of every episode nature randomly picks a latent reward model among $M$ candidates and an agent interacts with the MDP throughout the episode for $H$ time steps.
no code implementations • 5 Oct 2022 • Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor
Then, through a method-of-moments approach, we design a procedure that provably learns a near-optimal policy with $O(\texttt{poly}(A) + \texttt{poly}(M, H)^{\min(M, H)})$ interactions.
no code implementations • 29 May 2022 • Orestis Papadigenopoulos, Constantine Caramanis, Sanjay Shakkottai
Even assuming prior knowledge of the mean payoff functions, computing an optimal plan in the above model is NP-hard, while the state of the art is a $1/4$-approximation algorithm for the case where at most one arm can be played per round.
no code implementations • 26 May 2022 • Alexia Atsidakou, Constantine Caramanis, Evangelia Gergatsouli, Orestis Papadigenopoulos, Christos Tzamos
Pandora's Box is a fundamental stochastic optimization problem, where the decision-maker must find a good alternative while minimizing the search cost of exploring the value of each alternative.
no code implementations • 11 Feb 2022 • Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel Ward
We study convergence rates of AdaGrad-Norm as an exemplar of adaptive stochastic gradient methods (SGD), where the step sizes change based on observed stochastic gradients, for minimizing non-convex, smooth objectives.
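A minimal sketch of the AdaGrad-Norm update being analyzed, on an illustrative smooth non-convex objective: a single scalar step size is adapted by the accumulated squared gradient norms.

```python
import numpy as np

def adagrad_norm(grad, x0, eta=1.0, b0=1e-2, n_steps=500):
    """AdaGrad-Norm: x_{t+1} = x_t - (eta / b_t) * g_t,
    where b_t^2 = b_0^2 + sum of ||g_s||^2 over steps s <= t."""
    x = np.asarray(x0, dtype=float)
    b_sq = b0 ** 2
    for _ in range(n_steps):
        g = grad(x)
        b_sq += np.dot(g, g)             # accumulate squared gradient norms
        x = x - eta / np.sqrt(b_sq) * g
    return x

# Illustrative non-convex objective f(x) = sum_i x_i^2 + sin^2(x_i).
grad = lambda x: 2 * x + np.sin(2 * x)
print(adagrad_norm(grad, x0=3.0 * np.ones(10)))  # converges near 0
```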
no code implementations • 30 Jan 2022 • Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor
This parallelization gain is fundamentally altered by the presence of adversarial users: unless there is a super-polynomial number of users, we show a lower bound of $\tilde{\Omega}(\min(S, A) \cdot \alpha^2 / \epsilon^2)$ per-user interactions to learn an $\epsilon$-optimal policy for the good users.
no code implementations • NeurIPS 2021 • Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor
We study the problem of learning a near-optimal policy in reward-mixing MDPs with two reward models.
no code implementations • 22 May 2021 • Alexia Atsidakou, Orestis Papadigenopoulos, Soumya Basu, Constantine Caramanis, Sanjay Shakkottai
Recent work has considered natural variations of the multi-armed bandit problem, where the reward distribution of each arm is a special function of the time elapsed since the arm was last pulled.
1 code implementation • 17 Feb 2021 • Ashish Katiyar, Soumya Basu, Vatsal Shah, Constantine Caramanis
Furthermore, we present a polynomial-time, sample-efficient algorithm that recovers the exact tree when recovery is possible, and otherwise recovers the tree up to the unidentifiability promised by our characterization.
no code implementations • NeurIPS 2021 • Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor
In this work, we consider the regret minimization problem for reinforcement learning in latent Markov Decision Processes (LMDPs).
no code implementations • NeurIPS 2021 • Orestis Papadigenopoulos, Constantine Caramanis
A recent line of research focuses on the stochastic multi-armed bandit (MAB) problem in the case where temporal correlations of specific structure are imposed between the player's actions and the reward distributions of the arms (Kleinberg and Immorlica [FOCS18], Basu et al. [NeurIPS19]).
no code implementations • 27 Jan 2021 • Jiacheng Zhuo, Jeongyeol Kwon, Nhat Ho, Constantine Caramanis
We consider solving the low-rank matrix sensing problem with the Factorized Gradient Descent (FGD) method when the true rank is unknown and over-specified, a setting we refer to as over-parameterized matrix sensing.
no code implementations • NeurIPS 2020 • Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari
In this paper we study the problem of escaping from saddle points and achieving second-order optimality in a decentralized setting where a group of agents collaborate to minimize their aggregate objective function.
no code implementations • 7 Jul 2020 • Jiacheng Zhuo, Liu Liu, Constantine Caramanis
However, existing conditional gradient (CG) type methods are not robust to data corruption.
1 code implementation • NeurIPS 2020 • Ajil Jalal, Liu Liu, Alexandros G. Dimakis, Constantine Caramanis
In analogy to classical compressed sensing, here we assume a generative model as a prior, that is, we assume the vector is represented by a deep generative model $G: \mathbb{R}^k \rightarrow \mathbb{R}^n$.
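A minimal sketch of recovery under a generative prior, under illustrative assumptions: the trained generator is replaced by a random two-layer ReLU network, and the latent code is found by plain gradient descent on $\|A G(z) - y\|^2$ with hand-written backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, m = 5, 100, 40                       # latent, ambient, measurement dims
W1 = rng.normal(size=(64, k)) / np.sqrt(k)
W2 = rng.normal(size=(n, 64)) / np.sqrt(64)

def G(z):                                  # stand-in generator: W2 relu(W1 z)
    return W2 @ np.maximum(W1 @ z, 0)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian measurements
z_true = rng.normal(size=k)
y = A @ G(z_true)                          # m < n: underdetermined system

z = rng.normal(size=k)
for _ in range(5000):                      # gradient descent on ||A G(z) - y||^2
    h = W1 @ z
    r = A @ (W2 @ np.maximum(h, 0)) - y
    gh = (W2.T @ (A.T @ r)) * (h > 0)      # backprop through the generator
    z -= 1e-3 * (W1.T @ gh)

print(np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true)))
```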
no code implementations • 10 Jun 2020 • Ashish Katiyar, Vatsal Shah, Constantine Caramanis
We consider the task of learning Ising models when the signs of different random variables are flipped independently with possibly unequal, unknown probabilities.
no code implementations • 4 Jun 2020 • Jeongyeol Kwon, Nhat Ho, Constantine Caramanis
In the low SNR regime where the SNR is below $\mathcal{O}((d/n)^{1/4})$, we show that EM converges to a $\mathcal{O}((d/n)^{1/4})$ neighborhood of the true parameters, after $\mathcal{O}((n/d)^{1/2})$ iterations.
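A minimal sketch of the EM iteration for the symmetric two-component mixture of linear regressions ($y = \pm\langle\beta^*, x\rangle + \epsilon$) that these SNR-dependent rates concern; dimensions and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 2000, 10, 0.5
beta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
signs = rng.choice([-1, 1], size=n)        # hidden component labels
y = signs * (X @ beta_star) + sigma * rng.normal(size=n)

beta = rng.normal(size=d)                  # random initialization
for _ in range(50):
    # E-step: (posterior of +) minus (posterior of -) for each sample.
    w = np.tanh(y * (X @ beta) / sigma**2)
    # M-step: weighted least squares, in closed form.
    beta = np.linalg.solve(X.T @ X, X.T @ (w * y))

# Recovery is up to the global sign ambiguity of the mixture.
err = min(np.linalg.norm(beta - beta_star), np.linalg.norm(beta + beta_star))
print(err / np.linalg.norm(beta_star))
```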
no code implementations • 6 Mar 2020 • Soumya Basu, Orestis Papadigenopoulos, Constantine Caramanis, Sanjay Shakkottai
Assuming knowledge of the context distribution and the mean reward of each arm-context pair, we cast the problem as an online bipartite matching problem, where the right-vertices (contexts) arrive stochastically and the left-vertices (arms) are blocked for a finite number of rounds each time they are matched.
no code implementations • 2 Feb 2020 • Jeongyeol Kwon, Constantine Caramanis
A fundamental previous result established that separation of $\Omega(\sqrt{\log k})$ is necessary and sufficient for identifiability of the parameters with polynomial sample complexity (Regev and Vijayaraghavan, 2017).
1 code implementation • NeurIPS 2019 • Qi Lei, Jiacheng Zhuo, Constantine Caramanis, Inderjit S. Dhillon, Alexandros G. Dimakis
We propose a generalized variant of Frank-Wolfe algorithm for solving a class of sparse/low-rank optimization problems.
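For reference, a minimal sketch of the vanilla Frank-Wolfe template such variants build on, specialized to an $\ell_1$-ball constraint: the projection-free linear-minimization oracle returns a signed vertex of the ball. The problem instance is illustrative.

```python
import numpy as np

def frank_wolfe_l1(grad, x0, radius=1.0, n_steps=200):
    """Minimize a smooth convex f over the l1-ball of given radius."""
    x = np.asarray(x0, dtype=float)
    for t in range(n_steps):
        g = grad(x)
        # Linear minimization oracle over the l1-ball: a signed vertex.
        i = int(np.argmax(np.abs(g)))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])
        x = x + 2.0 / (t + 2) * (s - x)    # standard step size 2/(t+2)
    return x

# Illustrative least-squares problem with a sparse solution.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50))
x_true = np.zeros(50); x_true[:3] = [0.4, -0.3, 0.2]
b = A @ x_true
x = frank_wolfe_l1(lambda x: A.T @ (A @ x - b), np.zeros(50), radius=1.0)
print(np.linalg.norm(x - x_true))
```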
no code implementations • 17 Oct 2019 • Jiacheng Zhuo, Qi Lei, Alexandros G. Dimakis, Constantine Caramanis
Large-scale machine learning training faces two major challenges, specifically for nuclear-norm constrained problems on distributed systems: synchronization slowdown due to straggling workers, and high communication costs.
1 code implementation • NeurIPS 2020 • Matthew Faw, Rajat Sen, Karthikeyan Shanmugam, Constantine Caramanis, Sanjay Shakkottai
We consider a covariate shift problem where one has access to several different training datasets for the same learning problem and a small validation set which possibly differs from all the individual training distributions.
no code implementations • NeurIPS 2016 • Xinyang Yi, Zhaoran Wang, Zhuoran Yang, Constantine Caramanis, Han Liu
We consider the weakly supervised binary classification problem where the labels are randomly flipped with probability $1-\alpha$.
no code implementations • ICML 2020 • Jessica Hoffmann, Soumya Basu, Surbhi Goel, Constantine Caramanis
When the conditions are met, i.e., when the graphs are connected with at least three edges, we give an efficient algorithm for learning the weights of both graphs with optimal sample complexity (up to log factors).
1 code implementation • 6 Jun 2019 • Qi Lei, Jiacheng Zhuo, Constantine Caramanis, Inderjit S. Dhillon, Alexandros G. Dimakis
We propose a variant of the Frank-Wolfe algorithm for solving a class of sparse/low-rank optimization problems.
no code implementations • 28 May 2019 • Jeongyeol Kwon, Constantine Caramanis
In particular, our results imply exact recovery as $\sigma \rightarrow 0$, in contrast to most previous local convergence results for EM, where the statistical error scales with the norm of the parameters.
no code implementations • 6 Mar 2019 • Jessica Hoffmann, Constantine Caramanis
Finally, we give a polynomial time algorithm for learning the weights of general bounded-degree graphs in the limited-noise setting.
no code implementations • 25 Jan 2019 • Ashish Katiyar, Jessica Hoffmann, Constantine Caramanis
If we observe realizations of the variables, we can compute the covariance matrix, and it is well known that the support of the inverse covariance matrix corresponds to the edges of the graphical model.
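A minimal sketch of that well-known fact, on an illustrative chain-structured Gaussian: thresholding the estimated inverse covariance (precision) matrix recovers exactly the chain's edges.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
# Chain graph 0-1-2-3-4, encoded in a diagonally dominant precision matrix.
Theta = np.eye(d)
for i in range(d - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = -0.4
Sigma = np.linalg.inv(Theta)

# Sample and estimate the precision matrix from data.
X = rng.multivariate_normal(np.zeros(d), Sigma, size=20000)
Theta_hat = np.linalg.inv(np.cov(X.T))

# Thresholding the off-diagonal entries recovers the chain's edge set.
edges = {(i, j) for i in range(d) for j in range(i + 1, d)
         if abs(Theta_hat[i, j]) > 0.1}
print(edges)  # the chain edges (0,1), (1,2), (2,3), (3,4)
```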
no code implementations • 24 Jan 2019 • Liu Liu, Tianyang Li, Constantine Caramanis
We define a natural condition we call the Robust Descent Condition (RDC), and show that if a gradient estimator satisfies the RDC, then Robust Hard Thresholding (iterative hard thresholding (IHT) using this gradient estimator) is guaranteed to obtain good statistical rates.
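A minimal sketch of the IHT template with a pluggable gradient estimator; the coordinatewise trimmed mean below is one illustrative robust estimator, not necessarily one satisfying the paper's RDC, and all problem sizes are illustrative.

```python
import numpy as np

def trimmed_mean(G, frac=0.1):
    """Coordinatewise trimmed mean of per-sample gradients (robust sketch)."""
    k = int(frac * len(G))
    S = np.sort(G, axis=0)
    return S[k:len(G) - k].mean(axis=0)

def robust_iht(X, y, s, n_steps=100, eta=0.5):
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_steps):
        G = (X @ beta - y)[:, None] * X        # per-sample gradients
        beta = beta - eta * trimmed_mean(G)    # robust gradient step
        small = np.argsort(np.abs(beta))[:-s]  # hard threshold: keep top s
        beta[small] = 0.0
    return beta

rng = np.random.default_rng(0)
n, d, s = 500, 100, 5
X = rng.normal(size=(n, d))
beta_star = np.zeros(d); beta_star[:s] = 1.0
y = X @ beta_star + 0.1 * rng.normal(size=n)
y[:25] = 100.0                                 # a constant fraction corrupted
print(np.linalg.norm(robust_iht(X, y, s) - beta_star))
```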
no code implementations • 12 Oct 2018 • Jeongyeol Kwon, Wei Qian, Constantine Caramanis, Yudong Chen, Damek Davis
Recent results established that EM enjoys global convergence for Gaussian Mixture Models.
no code implementations • NeurIPS 2020 • Murat Kocaoglu, Sanjay Shakkottai, Alexandros G. Dimakis, Constantine Caramanis, Sriram Vishwanath
We study the problem of discovering the simplest latent variable that can make two observed discrete variables conditionally independent.
no code implementations • 29 May 2018 • Liu Liu, Yanyao Shen, Tianyang Li, Constantine Caramanis
Our algorithm recovers the true sparse parameters with sub-linear sample complexity, in the presence of a constant fraction of arbitrary corruptions.
no code implementations • 23 May 2018 • Tianyang Li, Anastasios Kyrillidis, Liu Liu, Constantine Caramanis
We present a novel statistical inference framework for convex empirical risk minimization, using approximate stochastic Newton steps.
no code implementations • 22 Nov 2017 • Guy Tennenholtz, Constantine Caramanis, Shie Mannor
We devise a simple policy that only vaccinates neighbors of infected nodes and is optimal on regular trees and on general graphs for a sufficiently large budget.
no code implementations • 21 May 2017 • Tianyang Li, Liu Liu, Anastasios Kyrillidis, Constantine Caramanis
We present a novel method for frequentist statistical inference in $M$-estimation problems, based on stochastic gradient descent (SGD) with a fixed step size: we demonstrate that the average of such SGD sequences can be used for statistical inference, after proper scaling.
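A minimal sketch of the core recipe on an illustrative linear regression: run fixed-step SGD, average the iterates after a burn-in, and use the fluctuation of such averages across replicated runs as the basis for frequentist inference.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 5
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ theta_star + rng.normal(size=n)

def sgd_average(step=0.05, burn_in=500):
    """Fixed-step SGD; return the average of the post-burn-in iterates."""
    theta = np.zeros(d)
    iterates = []
    for t in range(n):
        i = rng.integers(n)
        g = (X[i] @ theta - y[i]) * X[i]       # one-sample gradient
        theta = theta - step * g
        if t >= burn_in:
            iterates.append(theta.copy())
    return np.mean(iterates, axis=0)

# Averages from replicated runs fluctuate around the M-estimator (here OLS);
# that fluctuation is what the inference procedure exploits.
replicates = np.array([sgd_average() for _ in range(20)])
print(replicates.mean(axis=0))
print(theta_star)
```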
no code implementations • 12 Sep 2016 • Dohyung Park, Anastasios Kyrillidis, Constantine Caramanis, Sujay Sanghavi
We consider the non-square matrix sensing problem, under restricted isometry property (RIP) assumptions.
no code implementations • 19 Aug 2016 • Xinyang Yi, Constantine Caramanis, Sujay Sanghavi
We give a tractable algorithm for the mixed linear equation problem, and show that under some technical conditions, our algorithm is guaranteed to solve the problem exactly with sample complexity linear in the dimension, and polynomial in $k$, the number of components.
no code implementations • 10 Jun 2016 • Dohyung Park, Anastasios Kyrillidis, Constantine Caramanis, Sujay Sanghavi
We study such a parameterization for the optimization of generic convex objectives $f$, and focus on first-order gradient descent algorithmic solutions.
no code implementations • 4 Jun 2016 • Dohyung Park, Anastasios Kyrillidis, Srinadh Bhojanapalli, Constantine Caramanis, Sujay Sanghavi
We study the projected gradient descent method on low-rank matrix problems with a strongly convex objective.
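A minimal sketch of gradient descent on a low-rank factorization $M \approx UU^\top$, the non-convex template this line of work analyzes; the fully observed PSD instance, initialization, and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 30, 3
U_star = rng.normal(size=(d, r))
M = U_star @ U_star.T                    # ground-truth rank-r PSD matrix

# Gradient descent on the factor U for f(U) = ||U U^T - M||_F^2 / 4.
U = 0.1 * rng.normal(size=(d, r))        # small random initialization
eta = 1.0 / (4 * np.linalg.norm(M, 2))   # step scaled by the spectral norm
for _ in range(500):
    grad = (U @ U.T - M) @ U
    U = U - eta * grad

print(np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))  # near 0
```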
no code implementations • NeurIPS 2016 • Xinyang Yi, Dohyung Park, Yudong Chen, Constantine Caramanis
For the partially observed case, we show the complexity of our algorithm is no more than $\mathcal{O}(r^4d \log d \log(1/\varepsilon))$.
no code implementations • NeurIPS 2015 • Xinyang Yi, Constantine Caramanis
In particular, regularizing the M-step using the state-of-the-art high-dimensional prescriptions (e.g., Wainwright (2014)) is not guaranteed to provide this balance.
no code implementations • NeurIPS 2015 • Xinyang Yi, Zhaoran Wang, Constantine Caramanis, Han Liu
This model is known as the single-index model in statistics, and, among other things, it represents a significant generalization of one-bit compressed sensing.
no code implementations • 19 Feb 2015 • Xinyang Yi, Constantine Caramanis, Eric Price
Binary embedding is a nonlinear dimension-reduction methodology where high-dimensional data are embedded into the Hamming cube while preserving the structure of the original space.
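A minimal sketch of the classical construction: take signs of a Gaussian projection, and the normalized Hamming distance between the bit vectors concentrates around the angle (divided by $\pi$) between the original points.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 200, 4000                     # ambient dimension, number of bits
x, y = rng.normal(size=d), rng.normal(size=d)

# Embed into the Hamming cube via signs of random projections.
A = rng.normal(size=(m, d))
bx, by = np.sign(A @ x), np.sign(A @ y)

hamming = np.mean(bx != by)          # fraction of disagreeing bits
angle = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y))) / np.pi
print(f"normalized Hamming: {hamming:.3f}, angle/pi: {angle:.3f}")
```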
no code implementations • NeurIPS 2014 • Dohyung Park, Constantine Caramanis, Sujay Sanghavi
We consider the problem of subspace clustering: given points that lie on or near the union of many low-dimensional linear subspaces, recover the subspaces.
no code implementations • 6 Feb 2014 • Eli A. Meirom, Chris Milling, Constantine Caramanis, Shie Mannor, Ariel Orda, Sanjay Shakkottai
Our algorithm requires only local-neighbor knowledge of this graph, and in a broad array of settings that we describe, succeeds even when false negatives and false positives make up an overwhelming fraction of the data available.
no code implementations • 25 Dec 2013 • Yudong Chen, Xinyang Yi, Constantine Caramanis
We consider the mixed regression problem with two components, under adversarial and stochastic noise.
no code implementations • 14 Oct 2013 • Xinyang Yi, Constantine Caramanis, Sujay Sanghavi
Mixed linear regression involves the recovery of two (or more) unknown vectors from unlabeled linear measurements; that is, where each sample comes from exactly one of the vectors, but we do not know which one.
no code implementations • NeurIPS 2013 • Ioannis Mitliagkas, Constantine Caramanis, Prateek Jain
Standard algorithms require $O(p^2)$ memory; meanwhile no algorithm can do better than $O(kp)$ memory, since this is what the output itself requires.
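A minimal sketch of a streaming estimator with $O(kp)$ state, in the spirit of a block power method over mini-batches; all sizes are illustrative, and only the $p \times k$ iterate is stored across batches.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, n_batches, batch = 200, 5, 200, 50

# Spiked covariance model: k planted directions plus isotropic noise.
V = np.linalg.qr(rng.normal(size=(p, k)))[0]

Q = np.linalg.qr(rng.normal(size=(p, k)))[0]    # O(kp) memory state
for _ in range(n_batches):
    # Each mini-batch is seen once and then discarded.
    X = rng.normal(size=(batch, k)) @ (2.0 * V.T) + rng.normal(size=(batch, p))
    Q = X.T @ (X @ Q) / batch                   # one block power step
    Q = np.linalg.qr(Q)[0]                      # re-orthonormalize

# Overlap with the planted subspace: singular values near 1 when recovered.
print(np.linalg.svd(V.T @ Q, compute_uv=False))
```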
no code implementations • 5 Jun 2012 • Yudong Chen, Constantine Caramanis
Many models for sparse regression typically assume that the covariates are known completely, and without noise.
no code implementations • 10 Feb 2011 • Yudong Chen, Huan Xu, Constantine Caramanis, Sujay Sanghavi
Moreover, we show by an information-theoretic argument that our guarantees are nearly optimal in terms of the fraction of sampled entries on the authentic columns, the fraction of corrupted columns, and the rank of the underlying matrix.
1 code implementation • NeurIPS 2010 • Huan Xu, Constantine Caramanis, Sujay Sanghavi
Singular Value Decomposition (and Principal Component Analysis) is one of the most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a well-known, well-documented sensitivity to outliers.
no code implementations • NeurIPS 2008 • Huan Xu, Constantine Caramanis, Shie Mannor
We generalize this robust formulation to consider more general uncertainty sets, which all lead to tractable convex optimization problems.