Search Results for author: Constantine Caramanis

Found 61 papers, 7 papers with code

Beyond First-Order Tweedie: Solving Inverse Problems using Latent Diffusion

no code implementations1 Dec 2023 Litu Rout, Yujia Chen, Abhishek Kumar, Constantine Caramanis, Sanjay Shakkottai, Wen-Sheng Chu

To the best of our knowledge, this is the first work to offer an efficient second-order approximation for solving inverse problems using latent diffusion and for editing real-world images with corruptions.

Text-Guided Image Editing

Prospective Side Information for Latent MDPs

no code implementations11 Oct 2023 Jeongyeol Kwon, Yonathan Efroni, Shie Mannor, Constantine Caramanis

In such an environment, the latent information remains fixed throughout each episode, since the identity of the user does not change during an interaction.

Decision Making

Finite-Time Logarithmic Bayes Regret Upper Bounds

no code implementations15 Jun 2023 Alexia Atsidakou, Branislav Kveton, Sumeet Katariya, Constantine Caramanis, Sujay Sanghavi

In a multi-armed bandit, we obtain $O(c_\Delta \log n)$ and $O(c_h \log^2 n)$ upper bounds for an upper confidence bound algorithm, where $c_h$ and $c_\Delta$ are constants depending on the prior distribution and the gaps of bandit instances sampled from it, respectively.

Understanding Lexical Biases when Identifying Gang-related Social Media Communications

no code implementations22 Apr 2023 Dhiraj Murthy, Constantine Caramanis, Koustav Rudra

Individuals involved in gang-related activity use mainstream social media including Facebook and Twitter to express taunts and threats as well as grief and memorializing.

Beyond Uniform Smoothness: A Stopped Analysis of Adaptive SGD

no code implementations13 Feb 2023 Matthew Faw, Litu Rout, Constantine Caramanis, Sanjay Shakkottai

Despite this richness, an emerging line of work achieves the $\widetilde{\mathcal{O}}(\frac{1}{\sqrt{T}})$ rate of convergence when the noise of the stochastic gradients is deterministically and uniformly bounded.

Reward-Mixing MDPs with a Few Latent Contexts are Learnable

no code implementations5 Oct 2022 Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor

We consider episodic reinforcement learning in reward-mixing Markov decision processes (RMMDPs): at the beginning of every episode nature randomly picks a latent reward model among $M$ candidates and an agent interacts with the MDP throughout the episode for $H$ time steps.

Tractable Optimality in Episodic Latent MABs

no code implementations5 Oct 2022 Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor

Then, through a method-of-moments approach, we design a procedure that provably learns a near-optimal policy with $O(\texttt{poly}(A) + \texttt{poly}(M, H)^{\min(M, H)})$ interactions.

Non-Stationary Bandits under Recharging Payoffs: Improved Planning with Sublinear Regret

no code implementations29 May 2022 Orestis Papadigenopoulos, Constantine Caramanis, Sanjay Shakkottai

Even assuming prior knowledge of the mean payoff functions, computing an optimal plan in the above model is NP-hard, while the state of the art is a $1/4$-approximation algorithm for the case where at most one arm can be played per round.

Scheduling

Contextual Pandora's Box

no code implementations26 May 2022 Alexia Atsidakou, Constantine Caramanis, Evangelia Gergatsouli, Orestis Papadigenopoulos, Christos Tzamos

Pandora's Box is a fundamental stochastic optimization problem, where the decision-maker must find a good alternative while minimizing the search cost of exploring the value of each alternative.

Multi-Armed Bandits Stochastic Optimization

The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance

no code implementations11 Feb 2022 Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel Ward

We study convergence rates of AdaGrad-Norm as an exemplar of adaptive stochastic gradient methods (SGD), where the step sizes change based on observed stochastic gradients, for minimizing non-convex, smooth objectives.
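
A minimal sketch of the AdaGrad-Norm update studied here: a single scalar step size shrinks with the cumulative squared norm of the observed stochastic gradients. The objective, gradient oracle, and hyperparameters below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

def adagrad_norm(grad_fn, x0, eta=1.0, b0=1e-2, steps=2000):
    """AdaGrad-Norm: one scalar adaptive step size shared by all coordinates."""
    x = x0.copy()
    accum = b0 ** 2
    for _ in range(steps):
        g = grad_fn(x)                     # stochastic gradient oracle
        accum += g @ g                     # running sum of ||g_t||^2
        x -= (eta / np.sqrt(accum)) * g    # step size decays adaptively
    return x

# Toy usage: noisy gradients of the smooth objective f(x) = ||x||^2 / 2.
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
print(np.linalg.norm(adagrad_norm(noisy_grad, np.ones(10))))
```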

Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms

no code implementations30 Jan 2022 Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor

This parallelization gain is fundamentally altered by the presence of adversarial users: unless there is a super-polynomial number of users, we show a lower bound of $\tilde{\Omega}(\min(S, A) \cdot \alpha^2 / \epsilon^2)$ {\it per-user} interactions to learn an $\epsilon$-optimal policy for the good users.

Collaborative Filtering Multi-Armed Bandits +1

Combinatorial Blocking Bandits with Stochastic Delays

no code implementations22 May 2021 Alexia Atsidakou, Orestis Papadigenopoulos, Soumya Basu, Constantine Caramanis, Sanjay Shakkottai

Recent work has considered natural variations of the multi-armed bandit problem, where the reward distribution of each arm is a special function of the time passed since its last pulling.

Blocking

Recurrent Submodular Welfare and Matroid Blocking Semi-Bandits

no code implementations NeurIPS 2021 Orestis Papadigenopoulos, Constantine Caramanis

A recent line of research focuses on the study of stochastic multi-armed bandits (MAB), in the case where temporal correlations of specific structure are imposed between the player's actions and the reward distributions of the arms.

Blocking Multi-Armed Bandits +1

Recoverability Landscape of Tree Structured Markov Random Fields under Symmetric Noise

1 code implementation17 Feb 2021 Ashish Katiyar, Soumya Basu, Vatsal Shah, Constantine Caramanis

Furthermore, we present a polynomial time, sample efficient algorithm that recovers the exact tree when this is possible, or up to the unidentifiability as promised by our characterization, when full recoverability is impossible.

RL for Latent MDPs: Regret Guarantees and a Lower Bound

no code implementations NeurIPS 2021 Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor

In this work, we consider the regret minimization problem for reinforcement learning in latent Markov Decision Processes (LMDP).

Recurrent Submodular Welfare and Matroid Blocking Bandits

no code implementations NeurIPS 2021 Orestis Papadigenopoulos, Constantine Caramanis

A recent line of research focuses on the study of the stochastic multi-armed bandits problem (MAB), in the case where temporal correlations of specific structure are imposed between the player's actions and the reward distributions of the arms (Kleinberg and Immorlica [FOCS18], Basu et al. [NeurIPS19]).

Blocking Multi-Armed Bandits +1

On the computational and statistical complexity of over-parameterized matrix sensing

no code implementations27 Jan 2021 Jiacheng Zhuo, Jeongyeol Kwon, Nhat Ho, Constantine Caramanis

We consider solving the low-rank matrix sensing problem with the Factorized Gradient Descent (FGD) method when the true rank is unknown and over-specified, which we refer to as over-parameterized matrix sensing.
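
A toy rendering of this setup, assuming Gaussian sensing matrices and a PSD target: the iterate is parameterized as $UU^T$ with a factor rank larger than the true rank. Dimensions, step size, and iteration count are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r_true, k, m = 20, 2, 5, 600              # k > r_true: over-parameterized
U_star = rng.standard_normal((d, r_true))
M_star = U_star @ U_star.T                   # PSD ground truth
A = rng.standard_normal((m, d, d))           # Gaussian sensing matrices
y = np.einsum('mij,ij->m', A, M_star)        # measurements y_i = <A_i, M*>

U = 0.1 * rng.standard_normal((d, k))        # small random initialization
step = 2e-3
for _ in range(2000):
    resid = np.einsum('mij,ij->m', A, U @ U.T) - y
    G = np.einsum('m,mij->ij', resid, A) / m  # gradient w.r.t. the matrix iterate
    U -= step * (G + G.T) @ U                 # chain rule through U U^T
print(np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star))
```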

Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking

no code implementations NeurIPS 2020 Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari

In this paper we study the problem of escaping from saddle points and achieving second-order optimality in a decentralized setting where a group of agents collaborate to minimize their aggregate objective function.

Robust Compressed Sensing using Generative Models

1 code implementation NeurIPS 2020 Ajil Jalal, Liu Liu, Alexandros G. Dimakis, Constantine Caramanis

In analogy to classical compressed sensing, here we assume a generative model as a prior, that is, we assume the vector is represented by a deep generative model $G: \mathbb{R}^k \rightarrow \mathbb{R}^n$.
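
One way to picture the recovery step: optimize the latent code $z$ so that $G(z)$ fits the measurements. The small untrained network below is a hypothetical stand-in for a pretrained deep generative model, and all sizes are illustrative:

```python
import torch

torch.manual_seed(0)
n, k, m = 200, 10, 50
# Hypothetical stand-in for a pretrained generator G: R^k -> R^n.
G = torch.nn.Sequential(
    torch.nn.Linear(k, 64), torch.nn.ReLU(), torch.nn.Linear(64, n)
)
x_true = G(torch.randn(k)).detach()          # a signal in the range of G
A = torch.randn(m, n) / m ** 0.5             # Gaussian measurement matrix
y = A @ x_true

z = torch.zeros(k, requires_grad=True)       # recover by searching latent space
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.sum((A @ G(z) - y) ** 2)    # measurement misfit
    loss.backward()
    opt.step()
print(torch.norm(G(z).detach() - x_true) / torch.norm(x_true))
```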

Robust Estimation of Tree Structured Ising Models

no code implementations10 Jun 2020 Ashish Katiyar, Vatsal Shah, Constantine Caramanis

We consider the task of learning Ising models when the signs of different random variables are flipped independently with possibly unequal, unknown probabilities.

On the Minimax Optimality of the EM Algorithm for Learning Two-Component Mixed Linear Regression

no code implementations4 Jun 2020 Jeongyeol Kwon, Nhat Ho, Constantine Caramanis

In the low SNR regime where the SNR is below $\mathcal{O}((d/n)^{1/4})$, we show that EM converges to a $\mathcal{O}((d/n)^{1/4})$ neighborhood of the true parameters, after $\mathcal{O}((n/d)^{1/2})$ iterations.

regression
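
A minimal sketch of the EM iteration for the symmetric two-component model $y_i = s_i \langle x_i, \beta^* \rangle + \text{noise}$ with hidden signs $s_i = \pm 1$: a soft E-step computes posterior responsibilities, and the M-step reduces to least squares on signed pseudo-labels. Dimensions and the noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 2000, 5, 0.5
beta_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))
s = rng.choice([-1.0, 1.0], size=n)                  # hidden component labels
y = s * (X @ beta_star) + sigma * rng.standard_normal(n)

beta = rng.standard_normal(d)                        # random initialization
for _ in range(50):
    # E-step: posterior probability that each sample has sign +1
    w = 1.0 / (1.0 + np.exp(-2.0 * y * (X @ beta) / sigma ** 2))
    # M-step: weighted least squares collapses to signed pseudo-labels
    beta = np.linalg.lstsq(X, (2.0 * w - 1.0) * y, rcond=None)[0]

# report error up to the inherent sign ambiguity of the mixture
print(min(np.linalg.norm(beta - beta_star), np.linalg.norm(beta + beta_star)))
```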

Contextual Blocking Bandits

no code implementations6 Mar 2020 Soumya Basu, Orestis Papadigenopoulos, Constantine Caramanis, Sanjay Shakkottai

Assuming knowledge of the context distribution and the mean reward of each arm-context pair, we cast the problem as an online bipartite matching problem, where the right-vertices (contexts) arrive stochastically and the left-vertices (arms) are blocked for a finite number of rounds each time they are matched.

Blocking Novel Concepts +1
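
A small simulation of a greedy policy in this setting, assuming known mean rewards as in the abstract. The arm count, context distribution, and blocking delay are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
K, C, T, delay = 4, 3, 1000, 3
mu = rng.uniform(size=(C, K))            # known mean reward per context-arm pair
blocked_until = np.zeros(K, dtype=int)   # round at which each arm frees up

total = 0.0
for t in range(T):
    c = rng.integers(C)                  # stochastic context arrival
    avail = np.flatnonzero(blocked_until <= t)
    if avail.size == 0:
        continue                         # every arm is blocked: skip the round
    a = avail[np.argmax(mu[c, avail])]   # greedy match in the known means
    total += rng.binomial(1, mu[c, a])   # Bernoulli reward draw
    blocked_until[a] = t + 1 + delay     # arm unavailable for `delay` rounds
print("average reward:", total / T)
```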

The EM Algorithm gives Sample-Optimality for Learning Mixtures of Well-Separated Gaussians

no code implementations2 Feb 2020 Jeongyeol Kwon, Constantine Caramanis

A fundamental previous result established that separation of $\Omega(\sqrt{\log k})$ is necessary and sufficient for identifiability of the parameters with polynomial sample complexity (Regev and Vijayaraghavan, 2017).

Communication-Efficient Asynchronous Stochastic Frank-Wolfe over Nuclear-norm Balls

no code implementations17 Oct 2019 Jiacheng Zhuo, Qi Lei, Alexandros G. Dimakis, Constantine Caramanis

Large-scale machine learning training faces two challenges, particularly acute for nuclear-norm constrained problems on distributed systems: synchronization slowdown due to straggling workers, and high communication costs.

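As background (this is the classical serial method, not the asynchronous distributed variant the paper develops), a minimal Frank-Wolfe step over a nuclear-norm ball: the linear minimization oracle needs only the top singular vector pair of the gradient, so every update is rank-one. The matrix-completion objective and radius below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, tau = 30, 2, 60.0                  # tau: nuclear-norm radius (illustrative)
M_star = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
mask = rng.uniform(size=(d, d)) < 0.5    # observed entries

X = np.zeros((d, d))
for t in range(200):
    G = mask * (X - M_star)              # gradient of 0.5*||P_obs(X - M*)||_F^2
    U, s, Vt = np.linalg.svd(G)
    S = -tau * np.outer(U[:, 0], Vt[0])  # LMO over the ball: -tau * u1 v1^T
    gamma = 2.0 / (t + 2)                # standard Frank-Wolfe step schedule
    X = (1 - gamma) * X + gamma * S      # convex combination keeps X feasible
print(np.linalg.norm(mask * (X - M_star)) / np.linalg.norm(mask * M_star))
```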

Mix and Match: An Optimistic Tree-Search Approach for Learning Models from Mixture Distributions

1 code implementation NeurIPS 2020 Matthew Faw, Rajat Sen, Karthikeyan Shanmugam, Constantine Caramanis, Sanjay Shakkottai

We consider a covariate shift problem where one has access to several different training datasets for the same learning problem and a small validation set which possibly differs from all the individual training distributions.

Learning Mixtures of Graphs from Epidemic Cascades

no code implementations ICML 2020 Jessica Hoffmann, Soumya Basu, Surbhi Goel, Constantine Caramanis

When the conditions are met, i.e., when the graphs are connected with at least three edges, we give an efficient algorithm for learning the weights of both graphs with optimal sample complexity (up to log factors).

Primal-Dual Block Frank-Wolfe

1 code implementation6 Jun 2019 Qi Lei, Jiacheng Zhuo, Constantine Caramanis, Inderjit S. Dhillon, Alexandros G. Dimakis

We propose a variant of the Frank-Wolfe algorithm for solving a class of sparse/low-rank optimization problems.

General Classification Multi-class Classification +1

EM Converges for a Mixture of Many Linear Regressions

no code implementations28 May 2019 Jeongyeol Kwon, Constantine Caramanis

In particular, our results imply exact recovery as $\sigma \rightarrow 0$, in contrast to most previous local convergence results for EM, where the statistical error scaled with the norm of parameters.

regression

Learning Graphs from Noisy Epidemic Cascades

no code implementations6 Mar 2019 Jessica Hoffmann, Constantine Caramanis

Finally, we give a polynomial time algorithm for learning the weights of general bounded-degree graphs in the limited-noise setting.

Robust Estimation of Tree Structured Gaussian Graphical Model

no code implementations25 Jan 2019 Ashish Katiyar, Jessica Hoffmann, Constantine Caramanis

If we observe realizations of the variables, we can compute the covariance matrix, and it is well known that the support of the inverse covariance matrix corresponds to the edges of the graphical model.
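
A quick numerical check of the fact quoted above, using a hypothetical path-graph precision matrix: the covariance is dense, but the support of its inverse recovers exactly the tree edges:

```python
import numpy as np

# Precision matrix of a Gaussian tree (the path graph 0-1-2-3).
Theta = np.array([[ 1.0, -0.4,  0.0,  0.0],
                  [-0.4,  1.0, -0.4,  0.0],
                  [ 0.0, -0.4,  1.0, -0.4],
                  [ 0.0,  0.0, -0.4,  1.0]])
Sigma = np.linalg.inv(Theta)                    # covariance: generally dense
recovered = np.abs(np.linalg.inv(Sigma)) > 1e-8
print(recovered.astype(int))                    # nonzero pattern = path edges
```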

High Dimensional Robust $M$-Estimation: Arbitrary Corruption and Heavy Tails

no code implementations24 Jan 2019 Liu Liu, Tianyang Li, Constantine Caramanis

We define a natural condition we call the Robust Descent Condition (RDC), and show that if a gradient estimator satisfies the RDC, then Robust Hard Thresholding (IHT using this gradient estimator), is guaranteed to obtain good statistical rates.

regression
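
A minimal sketch of the IHT template with a robust gradient estimator; the coordinate-wise trimmed mean below is one plausible estimator in the spirit of the RDC, and all problem sizes, the trimming fraction, and the step size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s, eps = 500, 50, 5, 0.1
beta_star = np.zeros(d); beta_star[:s] = 1.0
X = rng.standard_normal((n, d))
y = X @ beta_star + 0.1 * rng.standard_normal(n)
y[: int(eps * n)] += 50.0 * rng.standard_normal(int(eps * n))   # corrupted rows

def trimmed_mean(G, frac):
    # drop the largest and smallest `frac` fraction per coordinate, then average
    k = int(frac * G.shape[0])
    return np.sort(G, axis=0)[k : G.shape[0] - k].mean(axis=0)

beta = np.zeros(d)
for _ in range(100):
    g = trimmed_mean((X @ beta - y)[:, None] * X, eps)   # robust gradient estimate
    beta = beta - 0.5 * g                                # gradient step
    keep = np.argsort(np.abs(beta))[-s:]                 # hard threshold: keep s
    thresholded = np.zeros(d); thresholded[keep] = beta[keep]
    beta = thresholded
print(np.linalg.norm(beta - beta_star))
```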

Applications of Common Entropy for Causal Inference

no code implementations NeurIPS 2020 Murat Kocaoglu, Sanjay Shakkottai, Alexandros G. Dimakis, Constantine Caramanis, Sriram Vishwanath

We study the problem of discovering the simplest latent variable that can make two observed discrete variables conditionally independent.

Causal Inference

High Dimensional Robust Sparse Regression

no code implementations29 May 2018 Liu Liu, Yanyao Shen, Tianyang Li, Constantine Caramanis

Our algorithm recovers the true sparse parameters with sub-linear sample complexity, in the presence of a constant fraction of arbitrary corruptions.

regression

Approximate Newton-based statistical inference using only stochastic gradients

no code implementations23 May 2018 Tianyang Li, Anastasios Kyrillidis, Liu Liu, Constantine Caramanis

We present a novel statistical inference framework for convex empirical risk minimization, using approximate stochastic Newton steps.

Time Series Analysis

The Stochastic Firefighter Problem

no code implementations22 Nov 2017 Guy Tennenholtz, Constantine Caramanis, Shie Mannor

We devise a simple policy that only vaccinates neighbors of infected nodes and is optimal on regular trees and on general graphs for a sufficiently large budget.

Statistical inference using SGD

no code implementations21 May 2017 Tianyang Li, Liu Liu, Anastasios Kyrillidis, Constantine Caramanis

We present a novel method for frequentist statistical inference in $M$-estimation problems, based on stochastic gradient descent (SGD) with a fixed step size: we demonstrate that the average of such SGD sequences can be used for statistical inference, after proper scaling.
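
A minimal sketch of the recipe on a least-squares loss: run several fixed-step-size SGD sequences, average the iterates of each, and use the spread of the averages as a rough confidence interval. Burn-in, step size, and sequence lengths are illustrative choices, not the paper's prescriptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 3
theta_star = np.array([1.0, -2.0, 0.5])
X = rng.standard_normal((n, d))
y = X @ theta_star + rng.standard_normal(n)

def sgd_average(steps=2000, eta=0.05, burn=200):
    theta = np.zeros(d)
    iterates = []
    for t in range(steps):
        i = rng.integers(n)
        g = (X[i] @ theta - y[i]) * X[i]   # stochastic gradient at one sample
        theta -= eta * g                   # fixed step size throughout
        if t >= burn:
            iterates.append(theta.copy())
    return np.mean(iterates, axis=0)       # average of the SGD sequence

reps = np.array([sgd_average() for _ in range(20)])
center, spread = reps.mean(axis=0), reps.std(axis=0)
print(center, center - 2 * spread, center + 2 * spread)
```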

Non-square matrix sensing without spurious local minima via the Burer-Monteiro approach

no code implementations12 Sep 2016 Dohyung Park, Anastasios Kyrillidis, Constantine Caramanis, Sujay Sanghavi

We consider the non-square matrix sensing problem, under restricted isometry property (RIP) assumptions.

Solving a Mixture of Many Random Linear Equations by Tensor Decomposition and Alternating Minimization

no code implementations19 Aug 2016 Xinyang Yi, Constantine Caramanis, Sujay Sanghavi

We give a tractable algorithm for the mixed linear equation problem, and show that under some technical conditions, our algorithm is guaranteed to solve the problem exactly with sample complexity linear in the dimension, and polynomial in $k$, the number of components.

Tensor Decomposition

Finding Low-Rank Solutions via Non-Convex Matrix Factorization, Efficiently and Provably

no code implementations10 Jun 2016 Dohyung Park, Anastasios Kyrillidis, Constantine Caramanis, Sujay Sanghavi

We study such parameterization for optimization of generic convex objectives $f$, and focus on first-order, gradient descent algorithmic solutions.

Fast Algorithms for Robust PCA via Gradient Descent

no code implementations NeurIPS 2016 Xinyang Yi, Dohyung Park, Yudong Chen, Constantine Caramanis

For the partially observed case, we show the complexity of our algorithm is no more than $\mathcal{O}(r^4d \log d \log(1/\varepsilon))$.

Matrix Completion

Regularized EM Algorithms: A Unified Framework and Statistical Guarantees

no code implementations NeurIPS 2015 Xinyang Yi, Constantine Caramanis

In particular, regularizing the M-step using the state-of-the-art high-dimensional prescriptions (e.g., Wainwright (2014)) is not guaranteed to provide this balance.

regression

Optimal linear estimation under unknown nonlinear transform

no code implementations NeurIPS 2015 Xinyang Yi, Zhaoran Wang, Constantine Caramanis, Han Liu

This model is known as the single-index model in statistics, and, among other things, it represents a significant generalization of one-bit compressed sensing.

Binary Embedding: Fundamental Limits and Fast Algorithm

no code implementations19 Feb 2015 Xinyang Yi, Constantine Caramanis, Eric Price

Binary embedding is a nonlinear dimension reduction methodology where high dimensional data are embedded into the Hamming cube while preserving the structure of the original space.

Data Structures and Algorithms Information Theory
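
A quick illustration of the classical sign-of-Gaussian-projection embedding: the normalized Hamming distance between the binary codes concentrates around the angle between the points divided by $\pi$. Dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 4096                           # ambient dimension, number of bits
x, z = rng.standard_normal(n), rng.standard_normal(n)
A = rng.standard_normal((m, n))            # Gaussian projection
bx, bz = np.sign(A @ x), np.sign(A @ z)    # embeddings on the Hamming cube

hamming = np.mean(bx != bz)                # normalized Hamming distance
angle = np.arccos(x @ z / (np.linalg.norm(x) * np.linalg.norm(z))) / np.pi
print(hamming, angle)                      # the two quantities should be close
```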

Greedy Subspace Clustering

no code implementations NeurIPS 2014 Dohyung Park, Constantine Caramanis, Sujay Sanghavi

We consider the problem of subspace clustering: given points that lie on or near the union of many low-dimensional linear subspaces, recover the subspaces.

Clustering Face Clustering +1

Localized epidemic detection in networks with overwhelming noise

no code implementations6 Feb 2014 Eli A. Meirom, Chris Milling, Constantine Caramanis, Shie Mannor, Ariel Orda, Sanjay Shakkottai

Our algorithm requires only local-neighbor knowledge of this graph, and in a broad array of settings that we describe, succeeds even when false negatives and false positives make up an overwhelming fraction of the data available.

A Convex Formulation for Mixed Regression with Two Components: Minimax Optimal Rates

no code implementations25 Dec 2013 Yudong Chen, Xinyang Yi, Constantine Caramanis

We consider the mixed regression problem with two components, under adversarial and stochastic noise.

regression

Alternating Minimization for Mixed Linear Regression

no code implementations14 Oct 2013 Xinyang Yi, Constantine Caramanis, Sujay Sanghavi

Mixed linear regression involves the recovery of two (or more) unknown vectors from unlabeled linear measurements; that is, where each sample comes from exactly one of the vectors, but we do not know which one.

regression
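
A minimal sketch of alternating minimization for two components: assign each sample to the better-fitting vector (a hard E-step), then refit each component by least squares. Random initialization is used here for brevity; the paper's guarantees rely on a careful initialization:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 5
b1, b2 = rng.standard_normal(d), rng.standard_normal(d)
X = rng.standard_normal((n, d))
labels = rng.integers(2, size=n)                         # hidden assignments
y = np.where(labels == 0, X @ b1, X @ b2) + 0.05 * rng.standard_normal(n)

w1, w2 = rng.standard_normal(d), rng.standard_normal(d)  # random init
for _ in range(30):
    assign = np.abs(y - X @ w1) <= np.abs(y - X @ w2)    # hard assignment step
    w1 = np.linalg.lstsq(X[assign], y[assign], rcond=None)[0]
    w2 = np.linalg.lstsq(X[~assign], y[~assign], rcond=None)[0]
print(min(np.linalg.norm(w1 - b1), np.linalg.norm(w1 - b2)))
```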

Memory Limited, Streaming PCA

no code implementations NeurIPS 2013 Ioannis Mitliagkas, Constantine Caramanis, Prateek Jain

Standard algorithms require $O(p^2)$ memory; meanwhile no algorithm can do better than $O(kp)$ memory, since this is what the output itself requires.
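
A sketch of the $O(kp)$-memory regime, using an Oja-style rank-one update with QR re-orthonormalization as an illustrative stand-in for the paper's block streaming algorithm; only a $p \times k$ matrix is ever stored:

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, T = 100, 3, 20000
basis = np.linalg.qr(rng.standard_normal((p, k)))[0]   # true k-dim subspace

Q = np.linalg.qr(rng.standard_normal((p, k)))[0]       # the only O(kp) state kept
for t in range(1, T + 1):
    # one streaming sample from a spiked model: signal in `basis` plus noise
    x = basis @ rng.standard_normal(k) + 0.1 * rng.standard_normal(p)
    Q += (1.0 / t) * np.outer(x, x @ Q)                # Oja-style rank-one update
    Q = np.linalg.qr(Q)[0]                             # re-orthonormalize columns

# smallest principal-angle cosine near 1 means the subspace was recovered
print(np.linalg.svd(basis.T @ Q, compute_uv=False).min())
```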

Orthogonal Matching Pursuit with Noisy and Missing Data: Low and High Dimensional Results

no code implementations5 Jun 2012 Yudong Chen, Constantine Caramanis

Models for sparse regression typically assume that the covariates are known completely and without noise.

regression
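
As background, a minimal sketch of standard OMP with clean covariates, the baseline whose noisy and missing-data variants the paper analyzes: greedily add the column most correlated with the residual, then refit by least squares on the selected support:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 100, 40, 4
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[rng.choice(d, s, replace=False)] = 1.0
y = X @ beta_star + 0.01 * rng.standard_normal(n)

support, resid = [], y.copy()
for _ in range(s):
    j = int(np.argmax(np.abs(X.T @ resid)))        # most correlated column
    support.append(j)
    coef = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    resid = y - X[:, support] @ coef               # residual after refitting
print(sorted(support), np.flatnonzero(beta_star))  # recovered vs true support
```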

Matrix completion with column manipulation: Near-optimal sample-robustness-rank tradeoffs

no code implementations10 Feb 2011 Yudong Chen, Huan Xu, Constantine Caramanis, Sujay Sanghavi

Moreover, we show by an information-theoretic argument that our guarantees are nearly optimal in terms of the fraction of sampled entries on the authentic columns, the fraction of corrupted columns, and the rank of the underlying matrix.

Collaborative Filtering Matrix Completion

Robust PCA via Outlier Pursuit

1 code implementation NeurIPS 2010 Huan Xu, Constantine Caramanis, Sujay Sanghavi

Singular Value Decomposition (and Principal Component Analysis) is one of the most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a well-known, well-documented sensitivity to outliers.

Collaborative Filtering Dimensionality Reduction +1
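
A minimal sketch of the Outlier Pursuit convex program, trading off the nuclear norm of the low-rank part against the sum of column $\ell_2$ norms of the corruption part; the weight lam and the problem sizes are illustrative, and the sketch uses cvxpy:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
d, n, r = 20, 30, 2
L_true = rng.standard_normal((d, r)) @ rng.standard_normal((r, n))
M = L_true.copy()
M[:, :3] = 10.0 * rng.standard_normal((d, 3))   # 3 corrupted (outlier) columns

L, C = cp.Variable((d, n)), cp.Variable((d, n))
lam = 0.6                                        # illustrative trade-off weight
prob = cp.Problem(
    cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.norm(C, 2, axis=0))),
    [L + C == M],
)
prob.solve()
# columns with large norm in C flag the outliers
print(np.linalg.norm(C.value, axis=0).round(2))
```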

Robust Regression and Lasso

no code implementations NeurIPS 2008 Huan Xu, Constantine Caramanis, Shie Mannor

We generalize this robust formulation to consider more general uncertainty sets, which all lead to tractable convex optimization problems.

regression
