1 code implementation • 30 May 2023 • Giannis Daras, Kulin Shah, Yuval Dagan, Aravind Gollakota, Alexandros G. Dimakis, Adam Klivans
We present the first diffusion-based framework that can learn an unknown distribution using only highly corrupted samples.
no code implementations • 6 Mar 2023 • Sitan Chen, Giannis Daras, Alexandros G. Dimakis
We develop a framework for non-asymptotic analysis of deterministic samplers used for diffusion generative modeling.
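To make "deterministic sampler" concrete, here is a minimal sketch of a probability-flow-style sampler on a toy Gaussian whose score is known in closed form. Everything here (the noise schedule, step count, and the Gaussian toy model) is an illustrative assumption, not the paper's setup.

```python
# A minimal, self-contained sketch (not the paper's code): a deterministic
# probability-flow sampler for a toy Gaussian, where the score is exact,
# so the only error comes from the Euler discretization itself.
import numpy as np

def score(x, sigma2):
    # Exact score of N(0, (1 + sigma^2) I): grad log p(x) = -x / (1 + sigma^2).
    return -x / (1.0 + sigma2)

def deterministic_sampler(dim=1000, sigma_max=10.0, n_steps=500, seed=0):
    rng = np.random.default_rng(seed)
    sigmas = np.linspace(sigma_max, 0.0, n_steps + 1)  # noise levels, high -> 0
    x = rng.normal(scale=np.sqrt(1.0 + sigma_max**2), size=dim)  # start at p_T
    for s_cur, s_next in zip(sigmas[:-1], sigmas[1:]):
        # Euler step for the probability-flow ODE: dx = -1/2 d(sigma^2) * score.
        d_sigma2 = s_next**2 - s_cur**2   # negative: noise level shrinks
        x = x - 0.5 * d_sigma2 * score(x, s_cur**2)
    return x

x0 = deterministic_sampler()
print("empirical std of samples:", x0.std())  # should be close to 1.0
```

Because the toy score is exact, any deviation of the final standard deviation from 1.0 is pure discretization error, the kind of quantity a non-asymptotic analysis bounds.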
1 code implementation • 17 Feb 2023 • Giannis Daras, Yuval Dagan, Alexandros G. Dimakis, Constantinos Daskalakis
Imperfect score-matching leads to a shift between the training and the sampling distribution of diffusion models.
1 code implementation • 30 Nov 2022 • Giannis Daras, Alexandros G. Dimakis
We extend Textual Inversion to learn pseudo-words that represent a concept at different resolutions.
1 code implementation • 20 Oct 2022 • Giannis Daras, Negin Raoof, Zoi Gkalitsiou, Alexandros G. Dimakis
We find a surprising connection between multitask learning and robustness to neuron failures.
no code implementations • 14 Oct 2022 • Matt Jordan, Jonathan Hayase, Alexandros G. Dimakis, Sewoong Oh
Neural network verification aims to provide provable bounds for the output of a neural network for a given input range.
no code implementations • 12 Sep 2022 • Giannis Daras, Mauricio Delbracio, Hossein Talebi, Alexandros G. Dimakis, Peyman Milanfar
To reverse these general diffusions, we propose a new objective called Soft Score Matching that provably learns the score function for any linear corruption process and yields state-of-the-art results for CelebA.
Ranked #5 on Image Generation on CelebA 64x64
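A hedged sketch of the shape of such an objective: a restoration network predicts the clean signal, and the loss is measured after re-applying the linear corruption, so supervision lives in the corrupted space. The corruption matrix `M`, the MLP `r_theta`, and all sizes below are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a soft-score-matching-style loss for a linear corruption C_t plus
# Gaussian noise (assumed toy setup; not the authors' code).
import torch

def soft_score_matching_loss(r_theta, x0, C_t, sigma_t):
    """x0: clean batch, C_t: callable linear corruption, sigma_t: noise std."""
    noise = sigma_t * torch.randn_like(C_t(x0))
    x_t = C_t(x0) + noise                         # corrupted, noisy observation
    x0_hat = r_theta(x_t)                         # network restores clean signal
    return ((C_t(x0_hat) - C_t(x0)) ** 2).mean()  # residual in corrupted space

# toy usage: a fixed random linear corruption and an MLP restorer
d = 64
M = torch.randn(d, d) / d**0.5                    # hypothetical corruption matrix
C_t = lambda x: x @ M
r_theta = torch.nn.Sequential(torch.nn.Linear(d, 128), torch.nn.ReLU(),
                              torch.nn.Linear(128, d))
x0 = torch.randn(32, d)
loss = soft_score_matching_loss(r_theta, x0, C_t, sigma_t=0.1)
loss.backward()
```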
2 code implementations • 18 Jun 2022 • Giannis Daras, Yuval Dagan, Alexandros G. Dimakis, Constantinos Daskalakis
In practice, to allow for increased expressivity, we propose to do posterior sampling in the latent space of a pre-trained generative model.
no code implementations • 1 Jun 2022 • Giannis Daras, Alexandros G. Dimakis
We discover that DALLE-2 seems to have a hidden vocabulary that can be used to generate images with absurd prompts.
no code implementations • 16 Dec 2021 • Giannis Daras, Wen-Sheng Chu, Abhishek Kumar, Dmitry Lagun, Alexandros G. Dimakis
We introduce a novel framework for solving inverse problems using NeRF-style generative models.
no code implementations • CVPR 2022 • Jay Whang, Mauricio Delbracio, Hossein Talebi, Chitwan Saharia, Alexandros G. Dimakis, Peyman Milanfar
Unlike existing techniques, we train a stochastic sampler that refines the output of a deterministic predictor and is capable of producing a diverse set of plausible reconstructions for a given input.
1 code implementation • NeurIPS 2021 • Sriram Ravula, Georgios Smyrnis, Matt Jordan, Alexandros G. Dimakis
The problem is to recover the representation $R(x)$ of an image $x$, given only a corrupted version $A(x)$ for some known forward operator $A$.
2 code implementations • NeurIPS 2021 • Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G. Dimakis, Jonathan I. Tamir
The CSGM framework (Bora-Jalal-Price-Dimakis'17) has shown that deep generative priors can be powerful tools for solving inverse problems.
1 code implementation • 6 Jul 2021 • Matt Jordan, Alexandros G. Dimakis
We present a scalable technique for upper bounding the Lipschitz constant of generative models.
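For contrast, here is the naive bound such techniques improve on: for a feed-forward generator with 1-Lipschitz activations, the product of the layers' spectral norms is a provable but loose upper bound on the Lipschitz constant. The toy network below is an assumption; it only shows the quantity being bounded.

```python
# Naive Lipschitz upper bound for a toy feed-forward generator: multiply the
# spectral norms of the linear layers (ReLU is 1-Lipschitz). Tighter provable
# bounds are the subject of the paper; this is the baseline.
import torch

G = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, 32))

bound = 1.0
for layer in G:
    if isinstance(layer, torch.nn.Linear):
        bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()
print("naive Lipschitz upper bound:", bound)
```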
1 code implementation • 23 Jun 2021 • Ajil Jalal, Sushrut Karmalkar, Jessica Hoffmann, Alexandros G. Dimakis, Eric Price
This motivates the introduction of definitions that allow algorithms to be \emph{oblivious} to the relevant groupings.
1 code implementation • 21 Jun 2021 • Ajil Jalal, Sushrut Karmalkar, Alexandros G. Dimakis, Eric Price
We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors).
no code implementations • 5 Jun 2021 • Jay Whang, Anish Acharya, Hyeji Kim, Alexandros G. Dimakis
Distributed source coding (DSC) is the task of encoding an input in the absence of correlated side information that is only available to the decoder.
2 code implementations • 15 Feb 2021 • Giannis Daras, Joseph Dean, Ajil Jalal, Alexandros G. Dimakis
We propose Intermediate Layer Optimization (ILO), a novel optimization algorithm for solving inverse problems with deep generative models.
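A simplified sketch of the two-stage idea, under heavy assumptions (a toy MLP generator in place of the paper's StyleGAN, plain Adam instead of its projected-gradient schedule, an L2 ball instead of its l1-ball constraints, and a made-up radius): first fit the input latent, then re-optimize the intermediate features near where the first stage left them.

```python
# ILO-flavored recovery sketch: optimize the input latent, then move one layer
# deeper and optimize the intermediate features within a ball (toy setup).
import torch

torch.manual_seed(0)
G1 = torch.nn.Sequential(torch.nn.Linear(32, 128), torch.nn.ReLU())
G2 = torch.nn.Sequential(torch.nn.Linear(128, 256))
A = torch.randn(64, 256) / 8                # measurement operator
x_true = G2(G1(torch.randn(1, 32))).detach()
y = x_true @ A.T                            # observed measurements

# Stage 1: optimize the input latent z only.
z = torch.zeros(1, 32, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((G2(G1(z)) @ A.T - y) ** 2).mean()
    loss.backward(); opt.step()

# Stage 2 (the intermediate-layer step): optimize the feature h directly,
# starting from G1(z) and staying within radius r of it.
h0 = G1(z).detach()
h = h0.clone().requires_grad_(True)
opt = torch.optim.Adam([h], lr=1e-2)
r = 1.0                                     # hypothetical trust-region radius
for _ in range(500):
    opt.zero_grad()
    loss = ((G2(h) @ A.T - y) ** 2).mean()
    loss.backward(); opt.step()
    with torch.no_grad():                   # project back onto the ball
        delta = h - h0
        norm = delta.norm()
        if norm > r:
            h.copy_(h0 + delta * (r / norm))
x_hat = G2(h).detach()
```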
no code implementations • 15 Dec 2020 • Nir Shlezinger, Jay Whang, Yonina C. Eldar, Alexandros G. Dimakis
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
1 code implementation • NeurIPS 2020 • Giannis Daras, Nikita Kitaev, Augustus Odena, Alexandros G. Dimakis
We propose a novel type of balanced clustering algorithm to approximate attention.
1 code implementation • 11 Oct 2020 • Giannis Daras, Nikita Kitaev, Augustus Odena, Alexandros G. Dimakis
We also show that SMYRF can be used interchangeably with dense attention before and after training.
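A minimal sketch of the balanced-clustering idea: hash queries and keys, sort by hash value, cut into equal-size clusters, and run dense attention only within each cluster. The single random-projection hash and single round below are simplifying assumptions (the actual method uses asymmetric LSH over multiple rounds).

```python
# Balanced-cluster attention sketch: O(n * c) work instead of O(n^2),
# assuming sequence length divisible by the cluster size c.
import torch

def balanced_cluster_attention(q, k, v, cluster_size=16):
    n, d = q.shape
    proj = torch.randn(d, 1, device=q.device)
    q_idx = (q @ proj).squeeze(-1).argsort()     # sort queries by hash value
    k_idx = (k @ proj).squeeze(-1).argsort()     # sort keys the same way
    out = torch.empty_like(v)
    for s in range(0, n, cluster_size):          # equal-size clusters
        qi, ki = q_idx[s:s + cluster_size], k_idx[s:s + cluster_size]
        attn = torch.softmax(q[qi] @ k[ki].T / d**0.5, dim=-1)
        out[qi] = attn @ v[ki]                   # dense attention inside cluster
    return out

q, k, v = (torch.randn(256, 64) for _ in range(3))
y = balanced_cluster_attention(q, k, v)
```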
1 code implementation • NeurIPS 2020 • Ajil Jalal, Liu Liu, Alexandros G. Dimakis, Constantine Caramanis
In analogy to classical compressed sensing, here we assume a generative model as a prior, that is, we assume the vector is represented by a deep generative model $G: \mathbb{R}^k \rightarrow \mathbb{R}^n$.
no code implementations • 12 May 2020 • Gregory Ongie, Ajil Jalal, Christopher A. Metzler, Richard G. Baraniuk, Alexandros G. Dimakis, Rebecca Willett
Recent work in machine learning shows that deep neural networks can be used to solve a wide variety of inverse problems arising in computational imaging.
no code implementations • 18 Mar 2020 • Jay Whang, Qi Lei, Alexandros G. Dimakis
We study image inverse problems with a normalizing flow prior.
1 code implementation • NeurIPS 2020 • Matt Jordan, Alexandros G. Dimakis
The local Lipschitz constant of a neural network is a useful metric with applications in robustness, generalization, and fairness evaluation.
no code implementations • 26 Feb 2020 • Jay Whang, Erik M. Lindgren, Alexandros G. Dimakis
We approach this problem as a task of conditional inference on the pre-trained unconditional flow model.
1 code implementation • NeurIPS 2019 • Qi Lei, Jiacheng Zhuo, Constantine Caramanis, Inderjit S. Dhillon, Alexandros G. Dimakis
We propose a generalized variant of Frank-Wolfe algorithm for solving a class of sparse/low-rank optimization problems.
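For reference, the vanilla Frank-Wolfe template that such variants generalize, instantiated on the $\ell_1$ ball, where the linear minimization oracle returns a single signed coordinate vertex so the iterates stay sparse. The problem sizes and the least-squares example are assumptions.

```python
# Vanilla Frank-Wolfe over the l1 ball: projection-free, one vertex per step.
import numpy as np

def frank_wolfe_l1(grad_f, x0, tau, n_iters=200):
    x = x0.copy()
    for t in range(n_iters):
        g = grad_f(x)
        i = np.argmax(np.abs(g))                 # LMO over the l1 ball:
        s = np.zeros_like(x)
        s[i] = -tau * np.sign(g[i])              # a single signed vertex
        gamma = 2.0 / (t + 2.0)                  # standard step size
        x = (1 - gamma) * x + gamma * s          # convex combination: feasible
    return x

# usage: sparse least squares, min ||Ax - b||^2 subject to ||x||_1 <= tau
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 50)), rng.normal(size=100)
x = frank_wolfe_l1(lambda x: 2 * A.T @ (A @ x - b), np.zeros(50), tau=5.0)
```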
3 code implementations • CVPR 2020 • Giannis Daras, Augustus Odena, Han Zhang, Alexandros G. Dimakis
We introduce a new local sparse attention layer that preserves two-dimensional geometry and locality.
Ranked #16 on Conditional Image Generation on ImageNet 128x128
no code implementations • 17 Oct 2019 • Jiacheng Zhuo, Qi Lei, Alexandros G. Dimakis, Constantine Caramanis
Large-scale machine learning training suffers from two key challenges, specifically for nuclear-norm constrained problems in distributed systems: synchronization slowdown due to straggling workers, and high communication costs.
no code implementations • ICML 2020 • Qi Lei, Jason D. Lee, Alexandros G. Dimakis, Constantinos Daskalakis
Generative adversarial networks (GANs) are a widely used framework for learning generative models.
1 code implementation • NeurIPS 2019 • Shanshan Wu, Alexandros G. Dimakis, Sujay Sanghavi
We give a simple algorithm to estimate the parameters (i.e., the weight matrix and bias vector of the ReLU neural network) up to an error $\epsilon||W||_F$ using $\tilde{O}(1/\epsilon^2)$ samples and $\tilde{O}(d^2/\epsilon^2)$ time (log factors are ignored for simplicity).
1 code implementation • NeurIPS 2019 • Qi Lei, Ajil Jalal, Inderjit S. Dhillon, Alexandros G. Dimakis
For generative models of arbitrary depth, we show that exact recovery is possible in polynomial time with high probability, if the layers are expanding and the weights are randomly selected.
1 code implementation • 6 Jun 2019 • Qi Lei, Jiacheng Zhuo, Constantine Caramanis, Inderjit S. Dhillon, Alexandros G. Dimakis
We propose a variant of the Frank-Wolfe algorithm for solving a class of sparse/low-rank optimization problems.
no code implementations • 18 Apr 2019 • Sriram Ravula, Alexandros G. Dimakis
We extend the Deep Image Prior (DIP) framework to one-dimensional signals.
1 code implementation • NeurIPS 2019 • Matt Jordan, Justin Lewis, Alexandros G. Dimakis
We relate the problem of computing pointwise robustness of these networks to that of computing the maximum norm ball with a fixed center that can be contained in a non-convex polytope.
no code implementations • 21 Feb 2019 • Matt Jordan, Naren Manoj, Surbhi Goel, Alexandros G. Dimakis
To demonstrate the value of quantifying the perceptual distortion of adversarial examples, we present and employ a unifying framework fusing different attack styles.
1 code implementation • 1 Dec 2018 • Qi Lei, Lingfei Wu, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock
In this paper we formulate the attacks with discrete input on a set function as an optimization task.
no code implementations • 26 Nov 2018 • Sungsoo Kim, Jin Soo Park, Christos G. Bampis, Jaeseong Lee, Mia K. Markey, Alexandros G. Dimakis, Alan C. Bovik
We propose a video compression framework using conditional Generative Adversarial Networks (GANs).
no code implementations • NeurIPS 2018 • Erik M. Lindgren, Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath
We consider the minimum cost intervention design problem: Given the essential graph of a causal graph and a cost to intervene on a variable, identify the set of interventions with minimum total cost that can learn any causal graph with the given essential graph.
1 code implementation • NeurIPS 2019 • Shanshan Wu, Sujay Sanghavi, Alexandros G. Dimakis
We show that this algorithm can recover any arbitrary discrete pairwise graphical model, and also characterize its sample complexity as a function of model width, alphabet size, edge parameter accuracy, and the number of variables.
no code implementations • NeurIPS 2020 • Murat Kocaoglu, Sanjay Shakkottai, Alexandros G. Dimakis, Constantine Caramanis, Sriram Vishwanath
We study the problem of discovering the simplest latent variable that can make two observed discrete variables conditionally independent.
1 code implementation • 26 Jun 2018 • Shanshan Wu, Alexandros G. Dimakis, Sujay Sanghavi, Felix X. Yu, Daniel Holtmann-Rice, Dmitry Storcheus, Afshin Rostamizadeh, Sanjiv Kumar
Our experiments show that there is indeed additional structure beyond sparsity in the real datasets; our method is able to discover it and exploit it to create excellent reconstructions with fewer measurements (by a factor of 1.1-3x) compared to the previous state-of-the-art methods.
1 code implementation • 17 Jun 2018 • Dave Van Veen, Ajil Jalal, Mahdi Soltanolkotabi, Eric Price, Sriram Vishwanath, Alexandros G. Dimakis
We propose a novel method for compressed sensing recovery using untrained deep generative models.
no code implementations • ICLR 2018 • Ashish Bora, Eric Price, Alexandros G. Dimakis
Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest.
1 code implementation • 26 Dec 2017 • Ajil Jalal, Andrew Ilyas, Constantinos Daskalakis, Alexandros G. Dimakis
Our formulation involves solving a min-max problem, where the min player sets the parameters of the classifier and the max player runs our attack, searching for adversarial examples in the {\em low-dimensional} input space of the spanner.
1 code implementation • NeurIPS 2017 • Rajat Sen, Ananda Theertha Suresh, Karthikeyan Shanmugam, Alexandros G. Dimakis, Sanjay Shakkottai
We consider the problem of non-parametric Conditional Independence testing (CI testing) for continuous random variables.
2 code implementations • ICLR 2018 • Murat Kocaoglu, Christopher Snyder, Alexandros G. Dimakis, Sriram Vishwanath
We show that adversarial training can be used to learn a generative model with true observational and interventional distributions if the generator architecture is consistent with the given causal graph.
no code implementations • ICML 2017 • Rashish Tandon, Qi Lei, Alexandros G. Dimakis, Nikos Karampatziakis
We propose a novel coding theoretic framework for mitigating stragglers in distributed learning.
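A small concrete instance of the idea: each worker sends a fixed linear combination of partial gradients so the full gradient is decodable from any subset of workers that beats the straggler budget. The 3-worker, 1-straggler encoding below follows the flavor of the paper's examples; the decoding vectors were solved by hand for this specific matrix.

```python
# Gradient coding: 3 workers, tolerates any 1 straggler.
import numpy as np

# Each worker sends one coded combination of the partial gradients g1, g2, g3.
B = np.array([[0.5, 1.0, 0.0],    # worker 1 computes g1/2 + g2
              [0.0, 1.0, -1.0],   # worker 2 computes g2 - g3
              [0.5, 0.0, 1.0]])   # worker 3 computes g1/2 + g3

# Decoding row a_S for each set S of 2 survivors, chosen so a_S @ B_S = [1,1,1].
DECODE = {(0, 1): np.array([2.0, -1.0]),
          (0, 2): np.array([1.0, 1.0]),
          (1, 2): np.array([1.0, 2.0])}

rng = np.random.default_rng(0)
g = rng.normal(size=(3, 10))               # three partial gradients
coded = B @ g                              # what the workers transmit
survivors = (0, 2)                         # worker 2 straggles
recovered = DECODE[survivors] @ coded[list(survivors)]
assert np.allclose(recovered, g.sum(axis=0))   # full gradient recovered
```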
no code implementations • ICML 2018 • Netanel Raviv, Itzhak Tamo, Rashish Tandon, Alexandros G. Dimakis
Gradient coding is a technique for straggler mitigation in distributed learning.
3 code implementations • ICML 2017 • Ashish Bora, Ajil Jalal, Eric Price, Alexandros G. Dimakis
The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain.
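The recovery rule itself is compact enough to sketch: search the generator's latent space for a point whose output matches the measurements, i.e., minimize $\|A G(z) - y\|^2$ over $z$. The toy MLP generator and hyperparameters below are stand-ins (the paper uses pretrained VAEs and GANs).

```python
# CSGM-style recovery sketch with a toy generator (assumed setup).
import torch

torch.manual_seed(0)
G = torch.nn.Sequential(torch.nn.Linear(20, 200), torch.nn.ReLU(),
                        torch.nn.Linear(200, 784))   # stand-in generator
A = torch.randn(100, 784) / 10                       # 100 << 784 measurements
x_true = G(torch.randn(1, 20)).detach()              # signal in G's range
y = x_true @ A.T + 0.01 * torch.randn(1, 100)

z = torch.zeros(1, 20, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(1000):
    opt.zero_grad()
    loss = ((G(z) @ A.T - y) ** 2).sum()   # min_z ||A G(z) - y||^2
    loss.backward(); opt.step()
x_hat = G(z).detach()                      # reconstruction from 100 measurements
```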
no code implementations • NeurIPS 2016 • Erik M. Lindgren, Shanshan Wu, Alexandros G. Dimakis
The facility location problem is widely used for summarizing large datasets and has additional applications in sensor placement, image retrieval, and clustering.
no code implementations • ICML 2017 • Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath
We consider the problem of learning a causal graph over a set of variables with interventions.
no code implementations • 8 Mar 2017 • Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sujay Sanghavi
We consider support recovery in the quadratic logistic regression setting, where the target depends on both $p$ linear terms $x_i$ and up to $p^2$ quadratic terms $x_i x_j$.
no code implementations • ICML 2017 • Erik M. Lindgren, Alexandros G. Dimakis, Adam Klivans
We require that the number of fractional vertices in the LP relaxation exceeding the optimal solution is bounded by a polynomial in the problem size.
no code implementations • ICML 2017 • Rajiv Khanna, Ethan Elenberg, Alexandros G. Dimakis, Sahand Negahban
We provide new approximation guarantees for greedy low rank matrix estimation under standard assumptions of restricted strong convexity and smoothness.
no code implementations • 8 Mar 2017 • Rajiv Khanna, Ethan Elenberg, Alexandros G. Dimakis, Sahand Negahban, Joydeep Ghosh
Furthermore, we show that a bounded submodularity ratio can be used to provide data dependent bounds that can sometimes be tighter also for submodular functions.
1 code implementation • NeurIPS 2017 • Ethan R. Elenberg, Alexandros G. Dimakis, Moran Feldman, Amin Karbasi
In many machine learning applications, it is important to explain the predictions of a black-box classifier.
no code implementations • 28 Jan 2017 • Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath, Babak Hassibi
This framework requires the solution of a minimum entropy coupling problem: given marginal distributions of $m$ discrete random variables, each on $n$ states, find the minimum-entropy joint distribution that respects the given marginals.
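A sketch of the greedy flavor of algorithm studied for this problem (shown for concreteness; not claimed to be the paper's exact procedure or optimal): repeatedly take the largest remaining mass in every marginal and assign their minimum to a single joint cell. This always yields a valid coupling.

```python
# Greedy coupling heuristic: each step drains equal mass from every marginal,
# so the loop terminates and the result respects all marginals exactly.
import numpy as np

def greedy_coupling(marginals, tol=1e-12):
    rem = [m.astype(float).copy() for m in marginals]  # remaining marginal mass
    joint = {}                                         # sparse joint distribution
    while rem[0].sum() > tol:
        idx = tuple(int(r.argmax()) for r in rem)      # largest cell per marginal
        mass = min(r.max() for r in rem)               # assignable joint mass
        joint[idx] = joint.get(idx, 0.0) + mass
        for r, i in zip(rem, idx):
            r[i] -= mass                               # consume that mass
    return joint

p = np.array([0.5, 0.3, 0.2]); q = np.array([0.6, 0.4])
joint = greedy_coupling([p, q])
H = -sum(v * np.log2(v) for v in joint.values() if v > 0)
print(joint, "entropy (bits):", H)
```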
no code implementations • ICML 2017 • Rajat Sen, Karthikeyan Shanmugam, Alexandros G. Dimakis, Sanjay Shakkottai
Motivated by applications in computational advertising and systems biology, we consider the problem of identifying the best out of several possible soft interventions at a source node $V$ in an acyclic causal directed graph, to maximize the expected value of a target node $Y$ (located downstream of $V$).
2 code implementations • 10 Dec 2016 • Rashish Tandon, Qi Lei, Alexandros G. Dimakis, Nikos Karampatziakis
We propose a novel coding theoretic framework for mitigating stragglers in distributed learning.
no code implementations • 2 Dec 2016 • Ethan R. Elenberg, Rajiv Khanna, Alexandros G. Dimakis, Sahand Negahban
Our results extend the work of Das and Kempe (2011) from the setting of linear regression to arbitrary objective functions.
1 code implementation • 12 Nov 2016 • Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath, Babak Hassibi
We show that the problem of finding the exogenous variable with minimum entropy is equivalent to the problem of finding minimum joint entropy given $n$ marginal distributions, also known as minimum entropy coupling problem.
1 code implementation • NeurIPS 2016 • Shanshan Wu, Srinadh Bhojanapalli, Sujay Sanghavi, Alexandros G. Dimakis
In this paper we present a new algorithm for computing a low rank approximation of the product $A^TB$ by taking only a single pass of the two matrices $A$ and $B$.
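A hedged sketch of the one-pass structure only (the paper's algorithm is more refined): stream the rows of $A$ and $B$ once while accumulating small random sketches, then multiply the sketches and truncate by SVD. Sizes and the rank-$r$ construction of $B$ are assumptions chosen so the target product is genuinely low rank.

```python
# Single-pass sketch of A^T B: since E[S^T S] = I for the scaled Gaussian
# sketch S, (S A)^T (S B) is an unbiased estimate of A^T B.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, r = 5000, 50, 500, 5                   # rows, cols, sketch size, rank
A = rng.normal(size=(n, d))
C = rng.normal(size=(d, r)) @ rng.normal(size=(r, d)) / d
B = A @ C                                       # makes A^T B exactly rank r

SA = np.zeros((m, d)); SB = np.zeros((m, d))
for a_row, b_row in zip(A, B):                  # single pass over the rows
    s = rng.normal(size=m) / np.sqrt(m)         # fresh sketching column
    SA += np.outer(s, a_row)                    # accumulates S @ A
    SB += np.outer(s, b_row)                    # accumulates S @ B

M = SA.T @ SB                                   # estimate of A^T B (d x d, small)
U, sing, Vt = np.linalg.svd(M)
M_r = (U[:, :r] * sing[:r]) @ Vt[:r]            # rank-r truncation
rel = np.linalg.norm(M_r - A.T @ B) / np.linalg.norm(A.T @ B)
print("relative error:", rel)                   # coarse; shrinks as m grows
```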
no code implementations • 1 Jun 2016 • Rajat Sen, Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sanjay Shakkottai
Our algorithm achieves a regret of $\mathcal{O}\left(L\mathrm{poly}(m, \log K) \log T \right)$ at time $T$, as compared to $\mathcal{O}(LK\log T)$ for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context.
no code implementations • 9 Mar 2016 • Megasthenis Asteris, Anastasios Kyrillidis, Dimitris Papailiopoulos, Alexandros G. Dimakis
We present a novel approximation algorithm for $k$-BCC, a variant of BCC with an upper bound $k$ on the number of clusters.
no code implementations • NeurIPS 2015 • Megasthenis Asteris, Dimitris Papailiopoulos, Alexandros G. Dimakis
Our algorithm relies on a novel approximation to the related Nonnegative Principal Component Analysis (NNPCA) problem; given an arbitrary data matrix, NNPCA seeks $k$ nonnegative components that jointly capture most of the variance.
2 code implementations • NeurIPS 2015 • Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath
We prove that any deterministic adaptive algorithm needs to be a separating system in order to learn complete graphs in the worst case.
no code implementations • NeurIPS 2015 • Megasthenis Asteris, Dimitris Papailiopoulos, Anastasios Kyrillidis, Alexandros G. Dimakis
We consider the following multi-component sparse PCA problem: given a set of data points, we seek to extract a small number of sparse components with disjoint supports that jointly capture the maximum possible variance.
no code implementations • 8 Jun 2015 • Megasthenis Asteris, Anastasios Kyrillidis, Alexandros G. Dimakis, Han-Gyol Yi, Bharath Chandrasekaran
We introduce a variant of (sparse) PCA in which the set of feasible support sets is determined by a graph.
no code implementations • NeurIPS 2014 • Karthikeyan Shanmugam, Rashish Tandon, Alexandros G. Dimakis, Pradeep Ravikumar
We provide a general framework for computing lower bounds on the sample complexity of recovering the underlying graphs of Ising models, given i.i.d. samples.
no code implementations • NeurIPS 2014 • Murat Kocaoglu, Karthikeyan Shanmugam, Alexandros G. Dimakis, Adam Klivans
We give an algorithm for exactly reconstructing $f$ given random examples from the uniform distribution on $\{-1, 1\}^n$ that runs in time polynomial in $n$ and $2^s$ and succeeds if the function satisfies the unique sign property: there is one output value which corresponds to a unique set of values of the participating parities.
no code implementations • 3 Mar 2013 • Dimitris S. Papailiopoulos, Alexandros G. Dimakis, Stavros Korokythakis
A key algorithmic component of our scheme is a combinatorial feature elimination step that is provably safe and in practice significantly reduces the running complexity of our algorithm.