Search Results for author: Alexandros G. Dimakis

Found 78 papers, 41 papers with code

Pre-training Small Base LMs with Fewer Tokens

3 code implementations • 12 Apr 2024 Sunny Sanyal, Sujay Sanghavi, Alexandros G. Dimakis

Here we show that smaller LMs trained using some of the layers of GPT-2-medium (355M) and GPT-2-large (770M) can effectively match the validation loss of their bigger counterparts when trained from scratch for the same number of training steps on the OpenWebText dataset with 9B tokens.

Language Modelling
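The inherit-some-layers idea above can be sketched in a few lines. The block below is a toy illustration, not the paper's code: model blocks are stand-in weight dictionaries, and the evenly spaced selection rule is an assumption (the paper studies which layers of the larger GPT-2 to inherit before continued training).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy hidden size

# stand-in for the 24 transformer blocks of a larger pre-trained model
big_blocks = [{"W": rng.normal(size=(d, d))} for _ in range(24)]

# inherit an evenly spaced subset of blocks to initialize a 6-block model;
# the spacing rule here is illustrative, not the paper's exact selection
keep = list(range(0, 24, 4))
small_blocks = [{k: v.copy() for k, v in big_blocks[i].items()} for i in keep]

print(len(small_blocks))  # 6 inherited blocks, to be trained on the token budget
```

The smaller model would then be trained normally; only the initialization changes.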

Compressed Sensing using Generative Models

3 code implementations ICML 2017 Ashish Bora, Ajil Jalal, Eric Price, Alexandros G. Dimakis

The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain.
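The CSGM recipe behind this line of work can be sketched briefly: parameterize the unknown vector as the output of a generator and minimize the measurement error over the latent code. In the sketch below the "generator" is a toy linear map so the gradient is closed-form; all dimensions and names are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 100, 5, 30            # ambient dim, latent dim, number of measurements

G = rng.normal(size=(n, k))     # toy linear generator x = G @ z (stand-in for a deep net)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian measurement matrix

z_true = rng.normal(size=k)
y = A @ (G @ z_true)            # m << n noiseless measurements

# CSGM-style recovery: minimize ||A G(z) - y||^2 over the latent code z
M = A @ G                                  # effective m-by-k forward operator
step = 1.0 / np.linalg.norm(M, 2) ** 2     # safe step from the top singular value
z = np.zeros(k)
for _ in range(500):
    z -= step * M.T @ (M @ z - y)          # gradient step in latent space

rel_err = np.linalg.norm(G @ z - G @ z_true) / np.linalg.norm(G @ z_true)
print(f"relative recovery error: {rel_err:.1e}")
```

With a deep generator the inner gradient would come from backpropagation, but the objective is the same.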

Robust Compressed Sensing MRI with Deep Generative Priors

2 code implementations NeurIPS 2021 Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G. Dimakis, Jonathan I. Tamir

The CSGM framework (Bora-Jalal-Price-Dimakis'17) has shown that deep generative priors can be powerful tools for solving inverse problems.

CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training

2 code implementations ICLR 2018 Murat Kocaoglu, Christopher Snyder, Alexandros G. Dimakis, Sriram Vishwanath

We show that adversarial training can be used to learn a generative model with true observational and interventional distributions if the generator architecture is consistent with the given causal graph.

Face Generation

Intermediate Layer Optimization for Inverse Problems using Deep Generative Models

2 code implementations • 15 Feb 2021 Giannis Daras, Joseph Dean, Ajil Jalal, Alexandros G. Dimakis

We propose Intermediate Layer Optimization (ILO), a novel optimization algorithm for solving inverse problems with deep generative models.

Denoising · Super-Resolution

Score-Guided Intermediate Layer Optimization: Fast Langevin Mixing for Inverse Problems

2 code implementations • 18 Jun 2022 Giannis Daras, Yuval Dagan, Alexandros G. Dimakis, Constantinos Daskalakis

In practice, to allow for increased expressivity, we propose to do posterior sampling in the latent space of a pre-trained generative model.
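In the linear-Gaussian special case, posterior sampling in a latent space reduces to Langevin dynamics with a closed-form score, which makes the idea easy to sketch. Everything below (dimensions, noise level, step rule) is an illustrative assumption; the paper's contribution is the fast-mixing analysis for samplers of this kind, not this toy.

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, sig = 4, 8, 0.5
M = rng.normal(size=(m, k))               # linear "decoder" from latent z to measurements
y = M @ rng.normal(size=k) + sig * rng.normal(size=m)

# posterior of z given y under a standard normal prior is Gaussian with:
P = M.T @ M / sig**2 + np.eye(k)          # precision matrix
b = M.T @ y / sig**2
mu_post = np.linalg.solve(P, b)           # exact posterior mean, for reference

# unadjusted Langevin: z += step * score(z) + sqrt(2 * step) * noise
step = 0.5 / np.linalg.eigvalsh(P).max()
z = np.zeros(k)
samples = []
for t in range(20000):
    z = z + step * (b - P @ z) + np.sqrt(2 * step) * rng.normal(size=k)
    if t >= 5000:                         # discard burn-in
        samples.append(z.copy())

z_mean = np.mean(samples, axis=0)
print(np.linalg.norm(z_mean - mu_post))   # small: the chain tracks the posterior
```

With a deep generator the score `b - P @ z` would be replaced by automatic differentiation through the model.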

Multiresolution Textual Inversion

1 code implementation • 30 Nov 2022 Giannis Daras, Alexandros G. Dimakis

We extend Textual Inversion to learn pseudo-words that represent a concept at different resolutions.

Compressed Sensing with Deep Image Prior and Learned Regularization

1 code implementation • 17 Jun 2018 Dave Van Veen, Ajil Jalal, Mahdi Soltanolkotabi, Eric Price, Sriram Vishwanath, Alexandros G. Dimakis

We propose a novel method for compressed sensing recovery using untrained deep generative models.

SMYRF: Efficient Attention using Asymmetric Clustering

1 code implementation • 11 Oct 2020 Giannis Daras, Nikita Kitaev, Augustus Odena, Alexandros G. Dimakis

We also show that SMYRF can be used interchangeably with dense attention before and after training.

Clustering

Ambient Diffusion: Learning Clean Distributions from Corrupted Data

1 code implementation NeurIPS 2023 Giannis Daras, Kulin Shah, Yuval Dagan, Aravind Gollakota, Alexandros G. Dimakis, Adam Klivans

We present the first diffusion-based framework that can learn an unknown distribution using only highly-corrupted samples.

Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes

1 code implementation NeurIPS 2019 Matt Jordan, Justin Lewis, Alexandros G. Dimakis

We relate the problem of computing pointwise robustness of these networks to that of computing the maximum norm ball with a fixed center that can be contained in a non-convex polytope.

Inverse Problems Leveraging Pre-trained Contrastive Representations

1 code implementation NeurIPS 2021 Sriram Ravula, Georgios Smyrnis, Matt Jordan, Alexandros G. Dimakis

The problem is to recover the representation of an image R(x), if we are only given a corrupted version A(x), for some known forward operator A.

Representation Learning

Streaming Weak Submodularity: Interpreting Neural Networks on the Fly

1 code implementation NeurIPS 2017 Ethan R. Elenberg, Alexandros G. Dimakis, Moran Feldman, Amin Karbasi

In many machine learning applications, it is important to explain the predictions of a black-box classifier.

Learning a Compressed Sensing Measurement Matrix via Gradient Unrolling

1 code implementation • 26 Jun 2018 Shanshan Wu, Alexandros G. Dimakis, Sujay Sanghavi, Felix X. Yu, Daniel Holtmann-Rice, Dmitry Storcheus, Afshin Rostamizadeh, Sanjiv Kumar

Our experiments show that there is indeed additional structure beyond sparsity in the real datasets; our method is able to discover it and exploit it to create excellent reconstructions with fewer measurements (by a factor of 1.1x to 3x) compared to the previous state-of-the-art methods.

Extreme Multi-Label Classification · Multi-Label Learning · +1

The Robust Manifold Defense: Adversarial Training using Generative Models

1 code implementation • 26 Dec 2017 Ajil Jalal, Andrew Ilyas, Constantinos Daskalakis, Alexandros G. Dimakis

Our formulation involves solving a min-max problem, where the min player sets the parameters of the classifier and the max player runs our attack, searching for adversarial examples in the low-dimensional input space of the spanner.

Instance-Optimal Compressed Sensing via Posterior Sampling

1 code implementation • 21 Jun 2021 Ajil Jalal, Sushrut Karmalkar, Alexandros G. Dimakis, Eric Price

We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors).

Fairness for Image Generation with Uncertain Sensitive Attributes

1 code implementation • 23 Jun 2021 Ajil Jalal, Sushrut Karmalkar, Jessica Hoffmann, Alexandros G. Dimakis, Eric Price

This motivates the introduction of definitions that allow algorithms to be oblivious to the relevant groupings.

Fairness · Image Generation · +3

Exactly Computing the Local Lipschitz Constant of ReLU Networks

1 code implementation NeurIPS 2020 Matt Jordan, Alexandros G. Dimakis

The local Lipschitz constant of a neural network is a useful metric with applications in robustness, generalization, and fairness evaluation.

Fairness
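For intuition, the local Lipschitz constant can be bracketed cheaply: sampled gradient norms give a lower bound, and the product of layer operator norms gives an (often loose) upper bound. The two-layer ReLU net below is an illustrative assumption; the paper's point is computing the constant exactly, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(20, 10)) / np.sqrt(10)
W2 = rng.normal(size=(1, 20)) / np.sqrt(20)   # f(x) = W2 relu(W1 x)

def grad_norm(x):
    act = (W1 @ x > 0).astype(float)          # ReLU activation pattern at x
    J = W2 @ (W1 * act[:, None])              # Jacobian: W2 diag(pattern) W1
    return np.linalg.norm(J, 2)

lower = max(grad_norm(rng.normal(size=10)) for _ in range(1000))  # sampled lower bound
upper = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)            # layer-norm upper bound
print(f"{lower:.3f} <= local Lipschitz constant <= {upper:.3f}")
```

The gap between these two bounds is exactly what exact certification methods close.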

Provable Lipschitz Certification for Generative Models

1 code implementation • 6 Jul 2021 Matt Jordan, Alexandros G. Dimakis

We present a scalable technique for upper bounding the Lipschitz constant of generative models.

Gradient Coding

2 code implementations • 10 Dec 2016 Rashish Tandon, Qi Lei, Alexandros G. Dimakis, Nikos Karampatziakis

We propose a novel coding theoretic framework for mitigating stragglers in distributed learning.
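The coding-theoretic framework can be illustrated with a small 3-worker example that tolerates one straggler: each worker sends one linear combination of shard gradients, and the full gradient is decodable from any two messages. The coding matrix below is one valid choice (any two of its rows can be combined into the all-ones vector); the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=(3, 5))       # g[i]: gradient computed on data shard i
full = g.sum(axis=0)              # what the parameter server wants

# coding matrix: worker i sends B[i] @ g; any two rows of B span the
# all-ones vector, so any single straggler can be ignored
B = np.array([[0.5, 1.0,  0.0],
              [0.0, 1.0, -1.0],
              [0.5, 0.0,  1.0]])
messages = B @ g

max_err = 0.0
for survivors in [(0, 1), (0, 2), (1, 2)]:
    # find coefficients a with a^T B[survivors] = (1, 1, 1)
    a, *_ = np.linalg.lstsq(B[list(survivors)].T, np.ones(3), rcond=None)
    recovered = a @ messages[list(survivors)]
    max_err = max(max_err, np.abs(recovered - full).max())

print(f"max decoding error over all straggler patterns: {max_err:.1e}")
```

Note each worker must hold two of the three shards, which is the replication price paid for straggler tolerance.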

Entropic Causal Inference

1 code implementation • 12 Nov 2016 Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath, Babak Hassibi

We show that the problem of finding the exogenous variable with minimum entropy is equivalent to the problem of finding minimum joint entropy given $n$ marginal distributions, also known as minimum entropy coupling problem.

Causal Inference

Single Pass PCA of Matrix Products

1 code implementation NeurIPS 2016 Shanshan Wu, Srinadh Bhojanapalli, Sujay Sanghavi, Alexandros G. Dimakis

In this paper we present a new algorithm for computing a low rank approximation of the product $A^TB$ by taking only a single pass of the two matrices $A$ and $B$.
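A one-pass estimate of $A^TB$ can be formed by sketching both matrices with the same random directions while streaming over rows; a small SVD of the result then yields a low-rank approximation. The block below is a generic Gaussian-sketch illustration on correlated inputs, under assumed dimensions, and is not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2, k = 5000, 8, 6, 400
A = rng.normal(size=(n, d1))
B = A @ rng.normal(size=(d1, d2)) / np.sqrt(d1) + 0.1 * rng.normal(size=(n, d2))

# single streaming pass: the same random vector hits row i of A and of B
SA = np.zeros((k, d1))
SB = np.zeros((k, d2))
for i in range(n):
    w = rng.normal(size=k)            # i-th row of an implicit n-by-k Gaussian sketch
    SA += np.outer(w, A[i])
    SB += np.outer(w, B[i])

approx = SA.T @ SB / k                # E[approx] = A^T B
rel = np.linalg.norm(approx - A.T @ B) / np.linalg.norm(A.T @ B)
print(f"relative error of one-pass estimate: {rel:.2f}")
```

Truncating the SVD of `approx` to rank r gives the low-rank approximation without ever forming or revisiting $A^TB$.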

Robust Compressed Sensing using Generative Models

1 code implementation NeurIPS 2020 Ajil Jalal, Liu Liu, Alexandros G. Dimakis, Constantine Caramanis

In analogy to classical compressed sensing, here we assume a generative model as a prior, that is, we assume the vector is represented by a deep generative model $G: \mathbb{R}^k \rightarrow \mathbb{R}^n$.

Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models

1 code implementation • 5 Jun 2023 Sriram Ravula, Brett Levac, Ajil Jalal, Jonathan I. Tamir, Alexandros G. Dimakis

Diffusion-based generative models have been used as powerful priors for magnetic resonance imaging (MRI) reconstruction.

MRI Reconstruction

Inverting Deep Generative models, One layer at a time

1 code implementation NeurIPS 2019 Qi Lei, Ajil Jalal, Inderjit S. Dhillon, Alexandros G. Dimakis

For generative models of arbitrary depth, we show that exact recovery is possible in polynomial time with high probability, if the layers are expanding and the weights are randomly selected.
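The layer-wise idea is easiest to see for a single expanding random ReLU layer: output coordinates that fired reveal exact linear equations on the input, and with enough of them the input is recovered by least squares. Sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 40                        # expanding layer: m > n
W = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
y = np.maximum(W @ x_true, 0.0)      # observed output of one ReLU layer

on = y > 0                           # fired units give exact equations W_i x = y_i
x_hat, *_ = np.linalg.lstsq(W[on], y[on], rcond=None)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"fired units: {on.sum()}, relative error: {rel_err:.1e}")
```

Roughly half the units fire for random weights, so expansion guarantees enough equations with high probability; a deep network is then inverted one layer at a time.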

Sparse Logistic Regression Learns All Discrete Pairwise Graphical Models

1 code implementation NeurIPS 2019 Shanshan Wu, Sujay Sanghavi, Alexandros G. Dimakis

We show that this algorithm can recover any arbitrary discrete pairwise graphical model, and also characterize its sample complexity as a function of model width, alphabet size, edge parameter accuracy, and the number of variables.

regression
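The node-wise recipe can be sketched on a 3-spin chain: sample exactly from the Ising distribution, then fit an l1-regularized logistic regression of one spin on the others; the true neighbor gets a large weight and the non-neighbor is shrunk toward zero. The ISTA solver and all constants below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# exact sampling from a 3-spin Ising chain x1 - x2 - x3 with coupling theta
theta = 1.0
states = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
p = np.exp(theta * (states[:, 0] * states[:, 1] + states[:, 1] * states[:, 2]))
p /= p.sum()
X = states[rng.choice(len(states), size=5000, p=p)]

# node-wise l1 logistic regression: regress spin 1 on spins 2 and 3 (ISTA)
ylab = (X[:, 0] + 1) / 2              # labels in {0, 1}
F = X[:, 1:].astype(float)
w = np.zeros(2)
lam, step = 0.01, 0.5
for _ in range(300):
    pred = 1.0 / (1.0 + np.exp(-F @ w))
    w = w - step * F.T @ (pred - ylab) / len(ylab)
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold

print(f"weight on neighbor x2: {w[0]:.2f}, on non-neighbor x3: {w[1]:.2f}")
```

Running this regression at every node and keeping the large-weight edges recovers the graph.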

One-Dimensional Deep Image Prior for Curve Fitting of S-Parameters from Electromagnetic Solvers

1 code implementation • 6 Jun 2023 Sriram Ravula, Varun Gorti, Bo Deng, Swagato Chakraborty, James Pingenot, Bhyrav Mutnury, Doug Wallace, Doug Winterberg, Adam Klivans, Alexandros G. Dimakis

DIP is a technique that optimizes the weights of a randomly-initialized convolutional neural network to fit a signal from noisy or under-determined measurements.

Learning Causal Graphs with Small Interventions

2 code implementations NeurIPS 2015 Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath

We prove that any deterministic adaptive algorithm needs to be a separating system in order to learn complete graphs in the worst case.

Primal-Dual Block Frank-Wolfe

1 code implementation • 6 Jun 2019 Qi Lei, Jiacheng Zhuo, Constantine Caramanis, Inderjit S. Dhillon, Alexandros G. Dimakis

We propose a variant of the Frank-Wolfe algorithm for solving a class of sparse/low-rank optimization problems.

General Classification · Multi-class Classification · +1
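A plain Frank-Wolfe iteration for an l1-constrained least-squares problem shows the two ingredients such variants build on: a linear minimization oracle that returns a single vertex of the ball, and iterates that stay sparse. This is the vanilla algorithm with exact line search, not the paper's primal-dual block variant; the problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, tau = 100, 20, 5.0
A = rng.normal(size=(m, n))
x_star = np.zeros(n); x_star[:3] = [2.0, -1.5, 1.0]   # sparse truth, ||x||_1 < tau
b = A @ x_star

x = np.zeros(n)
obj0 = 0.5 * np.linalg.norm(A @ x - b) ** 2
for t in range(200):
    grad = A.T @ (A @ x - b)
    i = np.argmax(np.abs(grad))
    s = np.zeros(n); s[i] = -tau * np.sign(grad[i])   # LMO: best vertex of the l1 ball
    d = s - x
    Ad = A @ d
    gamma = np.clip(-(grad @ d) / (Ad @ Ad + 1e-12), 0.0, 1.0)  # exact line search
    x = x + gamma * d

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2
print(f"objective: {obj0:.1f} -> {obj:.2e}, nonzeros: {np.count_nonzero(x)}")
```

Each step touches one coordinate, which is why Frank-Wolfe methods suit sparse and low-rank constraint sets.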

Which questions should I answer? Salience Prediction of Inquisitive Questions

1 code implementation • 16 Apr 2024 Yating Wu, Ritika Mangla, Alexandros G. Dimakis, Greg Durrett, Junyi Jessy Li

QSALIENCE is instruction-tuned over our dataset of linguist-annotated salience scores of 1,766 (context, question) pairs.

Question Generation · Question-Generation

Restricted Strong Convexity Implies Weak Submodularity

no code implementations • 2 Dec 2016 Ethan R. Elenberg, Rajiv Khanna, Alexandros G. Dimakis, Sahand Negahban

Our results extend the work of Das and Kempe (2011) from the setting of linear regression to arbitrary objective functions.

feature selection

Identifying Best Interventions through Online Importance Sampling

no code implementations ICML 2017 Rajat Sen, Karthikeyan Shanmugam, Alexandros G. Dimakis, Sanjay Shakkottai

Motivated by applications in computational advertising and systems biology, we consider the problem of identifying the best out of several possible soft interventions at a source node $V$ in an acyclic causal directed graph, to maximize the expected value of a target node $Y$ (located downstream of $V$).

Scalable Greedy Feature Selection via Weak Submodularity

no code implementations • 8 Mar 2017 Rajiv Khanna, Ethan Elenberg, Alexandros G. Dimakis, Sahand Negahban, Joydeep Ghosh

Furthermore, we show that a bounded submodularity ratio can be used to provide data-dependent bounds that can sometimes be tighter even for submodular functions.

feature selection

On Approximation Guarantees for Greedy Low Rank Optimization

no code implementations ICML 2017 Rajiv Khanna, Ethan Elenberg, Alexandros G. Dimakis, Sahand Negahban

We provide new approximation guarantees for greedy low rank matrix estimation under standard assumptions of restricted strong convexity and smoothness.

Combinatorial Optimization

Leveraging Sparsity for Efficient Submodular Data Summarization

no code implementations NeurIPS 2016 Erik M. Lindgren, Shanshan Wu, Alexandros G. Dimakis

The facility location problem is widely used for summarizing large datasets and has additional applications in sensor placement, image retrieval, and clustering.

Clustering · Data Summarization · +2

Exact MAP Inference by Avoiding Fractional Vertices

no code implementations ICML 2017 Erik M. Lindgren, Alexandros G. Dimakis, Adam Klivans

We require that the number of fractional vertices in the LP relaxation exceeding the optimal solution is bounded by a polynomial in the problem size.

Open-Ended Question Answering

Sparse Quadratic Logistic Regression in Sub-quadratic Time

no code implementations • 8 Mar 2017 Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sujay Sanghavi

We consider support recovery in the quadratic logistic regression setting - where the target depends on both $p$ linear terms $x_i$ and up to $p^2$ quadratic terms $x_i x_j$.

regression

Cost-Optimal Learning of Causal Graphs

1 code implementation ICML 2017 Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath

We consider the problem of learning a causal graph over a set of variables with interventions.

Graph Learning

Entropic Causality and Greedy Minimum Entropy Coupling

no code implementations • 28 Jan 2017 Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath, Babak Hassibi

This framework requires the solution of a minimum entropy coupling problem: Given marginal distributions of m discrete random variables, each on n states, find the joint distribution with minimum entropy, that respects the given marginals.
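The greedy heuristic studied in this line of work is short enough to sketch: repeatedly match the largest remaining masses of the two marginals and assign their minimum to one cell of the joint. The marginals below are arbitrary examples, not from the paper.

```python
import numpy as np

def greedy_coupling(p, q):
    """Greedily build a low-entropy joint distribution with marginals p and q."""
    p, q = p.astype(float).copy(), q.astype(float).copy()
    J = np.zeros((len(p), len(q)))
    while True:
        i, j = int(np.argmax(p)), int(np.argmax(q))
        mass = min(p[i], q[j])
        if mass <= 1e-12:
            return J
        J[i, j] += mass           # put as much mass as possible on one cell
        p[i] -= mass
        q[j] -= mass

def entropy(v):
    v = v[v > 1e-12]
    return float(-(v * np.log2(v)).sum())

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.6, 0.4])
J = greedy_coupling(p, q)
print(J)
print(f"H(joint) = {entropy(J.ravel()):.3f} bits vs "
      f"H(p) + H(q) = {entropy(p) + entropy(q):.3f} bits")
```

Each step zeroes out one row or column mass, so the joint has at most n + m - 1 nonzero cells, far fewer than the independent coupling.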

Contextual Bandits with Latent Confounders: An NMF Approach

no code implementations • 1 Jun 2016 Rajat Sen, Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sanjay Shakkottai

Our algorithm achieves a regret of $\mathcal{O}\left(L\mathrm{poly}(m, \log K) \log T \right)$ at time $T$, as compared to $\mathcal{O}(LK\log T)$ for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context.

Matrix Completion · Multi-Armed Bandits

Bipartite Correlation Clustering -- Maximizing Agreements

no code implementations • 9 Mar 2016 Megasthenis Asteris, Anastasios Kyrillidis, Dimitris Papailiopoulos, Alexandros G. Dimakis

We present a novel approximation algorithm for $k$-BCC, a variant of BCC with an upper bound $k$ on the number of clusters.

Clustering

Sparse PCA via Bipartite Matchings

no code implementations NeurIPS 2015 Megasthenis Asteris, Dimitris Papailiopoulos, Anastasios Kyrillidis, Alexandros G. Dimakis

We consider the following multi-component sparse PCA problem: given a set of data points, we seek to extract a small number of sparse components with disjoint supports that jointly capture the maximum possible variance.

Stay on path: PCA along graph paths

no code implementations • 8 Jun 2015 Megasthenis Asteris, Anastasios Kyrillidis, Alexandros G. Dimakis, Han-Gyol Yi, Bharath Chandrasekaran

We introduce a variant of (sparse) PCA in which the set of feasible support sets is determined by a graph.

On the Information Theoretic Limits of Learning Ising Models

no code implementations NeurIPS 2014 Karthikeyan Shanmugam, Rashish Tandon, Alexandros G. Dimakis, Pradeep Ravikumar

We provide a general framework for computing lower bounds on the sample complexity of recovering the underlying graphs of Ising models, given i.i.d. samples.

Sparse Polynomial Learning and Graph Sketching

no code implementations NeurIPS 2014 Murat Kocaoglu, Karthikeyan Shanmugam, Alexandros G. Dimakis, Adam Klivans

We give an algorithm for exactly reconstructing f given random examples from the uniform distribution on $\{-1, 1\}^n$ that runs in time polynomial in $n$ and $2^s$ and succeeds if the function satisfies the unique sign property: there is one output value which corresponds to a unique set of values of the participating parities.

Sparse PCA through Low-rank Approximations

no code implementations • 3 Mar 2013 Dimitris S. Papailiopoulos, Alexandros G. Dimakis, Stavros Korokythakis

A key algorithmic component of our scheme is a combinatorial feature elimination step that is provably safe and in practice significantly reduces the running complexity of our algorithm.

Applications of Common Entropy for Causal Inference

no code implementations NeurIPS 2020 Murat Kocaoglu, Sanjay Shakkottai, Alexandros G. Dimakis, Constantine Caramanis, Sriram Vishwanath

We study the problem of discovering the simplest latent variable that can make two observed discrete variables conditionally independent.

Causal Inference

Experimental Design for Cost-Aware Learning of Causal Graphs

no code implementations NeurIPS 2018 Erik M. Lindgren, Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath

We consider the minimum cost intervention design problem: Given the essential graph of a causal graph and a cost to intervene on a variable, identify the set of interventions with minimum total cost that can learn any causal graph with the given essential graph.

Experimental Design

Orthogonal NMF through Subspace Exploration

no code implementations NeurIPS 2015 Megasthenis Asteris, Dimitris Papailiopoulos, Alexandros G. Dimakis

Our algorithm relies on a novel approximation to the related Nonnegative Principal Component Analysis (NNPCA) problem; given an arbitrary data matrix, NNPCA seeks $k$ nonnegative components that jointly capture most of the variance.

Clustering

AmbientGAN: Generative models from lossy measurements

no code implementations ICLR 2018 Ashish Bora, Eric Price, Alexandros G. Dimakis

Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest.

Quantifying Perceptual Distortion of Adversarial Examples

no code implementations • 21 Feb 2019 Matt Jordan, Naren Manoj, Surbhi Goel, Alexandros G. Dimakis

To demonstrate the value of quantifying the perceptual distortion of adversarial examples, we present and employ a unifying framework fusing different attack styles.

SSIM

Learning Distributions Generated by One-Layer ReLU Networks

1 code implementation NeurIPS 2019 Shanshan Wu, Alexandros G. Dimakis, Sujay Sanghavi

We give a simple algorithm to estimate the parameters (i.e., the weight matrix and bias vector of the ReLU neural network) up to an error $\epsilon||W||_F$ using $\tilde{O}(1/\epsilon^2)$ samples and $\tilde{O}(d^2/\epsilon^2)$ time (log factors are ignored for simplicity).

SGD Learns One-Layer Networks in WGANs

no code implementations ICML 2020 Qi Lei, Jason D. Lee, Alexandros G. Dimakis, Constantinos Daskalakis

Generative adversarial networks (GANs) are a widely used framework for learning generative models.

Communication-Efficient Asynchronous Stochastic Frank-Wolfe over Nuclear-norm Balls

no code implementations • 17 Oct 2019 Jiacheng Zhuo, Qi Lei, Alexandros G. Dimakis, Constantine Caramanis

Large-scale machine learning training suffers from two prior challenges, specifically for nuclear-norm constrained problems with distributed systems: the synchronization slowdown due to the straggling workers, and high communication costs.

BIG-bench Machine Learning

Composing Normalizing Flows for Inverse Problems

no code implementations • 26 Feb 2020 Jay Whang, Erik M. Lindgren, Alexandros G. Dimakis

We approach this problem as a task of conditional inference on the pre-trained unconditional flow model.

Compressive Sensing · Uncertainty Quantification · +1

Deep Learning Techniques for Inverse Problems in Imaging

no code implementations • 12 May 2020 Gregory Ongie, Ajil Jalal, Christopher A. Metzler, Richard G. Baraniuk, Alexandros G. Dimakis, Rebecca Willett

Recent work in machine learning shows that deep neural networks can be used to solve a wide variety of inverse problems arising in computational imaging.

Model-Based Deep Learning

no code implementations • 15 Dec 2020 Nir Shlezinger, Jay Whang, Yonina C. Eldar, Alexandros G. Dimakis

We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.

Neural Distributed Source Coding

no code implementations • 5 Jun 2021 Jay Whang, Alliot Nagle, Anish Acharya, Hyeji Kim, Alexandros G. Dimakis

Distributed source coding (DSC) is the task of encoding an input in the absence of correlated side information that is only available to the decoder.

Deblurring via Stochastic Refinement

no code implementations CVPR 2022 Jay Whang, Mauricio Delbracio, Hossein Talebi, Chitwan Saharia, Alexandros G. Dimakis, Peyman Milanfar

Unlike existing techniques, we train a stochastic sampler that refines the output of a deterministic predictor and is capable of producing a diverse set of plausible reconstructions for a given input.

Deblurring · Image Deblurring

Solving Inverse Problems with NerfGANs

no code implementations • 16 Dec 2021 Giannis Daras, Wen-Sheng Chu, Abhishek Kumar, Dmitry Lagun, Alexandros G. Dimakis

We introduce a novel framework for solving inverse problems using NeRF-style generative models.

Attribute

Discovering the Hidden Vocabulary of DALLE-2

no code implementations • 1 Jun 2022 Giannis Daras, Alexandros G. Dimakis

We discover that DALLE-2 seems to have a hidden vocabulary that can be used to generate images with absurd prompts.

Soft Diffusion: Score Matching for General Corruptions

no code implementations • 12 Sep 2022 Giannis Daras, Mauricio Delbracio, Hossein Talebi, Alexandros G. Dimakis, Peyman Milanfar

To reverse these general diffusions, we propose a new objective called Soft Score Matching that provably learns the score function for any linear corruption process and yields state of the art results for CelebA.

Denoising · Image Generation

Zonotope Domains for Lagrangian Neural Network Verification

no code implementations • 14 Oct 2022 Matt Jordan, Jonathan Hayase, Alexandros G. Dimakis, Sewoong Oh

Neural network verification aims to provide provable bounds for the output of a neural network for a given input range.

Restoration-Degradation Beyond Linear Diffusions: A Non-Asymptotic Analysis For DDIM-Type Samplers

no code implementations • 6 Mar 2023 Sitan Chen, Giannis Daras, Alexandros G. Dimakis

We develop a framework for non-asymptotic analysis of deterministic samplers used for diffusion generative modeling.

Denoising
