no code implementations • NeurIPS 2018 • Jayadev Acharya, Arnab Bhattacharyya, Constantinos Daskalakis, Saravanan Kandasamy
We consider testing and learning problems on causal Bayesian networks as defined by Pearl (2009).
no code implementations • 23 Feb 2017 • Constantinos Daskalakis, Christos Tzamos, Manolis Zampetakis
Our first result is a strong converse of Banach's theorem, showing that it is a universal analysis tool for establishing global convergence of iterative methods to unique fixed points, and for bounding their convergence rate.
no code implementations • 22 Apr 2017 • Constantinos Daskalakis, Nishanth Dikkala, Nick Gravin
We initiate the study of Markov chain testing, assuming access to a single trajectory of a Markov Chain.
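A minimal sketch of the single-trajectory access model assumed here (illustration only, not the paper's tester; the function name and the uniform convention for unvisited states are my own):

```python
import numpy as np

def empirical_transitions(trajectory, n_states):
    """Empirical transition matrix from a single trajectory of a Markov chain.

    Rows of states never visited are left uniform (an arbitrary convention).
    """
    counts = np.zeros((n_states, n_states))
    for s, t in zip(trajectory[:-1], trajectory[1:]):
        counts[s, t] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.where(row_sums > 0,
                    counts / np.maximum(row_sums, 1),
                    1.0 / n_states)
```

Any test statistic in this access model must be computed from such single-trajectory counts, which is what distinguishes the setting from i.i.d.-sample testing.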
no code implementations • 31 Jul 2017 • Constantinos Daskalakis, Gautam Kamath, John Wright
Given samples from an unknown distribution $p$ and a description of a distribution $q$, are $p$ and $q$ close or far?
no code implementations • 9 Dec 2016 • Constantinos Daskalakis, Nishanth Dikkala, Gautam Kamath
Given samples from an unknown multivariate distribution $p$, is it possible to distinguish whether $p$ is the product of its marginals versus $p$ being far from every product distribution?
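To make the question concrete, here is the naive plug-in quantity for two discrete variables: the empirical total-variation distance between the joint distribution and the product of its marginals. This is only an illustration of the distance being tested; the paper's contribution is testers far more sample-efficient than this plug-in.

```python
import numpy as np

def tv_to_product(samples, shape):
    """Empirical TV distance between the joint distribution of (x, y)
    samples and the product of its empirical marginals."""
    joint = np.zeros(shape)
    for x, y in samples:
        joint[x, y] += 1
    joint /= joint.sum()
    # Outer product of the empirical marginals.
    product = np.outer(joint.sum(axis=1), joint.sum(axis=0))
    return 0.5 * np.abs(joint - product).sum()
```

Perfectly correlated samples give distance 1/2 on a binary alphabet, while samples from a product distribution give distance near 0.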
1 code implementation • NeurIPS 2017 • Constantinos Daskalakis, Nishanth Dikkala, Gautam Kamath
We prove near-tight concentration of measure for polynomial functions of the Ising model under high temperature.
no code implementations • 1 Sep 2017 • Yang Cai, Constantinos Daskalakis
The second is a more general max-min learning setting that we introduce, where we are given "approximate distributions," and we seek to compute an auction whose revenue is approximately optimal simultaneously for all "true distributions" that are close to the given ones.
no code implementations • 1 Sep 2016 • Constantinos Daskalakis, Christos Tzamos, Manolis Zampetakis
In the finite sample regime, we show that, under a random initialization, $\tilde{O}(d/\epsilon^2)$ samples suffice to compute the unknown vectors to within $\epsilon$ in Mahalanobis distance, where $d$ is the dimension.
no code implementations • 9 Dec 2016 • Constantinos Daskalakis, Qinxuan Pan
As an application of our inequality, we show that distinguishing whether two Bayesian networks $P$ and $Q$ on the same (but potentially unknown) DAG satisfy $P=Q$ vs $d_{\rm TV}(P, Q)>\epsilon$ can be performed from $\tilde{O}(|\Sigma|^{3/4(d+1)} \cdot n/\epsilon^2)$ samples, where $d$ is the maximum in-degree of the DAG and $\Sigma$ the domain of each variable of the Bayesian networks.
no code implementations • 11 Nov 2015 • Constantinos Daskalakis, Anindya De, Gautam Kamath, Christos Tzamos
Finally, leveraging the structural properties of the Fourier spectrum of PMDs we show that these distributions can be learned from $O_k(1/\varepsilon^2)$ samples in ${\rm poly}_k(1/\varepsilon)$-time, removing the quasi-polynomial dependence of the running time on $1/\varepsilon$ from the algorithm of Daskalakis, Kamath, and Tzamos.
no code implementations • 4 Nov 2015 • Constantinos Daskalakis, Vasilis Syrgkanis
Our results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts is infinite, and the payoff function of the learner is non-linear.
no code implementations • NeurIPS 2015 • Jayadev Acharya, Constantinos Daskalakis, Gautam Kamath
Given samples from an unknown distribution $p$, is it possible to distinguish whether $p$ belongs to some class of distributions $\mathcal{C}$ versus $p$ being far from every distribution in $\mathcal{C}$?
no code implementations • 30 Apr 2015 • Constantinos Daskalakis, Gautam Kamath, Christos Tzamos
We prove a structural characterization of these distributions, showing that, for all $\varepsilon >0$, any $(n, k)$-Poisson multinomial random vector is $\varepsilon$-close, in total variation distance, to the sum of a discretized multidimensional Gaussian and an independent $(\text{poly}(k/\varepsilon), k)$-Poisson multinomial random vector.
no code implementations • 11 Aug 2014 • Yang Cai, Constantinos Daskalakis, Christos H. Papadimitriou
We propose an optimum mechanism for providing monetary incentives to the data sources of a statistical estimator such as linear regression, so that high quality data is provided at low cost, in the sense that the sum of payments and estimation error is minimized.
no code implementations • 13 Jul 2011 • Constantinos Daskalakis, Ilias Diakonikolas, Rocco A. Servedio
Our second main result is a {\em proper} learning algorithm that learns to $\epsilon$-accuracy using $\tilde{O}(1/\epsilon^2)$ samples, and runs in time $(1/\epsilon)^{{\rm poly}(\log (1/\epsilon))} \cdot \log n$.
no code implementations • 13 Oct 2014 • Jayadev Acharya, Constantinos Daskalakis
We provide a sample near-optimal algorithm for testing whether a distribution $P$ supported on $\{0, \ldots, n\}$ to which we have sample access is a Poisson Binomial distribution, or far from all Poisson Binomial distributions.
no code implementations • 13 Jul 2011 • Constantinos Daskalakis, Ilias Diakonikolas, Rocco A. Servedio
The learning algorithm is given access to independent samples drawn from an unknown $k$-modal distribution $p$, and it must output a hypothesis distribution $\widehat{p}$ such that, with high probability, the total variation distance between $p$ and $\widehat{p}$ is at most $\epsilon$. Our main goal is to obtain \emph{computationally efficient} algorithms for this problem that use (close to) an information-theoretically optimal number of samples.
no code implementations • 4 Dec 2013 • Constantinos Daskalakis, Gautam Kamath
The algorithm requires ${O}(\log{N}/\varepsilon^2)$ samples from the unknown distribution and ${O}(N \log N/\varepsilon^2)$ time, improving previous results (such as the Scheffé estimator) from a quadratic dependence of the running time on $N$ to quasilinear.
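For context, the classical Scheffé tournament being improved upon compares every pair of candidate distributions on its Scheffé set and returns the candidate with the most pairwise wins, which is what makes its running time quadratic in the number of candidates. A sketch for discrete distributions (function and variable names are my own):

```python
import numpy as np

def scheffe_tournament(candidates, samples):
    """Classical Scheffé tournament over candidate discrete distributions.

    Each pair (i, j) is compared on the Scheffé set {x : candidates[i](x) >
    candidates[j](x)}; the candidate whose mass on that set is closer to the
    empirical mass wins the round.  Quadratic in len(candidates).
    """
    support = len(candidates[0])
    emp = np.bincount(samples, minlength=support) / len(samples)
    wins = np.zeros(len(candidates), dtype=int)
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            A = candidates[i] > candidates[j]  # Scheffé set
            pi, pj, pe = candidates[i][A].sum(), candidates[j][A].sum(), emp[A].sum()
            wins[i if abs(pi - pe) <= abs(pj - pe) else j] += 1
    return int(np.argmax(wins))
```

The quoted result replaces this all-pairs tournament with a quasilinear-time selection procedure.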
no code implementations • 11 Jul 2018 • Constantinos Daskalakis, Ioannis Panageas
Motivated by applications in Game Theory, Optimization, and Generative Adversarial Networks, recent work of Daskalakis et al \cite{DISZ17} and follow-up work of Liang and Stokes \cite{LiangS18} have established that a variant of the widely used Gradient Descent/Ascent procedure, called "Optimistic Gradient Descent/Ascent (OGDA)", exhibits last-iterate convergence to saddle points in {\em unconstrained} convex-concave min-max optimization problems.
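The OGDA update can be seen on the bilinear toy objective $f(x, y) = xy$, where plain Gradient Descent/Ascent spirals away from the saddle point at the origin but the optimistic variant's last iterate converges to it. A sketch (the step size is an illustrative choice, not from the paper):

```python
def ogda_bilinear(x0, y0, eta=0.1, steps=1000):
    """Optimistic Gradient Descent/Ascent on f(x, y) = x * y.

    Each player takes a gradient step that counts the latest gradient
    twice and subtracts the previous one (the "optimistic" correction).
    """
    x, y = x0, y0
    gx_prev, gy_prev = y0, x0  # treat step -1 as a copy of step 0
    for _ in range(steps):
        gx, gy = y, x          # grad_x f = y, grad_y f = x
        x = x - eta * (2 * gx - gx_prev)  # descent for the min player
        y = y + eta * (2 * gy - gy_prev)  # ascent for the max player
        gx_prev, gy_prev = gx, gy
    return x, y
```

Replacing the optimistic correction with a plain step (i.e., using `gx` alone) makes the iterates diverge on this same objective, which is the phenomenon the last-iterate analysis addresses.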
no code implementations • NeurIPS 2018 • Constantinos Daskalakis, Ioannis Panageas
Motivated by applications in Optimization, Game Theory, and the training of Generative Adversarial Networks, the convergence properties of first order methods in min-max problems have received extensive study.
no code implementations • 11 Sep 2018 • Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis
We provide an efficient algorithm for the classical problem, going back to Galton, Pearson, and Fisher, of estimating, with arbitrary accuracy, the parameters of a multivariate normal distribution from truncated samples.
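The engine behind estimators of this kind can be sketched in one dimension: stochastic gradient ascent on the truncated log-likelihood, whose stochastic gradient is the difference between an observed sample and a sample drawn from the current model restricted to the survival set. This toy version (known unit variance, a hypothetical step-size schedule, rejection sampling for the model draw) is an illustration of the idea, not the paper's multivariate procedure:

```python
import numpy as np

def estimate_truncated_mean(samples, survival=lambda x: x > 0,
                            steps=4000, lr=0.1, seed=0):
    """SGD for the mean of N(mu, 1) observed only on a survival set.

    Stochastic gradient of the truncated log-likelihood at mu is
    (random observed sample) - (sample from the model truncated to the
    survival set), so the fixed point matches the truncated means.
    """
    rng = np.random.default_rng(seed)
    mu = samples.mean()  # naive initialization, biased under truncation
    for t in range(steps):
        x = samples[rng.integers(len(samples))]
        # Rejection-sample from the current model restricted to the set.
        while True:
            z = rng.normal(mu, 1.0)
            if survival(z):
                break
        mu += lr / np.sqrt(t + 1) * (x - z)
    return mu
```

The naive sample mean of the truncated data is badly biased (truncation at zero pulls it upward), while this SGD estimate drifts back toward the true mean.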
no code implementations • NeurIPS 2018 • Nima Anari, Constantinos Daskalakis, Wolfgang Maass, Christos H. Papadimitriou, Amin Saberi, Santosh Vempala
We give an application to recovering assemblies of neurons.
no code implementations • NeurIPS 2018 • Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti
Hence, the expectation of any function that is Lipschitz with respect to a power of the Hamming distance can be estimated with a bias that grows logarithmically in $n$.
no code implementations • ICML 2017 • Bryan Cai, Constantinos Daskalakis, Gautam Kamath
We develop differentially private hypothesis testing methods for the small sample regime.
no code implementations • 8 May 2019 • Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas
The standard linear and logistic regression models assume that the response variables are independent, but share the same linear relationship to their corresponding vectors of covariates.
no code implementations • 21 Jun 2019 • Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti
Indeed, we show that the standard complexity measures of Gaussian and Rademacher complexities and VC dimension are sufficient measures of complexity for the purposes of bounding the generalization error and learning rates of hypothesis classes in our setting.
no code implementations • ICML 2020 • Qi Lei, Jason D. Lee, Alexandros G. Dimakis, Constantinos Daskalakis
Generative adversarial networks (GANs) are a widely used framework for learning generative models.
no code implementations • 6 Nov 2019 • Johannes Brustle, Yang Cai, Constantinos Daskalakis
When item values are sampled from more general graphical models, we combine our robustness theorem with novel sample complexity results for learning Markov Random Fields or Bayesian Networks in Prokhorov distance, which may be of independent interest.
no code implementations • 31 Jan 2020 • Noah Golowich, Sarath Pattathil, Constantinos Daskalakis, Asuman Ozdaglar
In this paper we study the smooth convex-concave saddle point problem.
no code implementations • 2 Mar 2020 • Mucong Ding, Constantinos Daskalakis, Soheil Feizi
GANs, however, are designed in a model-free fashion where no additional information about the underlying distribution is available.
no code implementations • 18 Mar 2020 • Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas
In this work we study extensions of these to models with higher-order sufficient statistics, modeling behavior on a social network with peer-group effects.
no code implementations • 20 Apr 2020 • Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Anthimos Vardis Kandiros
As corollaries of our main theorem, we derive bounds when the model's interaction matrix is a (sparse) linear combination of known matrices, or it belongs to a finite set, or to a high-dimensional manifold.
no code implementations • NeurIPS 2020 • Constantinos Daskalakis, Dhruv Rohatgi, Manolis Zampetakis
Using this theorem we can show that a matrix concentration inequality known as the Weight Distribution Condition (WDC), which was previously only known to hold for Gaussian matrices with logarithmic aspect ratio, in fact holds for constant aspect ratios too.
no code implementations • NeurIPS 2020 • Constantinos Daskalakis, Dhruv Rohatgi, Manolis Zampetakis
As a corollary, our guarantees imply a computationally efficient and information-theoretically optimal algorithm for compressed sensing with truncation, which may arise from measurement saturation effects.
no code implementations • 5 Aug 2020 • Liu Yang, Constantinos Daskalakis, George Em Karniadakis
Particle coordinates at a single time instant, possibly noisy or truncated, are recorded in each snapshot but are unpaired across the snapshots.
no code implementations • 21 Sep 2020 • Constantinos Daskalakis, Stratis Skoulakis, Manolis Zampetakis
In this paper, we provide a characterization of the computational complexity of the problem, as well as of the limitations of first-order methods in constrained min-max optimization problems with nonconvex-nonconcave objectives and linear constraints.
no code implementations • 22 Oct 2020 • Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis
We provide a computationally and statistically efficient estimator for the classical problem of truncated linear regression, where the dependent variable $y = w^T x + \epsilon$ and its corresponding vector of covariates $x \in \mathbb{R}^k$ are only revealed if the dependent variable falls in some subset $S \subseteq \mathbb{R}$; otherwise the existence of the pair $(x, y)$ is hidden.
no code implementations • NeurIPS 2020 • Noah Golowich, Sarath Pattathil, Constantinos Daskalakis
We also show that the $O(1/\sqrt{T})$ rate is tight for all $p$-SCLI algorithms, which includes OG as a special case.
no code implementations • 28 Oct 2020 • Constantinos Daskalakis, Qinxuan Pan
We show that $n$-variable tree-structured Ising models can be learned computationally efficiently to within total variation distance $\epsilon$ from an optimal $O(n \ln n/\epsilon^2)$ samples, where $O(\cdot)$ hides an absolute constant which, importantly, does not depend on the model being learned: neither its tree structure nor the magnitude of its edge strengths, on which we place no assumptions.
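A natural candidate learner in this setting is the classical Chow–Liu algorithm: a maximum-weight spanning tree over empirical pairwise mutual informations. A compact sketch for $\pm 1$-valued variables (the Kruskal-style union-find implementation and names are my own):

```python
import numpy as np
from itertools import combinations

def mutual_information(a, b):
    """Empirical mutual information (in nats) of two +/-1 sample vectors."""
    mi = 0.0
    for va in (-1, 1):
        for vb in (-1, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(samples):
    """Maximum-weight spanning tree on pairwise mutual information (Kruskal)."""
    n_vars = samples.shape[1]
    weights = sorted(
        ((mutual_information(samples[:, i], samples[:, j]), i, j)
         for i, j in combinations(range(n_vars), 2)),
        reverse=True)
    parent = list(range(n_vars))
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    edges = []
    for w, i, j in weights:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
    return edges
```

On samples from a chain $x_0 \to x_1 \to x_2$ with small flip probabilities, the recovered tree is the chain itself, since the endpoint pair has strictly smaller mutual information.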
no code implementations • 31 Oct 2020 • Jelena Diakonikolas, Constantinos Daskalakis, Michael I. Jordan
The use of min-max optimization in adversarial training of deep neural network classifiers and training of generative adversarial networks has motivated the study of nonconvex-nonconcave optimization objectives, which frequently arise in these applications.
no code implementations • NeurIPS 2020 • Constantinos Daskalakis, Dylan J. Foster, Noah Golowich
We obtain global, non-asymptotic convergence guarantees for independent learning algorithms in competitive reinforcement learning settings with two agents (i.e., zero-sum stochastic games).
no code implementations • 20 Jul 2021 • Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Surbhi Goel, Anthimos Vardis Kandiros
We consider a general statistical estimation problem wherein binary labels across different observations are not independent conditioned on their feature vectors, but dependent, capturing settings where, e.g., these observations are collected on a spatial domain, a temporal domain, or a social network, which induce dependencies.
no code implementations • NeurIPS 2021 • Constantinos Daskalakis, Maxwell Fishelson, Noah Golowich
We show that Optimistic Hedge -- a common variant of multiplicative-weights-updates with recency bias -- attains ${\rm poly}(\log T)$ regret in multi-player general-sum games.
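Optimistic Hedge itself is a one-line change to multiplicative weights: the previous loss vector is used as a prediction of the next one, so the latest loss is effectively counted twice. A sketch (the loss-function interface and step size are illustrative choices):

```python
import numpy as np

def optimistic_hedge(loss_fn, n_actions, T, eta):
    """Optimistic Hedge: multiplicative weights with recency bias.

    The distribution at round t uses cumulative losses plus the most
    recent loss vector as an optimistic prediction of the next loss.
    """
    cum = np.zeros(n_actions)
    prev = np.zeros(n_actions)
    plays = []
    for t in range(T):
        logits = -eta * (cum + prev)      # last loss counted twice
        w = np.exp(logits - logits.max())  # stabilized softmax
        p = w / w.sum()
        plays.append(p)
        loss = loss_fn(p, t)
        cum += loss
        prev = loss
    return plays
```

When every player in a game runs this procedure, the slowly varying loss sequences make the optimistic prediction accurate, which is the mechanism behind the ${\rm poly}(\log T)$ regret bound.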
no code implementations • 25 Oct 2021 • Yang Cai, Constantinos Daskalakis
We propose a mechanism design framework for this setting, building on a recent robustification framework by Brustle et al., which disentangles the statistical challenge of estimating a multi-dimensional prior from the task of designing a good mechanism for it, and robustifies the performance of the latter against the estimation error of the former.
no code implementations • 11 Nov 2021 • Ioannis Anagnostides, Constantinos Daskalakis, Gabriele Farina, Maxwell Fishelson, Noah Golowich, Tuomas Sandholm
Recently, Daskalakis, Fishelson, and Golowich (DFG) (NeurIPS '21) showed that if all agents in a multi-player general-sum normal-form game employ Optimistic Multiplicative Weights Update (OMWU), the external regret of every player is $O(\textrm{polylog}(T))$ after $T$ repetitions of the game.
no code implementations • 17 Nov 2021 • Constantinos Daskalakis, Noah Golowich
Our contributions are two-fold. First, in the realizable setting of nonparametric online regression with the absolute loss, we propose a randomized proper learning algorithm which achieves near-optimal cumulative loss in terms of the sequential fat-shattering dimension of the hypothesis class.
no code implementations • 8 Apr 2022 • Constantinos Daskalakis, Noah Golowich, Kaiqing Zhang
Previous work for learning Markov CCE policies all required exponential time and sample complexity in the number of players.
no code implementations • 4 May 2022 • Yeshwanth Cherapanamjeri, Constantinos Daskalakis, Andrew Ilyas, Manolis Zampetakis
We provide efficient estimation methods for first- and second-price auctions under independent (asymmetric) private values and partial observability.
no code implementations • 6 May 2022 • Yeshwanth Cherapanamjeri, Constantinos Daskalakis, Andrew Ilyas, Manolis Zampetakis
In known-index self-selection, the identity of the observed model output is observable; in unknown-index self-selection, it is not.
no code implementations • 18 Oct 2022 • Constantinos Daskalakis, Noah Golowich, Stratis Skoulakis, Manolis Zampetakis
In particular, our method is not designed to decrease some potential function, such as the distance of its iterate from the set of local min-max equilibria or the projected gradient of the objective, but is designed to satisfy a topological property that guarantees the avoidance of cycles and implies its convergence.
no code implementations • 21 Nov 2022 • Yuval Dagan, Constantinos Daskalakis, Anthimos Vardis Kandiros
Our results for the landscape of the log-likelihood function in general latent tree models provide support for the extensive practical use of maximum likelihood-based methods in this setting.
no code implementations • 23 Nov 2022 • Davin Choo, Yuval Dagan, Constantinos Daskalakis, Anthimos Vardis Kandiros
We provide time- and sample-efficient algorithms for learning and testing latent-tree Ising models, i.e., Ising models that may only be observed at their leaf nodes.
no code implementations • 7 Feb 2023 • Panos Stinis, Constantinos Daskalakis, Paul J. Atzberger
We introduce adversarial learning methods for data-driven generative modeling of the dynamics of $n^{th}$-order stochastic systems.
no code implementations • 4 Jul 2023 • Angelos Assos, Idan Attias, Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson
In this setting, we provide learning algorithms that only rely on best response oracles and converge to approximate-minimax equilibria in two-player zero-sum games and approximate coarse correlated equilibria in multi-player general-sum games, as long as the game has a bounded fat-threshold dimension.
no code implementations • 21 Sep 2023 • Constantinos Daskalakis, Noah Golowich, Nika Haghtalab, Abhishek Shetty
We show that both weak and strong $\sigma$-smooth Nash equilibria have superior computational properties to Nash equilibria: when $\sigma$ as well as an approximation parameter $\epsilon$ and the number of players are all constants, there is a constant-time randomized algorithm to find a weak $\epsilon$-approximate $\sigma$-smooth Nash equilibrium in normal-form games.
no code implementations • 30 Oct 2023 • Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson, Noah Golowich
We provide a novel reduction from swap-regret minimization to external-regret minimization, which improves upon the classical reductions of Blum-Mansour [BM07] and Stolz-Lugosi [SL05] in that it does not require finiteness of the space of actions.
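The classical Blum–Mansour reduction referenced here runs one external-regret minimizer per action; each round it plays the stationary distribution of the row-stochastic matrix of their recommendations and feeds minimizer $i$ the round's loss scaled by the probability of having played $i$. A sketch with Hedge as the inner minimizer (step size and the power-iteration count are illustrative choices):

```python
import numpy as np

def blum_mansour(loss_matrix, eta=0.1):
    """Blum–Mansour swap-to-external reduction with Hedge instances.

    Returns the final play distribution.  Instance i accumulates the
    loss vector scaled by p[i], and the play distribution p is the
    stationary distribution of the matrix Q of instance recommendations.
    """
    T, n = loss_matrix.shape
    cum = np.zeros((n, n))  # cum[i]: cumulative losses fed to instance i
    p = np.full(n, 1.0 / n)
    for t in range(T):
        # Row i is Hedge's distribution for instance i (shift-stabilized).
        Q = np.exp(-eta * (cum - cum.min(axis=1, keepdims=True)))
        Q /= Q.sum(axis=1, keepdims=True)
        p = np.full(n, 1.0 / n)
        for _ in range(200):  # stationary distribution via power iteration
            p = p @ Q
        cum += np.outer(p, loss_matrix[t])
    return p
```

The quoted paper's reduction improves on this scheme precisely by removing the need to enumerate one minimizer per action, so it applies even when the action space is infinite.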
no code implementations • 13 Mar 2024 • Yang Cai, Constantinos Daskalakis, Haipeng Luo, Chen-Yu Wei, Weiqiang Zheng
While Online Gradient Descent and other no-regret learning procedures are known to efficiently converge to coarse correlated equilibrium in games where each agent's utility is concave in their own strategy, this is not the case when the utilities are non-concave, a situation that is common in machine learning applications where the agents' strategies are parameterized by deep neural networks, or the agents' utilities are computed by a neural network, or both.
1 code implementation • 13 Dec 2021 • Constantinos Daskalakis, Petros Dellaportas, Aristeidis Panos
In particular, we bound the Kullback-Leibler divergence between an exact GP and one resulting from one of the afore-described low-rank approximations to its kernel, as well as between their corresponding predictive densities, and we also bound the error between predictive mean vectors and between predictive covariance matrices computed using the exact versus using the approximate GP.
1 code implementation • NeurIPS 2021 • Constantinos Daskalakis, Patroklos Stefanou, Rui Yao, Manolis Zampetakis
In this paper, we provide the first computationally and statistically efficient estimators for truncated linear regression when the noise variance is unknown, estimating both the linear model and the variance of the noise.
1 code implementation • 26 Dec 2017 • Ajil Jalal, Andrew Ilyas, Constantinos Daskalakis, Alexandros G. Dimakis
Our formulation involves solving a min-max problem, where the min player sets the parameters of the classifier and the max player is running our attack, and is thus searching for adversarial examples in the {\em low-dimensional} input space of the spanner.
1 code implementation • ICLR 2018 • Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, Haoyang Zeng
Moreover, we show that optimistic mirror descent addresses the limit cycling problem in training WGANs.
2 code implementations • 18 Jun 2022 • Giannis Daras, Yuval Dagan, Alexandros G. Dimakis, Constantinos Daskalakis
In practice, to allow for increased expressivity, we propose to do posterior sampling in the latent space of a pre-trained generative model.