Search Results for author: Manolis Zampetakis

Found 20 papers, 1 paper with code

Learning and Covering Sums of Independent Random Variables with Unbounded Support

no code implementations 24 Oct 2022 Alkis Kalavasis, Konstantinos Stavropoulos, Manolis Zampetakis

In this work, we address two questions: (i) Are there general families of SIIRVs with unbounded support that can be learned with sample complexity independent of both $n$ and the maximal element of the support?
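For context, an SIIRV is a sum $X = X_1 + \dots + X_n$ of independent integer-valued random variables. A minimal sampling sketch (illustrative of the object being learned, not the paper's algorithm; the supports and weights below are arbitrary):

```python
import random

def sample_siirv(supports, probs, rng=random):
    """Draw one sample of X = X_1 + ... + X_n, where X_i takes value
    supports[i][j] with probability probs[i][j] (an SIIRV)."""
    total = 0
    for support, p in zip(supports, probs):
        # Each summand is an independent integer-valued random variable.
        total += rng.choices(support, weights=p, k=1)[0]
    return total

# Three independent integer variables with differing supports.
supports = [[0, 1], [0, 2, 5], [0, 1, 10]]
probs = [[0.5, 0.5], [0.6, 0.3, 0.1], [0.8, 0.15, 0.05]]
x = sample_siirv(supports, probs)
```

The sample complexity question above asks when such sums can be learned without the bound depending on $n$ or on how large the support values grow.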

STay-ON-the-Ridge: Guaranteed Convergence to Local Minimax Equilibrium in Nonconvex-Nonconcave Games

no code implementations 18 Oct 2022 Constantinos Daskalakis, Noah Golowich, Stratis Skoulakis, Manolis Zampetakis

In particular, our method is not designed to decrease some potential function, such as the distance of its iterate from the set of local min-max equilibria or the projected gradient of the objective, but is designed to satisfy a topological property that guarantees the avoidance of cycles and implies its convergence.

Efficient Truncated Linear Regression with Unknown Noise Variance

1 code implementation NeurIPS 2021 Constantinos Daskalakis, Patroklos Stefanou, Rui Yao, Manolis Zampetakis

In this paper, we provide the first computationally and statistically efficient estimators for truncated linear regression when the noise variance is unknown, estimating both the linear model and the variance of the noise.


What Makes A Good Fisherman? Linear Regression under Self-Selection Bias

no code implementations 6 May 2022 Yeshwanth Cherapanamjeri, Constantinos Daskalakis, Andrew Ilyas, Manolis Zampetakis

In known-index self-selection, the identity of the observed model output is observable; in unknown-index self-selection, it is not.


Estimation of Standard Auction Models

no code implementations 4 May 2022 Yeshwanth Cherapanamjeri, Constantinos Daskalakis, Andrew Ilyas, Manolis Zampetakis

We provide efficient estimation methods for first- and second-price auctions under independent (asymmetric) private values and partial observability.


First-Order Algorithms for Nonlinear Generalized Nash Equilibrium Problems

no code implementations 7 Apr 2022 Michael I. Jordan, Tianyi Lin, Manolis Zampetakis

We consider the problem of computing an equilibrium in a class of nonlinear generalized Nash equilibrium problems (NGNEPs) in which the strategy sets for each player are defined by equality and inequality constraints that may depend on the choices of rival players.

Last-Iterate Convergence of Saddle Point Optimizers via High-Resolution Differential Equations

no code implementations 27 Dec 2021 Tatjana Chavdarova, Michael I. Jordan, Manolis Zampetakis

However, the convergence properties of these methods are qualitatively different even on simple bilinear games.

Robust Learning of Optimal Auctions

no code implementations NeurIPS 2021 Wenshuo Guo, Michael I. Jordan, Manolis Zampetakis

The proposed algorithms operate beyond the setting of bounded distributions that have been studied in prior works, and are guaranteed to obtain a fraction $1-O(\alpha)$ of the optimal revenue under the true distribution when the distributions are MHR.

Computationally and Statistically Efficient Truncated Regression

no code implementations 22 Oct 2020 Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis

We provide a computationally and statistically efficient estimator for the classical problem of truncated linear regression, where the dependent variable $y = w^T x + \epsilon$ and its corresponding vector of covariates $x \in R^k$ are only revealed if the dependent variable falls in some subset $S \subseteq R$; otherwise the existence of the pair $(x, y)$ is hidden.
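The observation model above can be simulated directly; a minimal sketch of the truncation process (the survival set $S$ and parameters are arbitrary choices for illustration, not from the paper):

```python
import numpy as np

def sample_truncated_regression(w, n, S, sigma=1.0, seed=0):
    """Generate pairs (x, y) with y = w^T x + eps, eps ~ N(0, sigma^2),
    keeping only pairs whose dependent variable falls in the set S."""
    rng = np.random.default_rng(seed)
    k = len(w)
    xs, ys = [], []
    while len(ys) < n:
        x = rng.standard_normal(k)
        y = x @ w + sigma * rng.standard_normal()
        if S(y):  # the pair (x, y) is revealed only when y lands in S
            xs.append(x)
            ys.append(y)
    return np.array(xs), np.array(ys)

w = np.array([1.0, -2.0])
X, y = sample_truncated_regression(w, n=200, S=lambda t: t > 0.0)
```

Running ordinary least squares on such data is biased, since the retained noise terms are no longer mean-zero; this is exactly the bias that efficient truncated-regression estimators must correct.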


Optimal Approximation -- Smoothness Tradeoffs for Soft-Max Functions

no code implementations 22 Oct 2020 Alessandro Epasto, Mohammad Mahdian, Vahab Mirrokni, Manolis Zampetakis

A soft-max function has two main efficiency measures: (1) approximation - which corresponds to how well it approximates the maximum function, (2) smoothness - which shows how sensitive it is to changes of its input.
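The tradeoff between the two measures is visible in the standard log-sum-exp soft-max: with temperature $t$ it approximates $\max$ within $t \log n$, while smaller $t$ makes the function less smooth. A small numeric sketch (a standard construction for illustration, not the paper's):

```python
import math

def log_sum_exp_max(xs, t):
    """Soft approximation of max(xs): t * log(sum_i exp(x_i / t)).
    Shifted by the true max for numerical stability."""
    m = max(xs)
    return m + t * math.log(sum(math.exp((x - m) / t) for x in xs))

xs = [1.0, 2.0, 3.0]
# The approximation error is between 0 and t * log(len(xs)),
# so shrinking t improves approximation (at the cost of smoothness).
for t in (1.0, 0.1, 0.01):
    err = log_sum_exp_max(xs, t) - max(xs)
    assert 0.0 <= err <= t * math.log(len(xs)) + 1e-12
```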

The Complexity of Constrained Min-Max Optimization

no code implementations 21 Sep 2020 Constantinos Daskalakis, Stratis Skoulakis, Manolis Zampetakis

In this paper, we provide a characterization of the computational complexity of the problem, as well as of the limitations of first-order methods in constrained min-max optimization problems with nonconvex-nonconcave objectives and linear constraints.

Truncated Linear Regression in High Dimensions

no code implementations NeurIPS 2020 Constantinos Daskalakis, Dhruv Rohatgi, Manolis Zampetakis

As a corollary, our guarantees imply a computationally efficient and information-theoretically optimal algorithm for compressed sensing with truncation, which may arise from measurement saturation effects.


Estimation and Inference with Trees and Forests in High Dimensions

no code implementations 7 Jul 2020 Vasilis Syrgkanis, Manolis Zampetakis

We prove that if only $r$ of the $d$ features are relevant for the mean outcome function, then shallow trees built greedily via the CART empirical MSE criterion achieve MSE rates that depend only logarithmically on the ambient dimension $d$.


Constant-Expansion Suffices for Compressed Sensing with Generative Priors

no code implementations NeurIPS 2020 Constantinos Daskalakis, Dhruv Rohatgi, Manolis Zampetakis

Using this theorem we can show that a matrix concentration inequality known as the Weight Distribution Condition (WDC), which was previously only known to hold for Gaussian matrices with logarithmic aspect ratio, in fact holds for constant aspect ratios too.


Efficient Truncated Statistics with Unknown Truncation

no code implementations 2 Aug 2019 Vasilis Kontonis, Christos Tzamos, Manolis Zampetakis

Our main result is a computationally and sample efficient algorithm for estimating the parameters of the Gaussian under arbitrary unknown truncation sets whose performance decays with a natural measure of complexity of the set, namely its Gaussian surface area.

Optimal Learning of Mallows Block Model

no code implementations 3 Jun 2019 Róbert Busa-Fekete, Dimitris Fotakis, Balázs Szörényi, Manolis Zampetakis

The main result of the paper is a tight sample complexity bound for learning the Mallows and Generalized Mallows models.

Efficient Statistics, in High Dimensions, from Truncated Samples

no code implementations 11 Sep 2018 Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis

We provide an efficient algorithm for the classical problem, going back to Galton, Pearson, and Fisher, of estimating, with arbitrary accuracy the parameters of a multivariate normal distribution from truncated samples.

A Converse to Banach's Fixed Point Theorem and its CLS Completeness

no code implementations 23 Feb 2017 Constantinos Daskalakis, Christos Tzamos, Manolis Zampetakis

Our first result is a strong converse of Banach's theorem, showing that it is a universal analysis tool for establishing global convergence of iterative methods to unique fixed points, and for bounding their convergence rate.
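The forward direction of Banach's theorem is easy to see numerically: iterating a contraction map converges geometrically to its unique fixed point. A minimal sketch with a textbook example map:

```python
import math

def iterate_to_fixed_point(f, x0, iters=50):
    """Iterate x <- f(x); for a c-contraction (c < 1), the distance to
    the unique fixed point shrinks by a factor c at every step."""
    x = x0
    for _ in range(iters):
        x = f(x)
    return x

# f(x) = cos(x) is a contraction on [cos(1), 1] since |f'| <= sin(1) < 1,
# so iteration converges to the unique fixed point (~0.739085).
x_star = iterate_to_fixed_point(math.cos, 0.0)
```

The converse result above says, roughly, that whenever such global geometric convergence to a unique fixed point holds, a metric certifying contraction exists.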

Ten Steps of EM Suffice for Mixtures of Two Gaussians

no code implementations 1 Sep 2016 Constantinos Daskalakis, Christos Tzamos, Manolis Zampetakis

In the finite sample regime, we show that, under a random initialization, $\tilde{O}(d/\epsilon^2)$ samples suffice to compute the unknown vectors to within $\epsilon$ in Mahalanobis distance, where $d$ is the dimension.
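A minimal 1-D sketch of the EM iteration for a balanced mixture $\tfrac12 N(\mu, 1) + \tfrac12 N(-\mu, 1)$, where the E- and M-steps collapse to the closed-form update $\mu \leftarrow \tfrac1n \sum_i \tanh(\mu x_i)\, x_i$ (illustrative of the setting; the parameter values are arbitrary and this is not the paper's analysis):

```python
import math
import random

def em_two_gaussians(samples, mu0, steps=10):
    """EM for a balanced 1-D mixture 0.5*N(mu,1) + 0.5*N(-mu,1).
    Posterior weight of the '+' component is sigmoid(2*mu*x), so the
    symmetric M-step reduces to mu <- mean_i tanh(mu * x_i) * x_i."""
    mu = mu0
    for _ in range(steps):
        mu = sum(math.tanh(mu * x) * x for x in samples) / len(samples)
    return mu

random.seed(0)
true_mu = 2.0
samples = [random.gauss(random.choice([-1, 1]) * true_mu, 1.0)
           for _ in range(5000)]
mu_hat = em_two_gaussians(samples, mu0=0.5)
```

With well-separated components, ten updates already land near the true parameter, matching the spirit of the title.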

Faster Sublinear Algorithms using Conditional Sampling

no code implementations 16 Aug 2016 Themistoklis Gouleakis, Christos Tzamos, Manolis Zampetakis

In contrast to prior algorithms for the classic model, our algorithms have time, space and sample complexity that is polynomial in the dimension and polylogarithmic in the number of points.
