no code implementations • 27 Feb 2024 • Michael Celentano, William S. DeWitt, Sebastian Prillo, Yun S. Song

Here, we address this challenge by proving that, for any partially ascertained process from a general multi-type birth-death-mutation-sampling model, there exists an equivalent process with complete sampling and no death, a property which we leverage to develop a highly efficient algorithm for simulating trees.

no code implementations • 14 Nov 2023 • Michael Celentano, Zhou Fan, Licong Lin, Song Mei

In settings where it is conjectured that no efficient algorithm can find this local neighborhood, we prove analogous geometric properties for a local minimizer of the TAP free energy reachable by AMP, and show that posterior inference based on this minimizer remains correctly calibrated.

no code implementations • 5 Sep 2023 • Seunghoon Paik, Michael Celentano, Alden Green, Ryan J. Tibshirani

Maximum mean discrepancy (MMD) refers to a general class of nonparametric two-sample tests based on maximizing the difference in means between samples from one distribution $P$ and samples from another $Q$, over all choices of data transformation $f$ in some function space $\mathcal{F}$.
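As a concrete instance of this definition (not the paper's construction): when $\mathcal{F}$ is the unit ball of a reproducing-kernel Hilbert space with kernel $k$, the squared MMD has the closed form $\mathbb{E}[k(x,x')] - 2\,\mathbb{E}[k(x,y)] + \mathbb{E}[k(y,y')]$. A minimal numpy sketch of the biased plug-in estimator with a Gaussian kernel; all names and parameters here are illustrative:

```python
import numpy as np

def mmd_biased(X, Y, bandwidth=1.0):
    """Biased estimate of MMD^2 with a Gaussian (RBF) kernel.

    Plugs sample means into E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')],
    the squared MMD when F is the unit ball of the kernel's RKHS.
    """
    def k(A, B):
        # Pairwise squared distances, then the Gaussian kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))

    return k(X, X).mean() - 2.0 * k(X, Y).mean() + k(Y, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))
Y = rng.normal(3.0, 1.0, size=(200, 1))  # well-separated from X
```

On identical samples the estimate is zero, while well-separated samples give a strictly positive value, which is what the two-sample test thresholds.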

no code implementations • 19 Aug 2022 • Michael Celentano

As an example of its use, we provide a new, and arguably simpler, proof of some of the results of Celentano et al. (2021), which establish that the so-called TAP free energy in the $\mathbb{Z}_2$-synchronization problem is locally convex in the region to which AMP converges.

no code implementations • 21 Jun 2021 • Michael Celentano, Zhou Fan, Song Mei

This provides a rigorous foundation for variational inference in high dimensions via minimization of the TAP free energy.

no code implementations • 30 Mar 2021 • Michael Celentano, Theodor Misiakiewicz, Andrea Montanari

We study random features approximations to these norms and show that, for $p>1$, the number of random features required to approximate the original learning problem is upper bounded by a polynomial in the sample size.
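One standard example of a random-features approximation (offered only as background; the paper studies approximations to particular norms, not necessarily this construction) is the Rahimi–Recht random Fourier feature map, whose inner products approximate a Gaussian kernel. A minimal sketch, with all names and parameter choices illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_rff(d, n_features, bandwidth=1.0):
    """Random Fourier feature map z with z(x) @ z(y) ~= exp(-|x-y|^2 / (2 bw^2))."""
    # Frequencies drawn from the spectral density of the Gaussian kernel.
    W = rng.normal(0.0, 1.0 / bandwidth, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)

    def z(X):
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    return z

X = rng.normal(size=(100, 2))
Y = rng.normal(size=(100, 2))

# Exact Gaussian kernel matrix for comparison.
K_exact = np.exp(-((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1) / 2.0)

z = make_rff(d=2, n_features=2000)
K_approx = z(X) @ z(Y).T
err = np.abs(K_approx - K_exact).max()
```

The approximation error shrinks as the number of random features grows; the abstract's result concerns how many features suffice for the learning problem itself, bounding that count by a polynomial in the sample size.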

no code implementations • 27 Jul 2020 • Michael Celentano, Andrea Montanari, Yuting Wei

On the other hand, the Lasso estimator can be precisely characterized in the regime in which both $n$ and $p$ are large and $n/p$ is of order one.
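As a toy illustration of the proportional regime the abstract refers to (large $n$ and $p$ with $n/p$ of order one) — not the paper's analysis — here is a minimal Lasso solver via proximal gradient descent (ISTA) in numpy; the design, signal, and tuning parameter below are all illustrative choices:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize (1/2n)||y - Xb||^2 + lam * ||b||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the smooth part
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - grad / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return b

rng = np.random.default_rng(2)
n, p = 200, 400                  # proportional regime: n/p of order one
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:10] = 3.0                  # sparse ground truth
y = X @ beta + 0.5 * rng.normal(size=n)
b_hat = lasso_ista(X, y, lam=0.15)
```

Even with $p > n$, the soft-thresholding step produces an exactly sparse estimate whose error is far smaller than the trivial zero estimator; the precise asymptotic characterization of this error in the proportional regime is the subject of the line of work above.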

no code implementations • 28 Feb 2020 • Michael Celentano, Andrea Montanari, Yuchen Wu

These lower bounds are optimal in the sense that there exist algorithms whose estimation error matches the lower bounds up to asymptotically negligible terms.
