1 code implementation • NeurIPS 2019 • Ganlin Song, Zhou Fan, John Lafferty
When initialized with random parameters $\theta_0$, we show that the objective $f_{\theta_0}(x)$ is "nice" and easy to optimize with gradient descent.
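A minimal sketch of what gradient descent over the network input looks like under this kind of objective; the two-layer ReLU architecture, sizes, and step size below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical setup: a random two-layer ReLU network G, and gradient
# descent on the input x to minimize f(x) = ||G(x) - y||^2.
rng = np.random.default_rng(0)
d, h, m = 10, 50, 20
W1 = rng.normal(size=(h, d)) / np.sqrt(d)   # random first-layer weights
W2 = rng.normal(size=(m, h)) / np.sqrt(h)   # random second-layer weights

def G(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

y = G(rng.normal(size=d))                   # a target in the range of G
x = rng.normal(size=d)
for _ in range(2000):
    r = G(x) - y                            # residual
    grad = 2 * W1.T @ ((W2.T @ r) * (W1 @ x > 0))  # chain rule through the ReLU
    x -= 0.01 * grad
```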
1 code implementation • 1 Apr 2023 • Awni Altabaa, Taylor Webb, Jonathan Cohen, John Lafferty
An extension of Transformers is proposed that enables explicit relational reasoning through a novel module called the Abstractor.
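A rough sketch of the relational-attention idea behind such a module: attention weights are computed from the inputs, but the values are input-independent learned symbols, so downstream layers see only relational information. All shapes and parameter names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 16                                   # sequence length, model dimension
X = rng.normal(size=(n, d))                    # input encodings
Wq = rng.normal(size=(d, d)) / np.sqrt(d)      # query projection
Wk = rng.normal(size=(d, d)) / np.sqrt(d)      # key projection
S = rng.normal(size=(n, d))                    # learned symbols (parameters)

scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)    # pairwise relations between inputs
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)              # row-wise softmax
abstract_states = A @ S                        # mix symbols, not input values
```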
1 code implementation • 21 Feb 2023 • Qi Lin, Zifan Li, John Lafferty, Ilker Yildirim
Much of what we remember is not due to intentional selection, but simply a by-product of perceiving.
2 code implementations • 5 Oct 2023 • Awni Altabaa, John Lafferty
A maturing area of research in deep learning is the study of architectures and inductive biases for learning representations of relational features.
no code implementations • ICML 2018 • Matt Bonakdarpour, Sabyasachi Chatterjee, Rina Foygel Barber, John Lafferty
Two methods are proposed for high-dimensional shape-constrained regression and classification.
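The paper's estimators are not reproduced here, but the flavor of a shape constraint is easy to show in one dimension with isotonic (monotone) regression, using scikit-learn as a stand-in:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = x ** 2 + rng.normal(scale=0.1, size=100)   # monotone signal plus noise
fit = IsotonicRegression().fit(x, y)           # least squares under monotonicity
y_hat = fit.predict(x)                         # a nondecreasing fit
```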
no code implementations • ICML 2018 • Yuancheng Zhu, John Lafferty
In an intermediate regime, the statistical risk depends on both the sample size and the communication budget.
no code implementations • 25 Mar 2015 • Yuancheng Zhu, John Lafferty
We formulate the notion of minimax estimation under storage or communication constraints, and prove an extension to Pinsker's theorem for nonparametric estimation over Sobolev ellipsoids.
no code implementations • 23 May 2016 • Qinqing Zheng, John Lafferty
We address the rectangular matrix completion problem by lifting the unknown matrix to a positive semidefinite matrix in higher dimension, and optimizing a nonconvex objective over the semidefinite factor using a simple gradient descent scheme.
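A minimal sketch of the factored form of this idea: the PSD lift amounts to writing $Z = [U; V][U; V]^\top$ with $M \approx UV^\top$, and gradient descent on $(U, V)$ fits the observed entries. Rank, sampling rate, and step size below are illustrative, not the paper's tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 30, 20, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, m))  # rank-r ground truth
mask = rng.uniform(size=(n, m)) < 0.5                  # observed entries
U = rng.normal(size=(n, r)) * 0.1                      # top block of the factor
V = rng.normal(size=(m, r)) * 0.1                      # bottom block of the factor
for _ in range(3000):
    R = mask * (U @ V.T - M)                           # residual on observed entries
    U, V = U - 0.01 * R @ V, V - 0.01 * R.T @ U        # simultaneous gradient steps
print(np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)) # relative recovery error
```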
no code implementations • NeurIPS 2016 • Yuancheng Zhu, Sabyasachi Chatterjee, John Duchi, John Lafferty
The bounds are expressed in terms of a localized and computational analogue of the modulus of continuity that is central to statistical minimax analysis.
no code implementations • NeurIPS 2015 • Qinqing Zheng, John Lafferty
We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs.
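A toy instance under random Gaussian measurements, in the Burer-Monteiro spirit of replacing the PSD unknown $X$ with $UU^\top$ and descending on $U$; the measurement model, sizes, and step size are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, p = 20, 2, 300
A = rng.normal(size=(p, n, n))
A = (A + A.transpose(0, 2, 1)) / 2               # symmetric sensing matrices
U_true = rng.normal(size=(n, r))
b = np.einsum('pij,ij->p', A, U_true @ U_true.T) # linear measurements of X
U = rng.normal(size=(n, r)) * 0.1
for _ in range(1000):
    res = np.einsum('pij,ij->p', A, U @ U.T) - b
    grad = 4 / p * np.einsum('p,pij->ij', res, A) @ U  # gradient of the mean squared residual
    U -= 0.002 * grad
```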
no code implementations • 7 Nov 2014 • Min Xu, Minhua Chen, John Lafferty
We study the problem of variable selection in convex nonparametric regression.
no code implementations • NeurIPS 2014 • Yuancheng Zhu, John Lafferty
A central result in statistical theory is Pinsker's theorem, which characterizes the minimax rate in the normal means model of nonparametric estimation.
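For reference, the setting is the Gaussian sequence model with a Sobolev ellipsoid parameter space (a standard statement, with constants omitted):

$$
y_i = \theta_i + \epsilon z_i, \quad z_i \overset{\text{iid}}{\sim} N(0,1), \qquad
\Theta(\alpha, C) = \Big\{ \theta : \sum_{i \ge 1} i^{2\alpha} \theta_i^2 \le C^2 \Big\},
$$

$$
\inf_{\hat\theta} \sup_{\theta \in \Theta(\alpha, C)} \mathbb{E}\,\|\hat\theta - \theta\|_2^2 \;\sim\; c(\alpha, C)\, \epsilon^{4\alpha/(2\alpha+1)} \quad \text{as } \epsilon \to 0,
$$

where Pinsker's theorem identifies the exact constant $c(\alpha, C)$ and shows it is attained by a linear shrinkage estimator.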
no code implementations • NeurIPS 2014 • Zhe Liu, John Lafferty
We combine the ideas behind trees and Gaussian graphical models to form a new nonparametric family of graphical models.
no code implementations • 16 Feb 2019 • Michihiro Yasunaga, John Lafferty
Scientific documents rely on both mathematics and text to communicate ideas.
Ranked #1 on Topic Models on Arxiv HEP-TH citation graph (Topic Coherence@50 metric)
no code implementations • 19 Jul 2019 • Dana Yang, John Lafferty, David Pollard
Quantile regression is a tool for learning conditional distributions.
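Concretely, the $\tau$-th conditional quantile is targeted by minimizing the check (pinball) loss $\rho_\tau(u) = u(\tau - \mathbf{1}\{u < 0\})$. A toy linear fit by subgradient descent, with made-up data and step size:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = 2 * x + rng.normal(scale=0.5, size=500)
tau, a, b = 0.9, 0.0, 0.0                  # fit the 0.9 conditional quantile
for _ in range(2000):
    u = y - (a * x + b)                    # residuals
    g = np.where(u < 0, 1 - tau, -tau)     # subgradient of the loss in the prediction
    a -= 0.05 * np.mean(g * x)
    b -= 0.05 * np.mean(g)
```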
no code implementations • 20 May 2020 • Chao Gao, John Lafferty
A new type of robust estimation problem is introduced where the goal is to recover a statistical model that has been corrupted after it has been estimated from data.
no code implementations • 26 Jun 2020 • Tuo Zhao, Han Liu, Kathryn Roeder, John Lafferty, Larry Wasserman
We describe an R package named huge, which provides easy-to-use functions for estimating high-dimensional undirected graphs from data.
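huge itself is R; as a rough Python analogue (a different estimator, shown only to illustrate the task), scikit-learn's graphical lasso also recovers a sparse inverse covariance whose zero pattern encodes the undirected graph:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # n samples, d variables
model = GraphicalLassoCV().fit(X)
graph = np.abs(model.precision_) > 1e-4    # adjacency from the nonzero pattern
```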
no code implementations • 2 Oct 2017 • Chao Gao, John Lafferty
We study the problem of testing for community structure in networks using relations between the observed frequencies of small subgraphs.
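A toy version of the idea (not the paper's statistic): compare the observed triangle count to its expectation under an Erdős–Rényi null with matched edge density; a significant excess suggests community structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = np.triu(rng.uniform(size=(n, n)) < 0.1, 1)
A = (A + A.T).astype(float)                      # symmetric, no self-loops
p_hat = A.sum() / (n * (n - 1))                  # empirical edge density
triangles = np.trace(A @ A @ A) / 6              # each triangle counted six times
expected = p_hat ** 3 * n * (n - 1) * (n - 2) / 6
print(triangles, expected)
```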
Methodology • Social and Information Networks • Statistics Theory • Applications
no code implementations • NeurIPS 2021 • Ganlin Song, Ruitu Xu, John Lafferty
In this paper we study the mathematical properties of the feedback alignment procedure by analyzing convergence and alignment for two-layer networks under squared error loss.
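The procedure itself is simple to state: the backward pass replaces the transpose of the forward weights with a fixed random matrix. A minimal two-layer linear sketch under squared loss (dimensions and step size illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, m, n = 10, 20, 5, 200
X = rng.normal(size=(n, d))
Y = X @ rng.normal(size=(d, m))          # linear teacher
W1 = rng.normal(size=(d, h)) * 0.1
W2 = rng.normal(size=(h, m)) * 0.1
B = rng.normal(size=(h, m)) * 0.1        # fixed random feedback, never trained
for _ in range(1000):
    H = X @ W1
    E = H @ W2 - Y                       # output error
    W2 -= 0.05 * H.T @ E / n             # usual delta rule at the top layer
    W1 -= 0.05 * X.T @ (E @ B.T) / n     # B replaces W2.T in the backward pass
```

The alignment phenomenon is that W2 tends to rotate toward B during training, so the fixed random feedback increasingly approximates the true gradient direction.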
no code implementations • 26 May 2022 • Leon Lufkin, Ashish Puri, Ganlin Song, Xinyi Zhong, John Lafferty
Local patterns of excitation and inhibition that can generate neural waves are studied as a computational mechanism underlying the organization of neuronal tunings.
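A generic rate-model sketch of the mechanism (not the paper's model): a ring of units with short-range excitation and longer-range inhibition, the classic "Mexican hat" profile that can support spatial patterns and waves.

```python
import numpy as np

n = 200
idx = np.arange(n)
dist = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(dist, n - dist)                 # distance on a ring
W = np.exp(-dist**2 / (2 * 3**2)) \
    - 0.5 * np.exp(-dist**2 / (2 * 10**2))        # local excitation, broad inhibition
r = np.random.default_rng(0).uniform(size=n) * 0.1
for _ in range(300):                              # Euler steps of dr/dt = -r + tanh(W r)
    r += 0.1 * (-r + np.tanh(W @ r))
```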
no code implementations • 12 Sep 2023 • Taylor W. Webb, Steven M. Frankland, Awni Altabaa, Kamesh Krishnamurthy, Declan Campbell, Jacob Russin, Randall O'Reilly, John Lafferty, Jonathan D. Cohen
A central challenge for cognitive science is to explain how abstract concepts are acquired from limited experience.
no code implementations • 13 Feb 2024 • Awni Altabaa, John Lafferty
Inner products of neural network feature maps arise in a wide variety of machine learning frameworks as a method of modeling relations between inputs.
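The basic object is a relation matrix $R$ with $R_{ij} = \langle \phi(x_i), \psi(x_j) \rangle$ for two learned feature maps; attention scores are the special case where $\phi$ and $\psi$ are the query and key maps. A sketch with random single-layer maps standing in for learned networks:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 6, 8, 16
X = rng.normal(size=(n, d))
A = rng.normal(size=(d, k)) / np.sqrt(d)   # feature map for the first argument
B = rng.normal(size=(d, k)) / np.sqrt(d)   # feature map for the second argument
phi = np.maximum(X @ A, 0.0)
psi = np.maximum(X @ B, 0.0)
R = phi @ psi.T                            # R[i, j] = <phi(x_i), psi(x_j)>
```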