1 code implementation • 5 Apr 2024 • Jerry Yao-Chieh Hu, Bo-Yu Chen, Dennis Wu, Feng Ruan, Han Liu
We present a nonparametric construction for deep learning compatible modern Hopfield models and utilize this framework to debut an efficient variant.
no code implementations • 1 Apr 2024 • Xiang Li, Feng Ruan, Huiyuan Wang, Qi Long, Weijie J. Su
In particular, we derive optimal detection rules for these watermarks under our framework.
1 code implementation • 18 Oct 2023 • Yunlu Chen, Yang Li, Keli Liu, Feng Ruan
Assuming that the covariates have nonzero explanatory power for the response only through a low dimensional subspace (central mean subspace), we find that the global minimizer of the finite sample kernel learning objective is also low rank with high probability.
no code implementations • 29 Sep 2023 • Andrea Montanari, Feng Ruan, Basil Saeed, Youngtak Sohn
Working in the high-dimensional regime in which the number of features $p$, the number of samples $n$ and the input dimension $d$ (in the nonlinear featurization setting) diverge, with ratios of order one, we prove a universality result establishing that the asymptotic behavior is completely determined by the expected covariance of feature vectors and by the covariance between features and labels.
2 code implementations • 21 Aug 2022 • Xuelin Yang, Louis Abraham, Sejin Kim, Petr Smirnov, Feng Ruan, Benjamin Haibe-Kains, Robert Tibshirani
The Cox proportional hazards model is a canonical method in survival analysis for prediction of the life expectancy of a patient given clinical or genetic covariates -- it is a linear model in its original form.
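To make the linear Cox model concrete, here is a minimal NumPy sketch of its negative log partial likelihood (the quantity minimized when fitting); the function name is hypothetical and, for simplicity, ties in event times are assumed absent:

```python
import numpy as np

def cox_neg_log_partial_likelihood(beta, X, times, events):
    """Negative log partial likelihood of the linear Cox model.

    X: (n, p) covariate matrix; times: event/censoring times;
    events: 1 if the event was observed, 0 if censored.
    Assumes no tied event times for simplicity.
    """
    eta = X @ beta                        # linear risk scores
    order = np.argsort(times)             # process subjects in time order
    eta, events = eta[order], events[order]
    # log sum_{j : t_j >= t_i} exp(eta_j), via a reversed cumulative sum
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
    return -np.sum(events * (eta - log_risk))
```

With all coefficients zero, every subject has equal risk, so the loss reduces to the log of the product of risk-set sizes.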
no code implementations • 12 Oct 2021 • Michael I. Jordan, Keli Liu, Feng Ruan
We describe an implicit sparsity-inducing mechanism based on minimization over a family of kernels: \begin{equation*} \min_{\beta, f}~\widehat{\mathbb{E}}[L(Y, f(\beta^{1/q} \odot X))] + \lambda_n \|f\|_{\mathcal{H}_q}^2~~\text{subject to}~~\beta \ge 0, \end{equation*} where $L$ is the loss, $\odot$ is coordinate-wise multiplication and $\mathcal{H}_q$ is the reproducing kernel Hilbert space based on the kernel $k_q(x, x') = h(\|x-x'\|_q^q)$, where $\|\cdot\|_q$ is the $\ell_q$ norm.
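A small sketch of the kernel family above, using $h(t) = e^{-t}$ as one concrete choice (an assumption; the abstract leaves $h$ general) together with the coordinate-wise rescaling $\beta^{1/q} \odot x$ that drives the implicit sparsity:

```python
import numpy as np

def kq(x, xp, q=1.0, h=lambda t: np.exp(-t)):
    """k_q(x, x') = h(||x - x'||_q^q); h(t) = exp(-t) is one admissible choice."""
    return h(np.sum(np.abs(x - xp) ** q))

def scaled_kernel(beta, x, xp, q=1.0):
    """Kernel evaluated on the rescaled inputs beta^{1/q} ⊙ x, as in the objective."""
    s = beta ** (1.0 / q)
    return kq(s * x, s * xp, q=q)
```

Note the mechanism: any coordinate with $\beta_j = 0$ is zeroed out of the rescaled inputs, so the kernel (and hence $f$) cannot depend on that feature.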
no code implementations • 17 Jun 2021 • Feng Ruan, Keli Liu, Michael I. Jordan
Kernel-based feature selection is an important tool in nonparametric statistics.
no code implementations • 14 Dec 2020 • Lydia T. Liu, Feng Ruan, Horia Mania, Michael I. Jordan
We study two-sided matching markets in which one side of the market (the players) does not have a priori knowledge about its preferences for the other side (the arms) and is required to learn its preferences from experience.
no code implementations • 24 Nov 2020 • Keli Liu, Feng Ruan
The trick is to maximize a class of parametrized nonparametric dependence measures which we call metric learning objectives; the landscape of these nonconvex objective functions is sensitive to interactions but the objectives themselves do not explicitly model interactions.
no code implementations • 16 Mar 2020 • Koulik Khamaru, Ashwin Pananjady, Feng Ruan, Martin J. Wainwright, Michael. I. Jordan
We address the problem of policy evaluation in discounted Markov decision processes, and provide instance-dependent guarantees on the $\ell_\infty$-error under a generative model.
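For intuition, the value of a fixed policy in a discounted MDP satisfies the linear Bellman equation $V = r + \gamma P V$; a plug-in estimator substitutes an empirical transition matrix estimated from the generative model. A minimal sketch (function name is hypothetical):

```python
import numpy as np

def policy_value(P, r, gamma):
    """Value of a fixed policy: solve (I - gamma * P) V = r,
    where P is the policy's state-transition matrix and r its reward vector."""
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P, r)
```

The instance-dependent guarantees in the paper bound the $\ell_\infty$ distance between the value computed from the estimated transitions and the true value.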
no code implementations • 5 Nov 2019 • Andrea Montanari, Feng Ruan, Youngtak Sohn, Jun Yan
They achieve this by learning nonlinear representations of the inputs that map the data into linearly separable classes.
2 code implementations • 29 Jul 2019 • Ismael Lemhadri, Feng Ruan, Louis Abraham, Robert Tibshirani
Unlike other approaches to feature selection for neural nets, our method uses a modified objective function with constraints, and so directly integrates feature selection with parameter learning.
no code implementations • 16 Dec 2016 • John Duchi, Feng Ruan
We study local complexity measures for stochastic convex optimization problems, providing a local minimax theory analogous to that of H\'{a}jek and Le Cam for classical statistical problems.
1 code implementation • 30 Jun 2014 • Stanley Osher, Feng Ruan, Jiechao Xiong, Yuan Yao, Wotao Yin
In this paper, we recover sparse signals from their noisy linear measurements by solving nonlinear differential inclusions, an approach based on the notion of inverse scale space (ISS) developed in applied mathematics.
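A widely used discretization of the inverse scale space dynamics is the linearized Bregman iteration; the following sketch (parameter values are illustrative assumptions, not the paper's settings) recovers a sparse signal $x$ from measurements $y \approx Ax$:

```python
import numpy as np

def shrink(z, lam):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def linearized_bregman(A, y, kappa=5.0, step=None, n_iter=2000):
    """Linearized Bregman iteration, a standard discretization of the
    inverse scale space dynamics for sparse recovery from y ≈ A x."""
    if step is None:
        step = 1.0 / (kappa * np.linalg.norm(A, 2) ** 2)
    z = np.zeros(A.shape[1])  # dual variable driven by the residual
    x = np.zeros(A.shape[1])  # sparse primal iterate
    for _ in range(n_iter):
        z += step * A.T @ (y - A @ x)
        x = kappa * shrink(z, 1.0)
    return x
```

Coordinates enter the support only after their dual variable crosses the threshold, mirroring how the continuous ISS path activates variables over time.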