Search Results for author: Feng Ruan

Found 14 papers, 5 papers with code

Nonparametric Modern Hopfield Models

1 code implementation · 5 Apr 2024 · Jerry Yao-Chieh Hu, Bo-Yu Chen, Dennis Wu, Feng Ruan, Han Liu

We present a nonparametric construction for deep learning compatible modern Hopfield models and utilize this framework to debut an efficient variant.
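
For orientation, here is a minimal sketch of the standard dense associative (modern Hopfield) retrieval update that such models build on. The paper's nonparametric construction generalizes this; the function name and toy data below are illustrative, not the paper's.

```python
import numpy as np

def hopfield_retrieve(query, memories, beta=1.0, n_steps=3):
    """Standard modern Hopfield retrieval: repeatedly move the query
    toward a softmax-weighted combination of stored patterns.
    `memories` has shape (n_patterns, dim)."""
    x = query.copy()
    for _ in range(n_steps):
        scores = beta * memories @ x             # similarity to each pattern
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights /= weights.sum()
        x = memories.T @ weights                 # convex combination of patterns
    return x

# toy usage: recover a stored pattern from a noisy query
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 16))
noisy = M[2] + 0.1 * rng.standard_normal(16)
print(np.argmax(M @ hopfield_retrieve(noisy, M, beta=8.0)))  # expected: 2
```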

Kernel Learning in Ridge Regression "Automatically" Yields Exact Low Rank Solution

1 code implementation · 18 Oct 2023 · Yunlu Chen, Yang Li, Keli Liu, Feng Ruan

Assuming that the covariates have nonzero explanatory power for the response only through a low dimensional subspace (central mean subspace), we find that the global minimizer of the finite sample kernel learning objective is also low rank with high probability.

regression
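
A minimal sketch of the kind of objective the result concerns, assuming a Gaussian-family kernel whose metric is parametrized by a matrix U (the parametrization and all names are illustrative, not the paper's notation). Minimizing jointly over U and the ridge fit is what yields an exactly low-rank U with high probability.

```python
import numpy as np

def kernel_learning_ridge_objective(U, X, y, lam):
    """Kernel ridge objective with a learned-metric Gaussian kernel
        k_U(x, x') = exp(-||U (x - x')||^2 / 2),
    evaluated at the ridge fit for the given U.  The paper's result:
    when the response depends on X only through a low-dimensional
    central mean subspace, the minimizing U is exactly low rank w.h.p."""
    diffs = X[:, None, :] - X[None, :, :]                    # (n, n, d)
    sq = np.einsum('ijd,kd,ke,ije->ij', diffs, U, U, diffs)  # ||U(x_i - x_j)||^2
    K = np.exp(-0.5 * sq)
    n = len(y)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)      # ridge coefficients
    return np.sum((y - K @ alpha) ** 2) / n + lam * alpha @ K @ alpha
```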

Universality of max-margin classifiers

no code implementations · 29 Sep 2023 · Andrea Montanari, Feng Ruan, Basil Saeed, Youngtak Sohn

Working in the high-dimensional regime in which the number of features $p$, the number of samples $n$ and the input dimension $d$ (in the nonlinear featurization setting) diverge, with ratios of order one, we prove a universality result establishing that the asymptotic behavior is completely determined by the expected covariance of feature vectors and by the covariance between features and labels.

Binary Classification
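
A toy numerical check of what the universality statement predicts, not the theorem itself: the margin attained on nonlinear random features should roughly match the margin on Gaussian features with the same mean and covariance. The tanh featurization, the shapes, and the use of a large-C SVM as a hard-margin proxy are all my choices.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d, p = 200, 100, 400                          # proportional-regime shapes
W = rng.standard_normal((p, d)) / np.sqrt(d)
Z = rng.standard_normal((n, d))
y = np.sign(Z @ rng.standard_normal(d))

X_feat = np.tanh(Z @ W.T)                        # nonlinear featurization
mu, cov = X_feat.mean(0), np.cov(X_feat, rowvar=False)
X_gauss = rng.multivariate_normal(mu, cov, size=n, check_valid='ignore')

for X in (X_feat, X_gauss):                      # margins should be comparable
    clf = SVC(kernel='linear', C=1e6).fit(X, y)  # large C approximates hard margin
    w = clf.coef_.ravel()
    margin = (y * clf.decision_function(X)).min() / np.linalg.norm(w)
    print(round(float(margin), 3))
```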

FastCPH: Efficient Survival Analysis for Neural Networks

2 code implementations · 21 Aug 2022 · Xuelin Yang, Louis Abraham, Sejin Kim, Petr Smirnov, Feng Ruan, Benjamin Haibe-Kains, Robert Tibshirani

The Cox proportional hazards model is a canonical method in survival analysis for predicting a patient's life expectancy from clinical or genetic covariates; in its original form it is a linear model.

Survival Analysis
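
For context, a minimal sketch of the Cox negative log partial likelihood as a loss on arbitrary model outputs, which is the quantity a method like FastCPH must evaluate efficiently for neural networks. This version assumes no tied event times (tie handling is part of what the paper addresses); the function name is mine.

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk_scores, times, events):
    """Negative log partial likelihood.  `risk_scores` are model outputs
    f(x_i): linear in the classical Cox model, a neural net in
    FastCPH-style extensions.  `events` is 1 for an observed event,
    0 for censoring; assumes no tied event times."""
    order = np.argsort(-times)                # sort by descending time
    s = risk_scores[order]
    observed = events[order] == 1
    log_risk = np.logaddexp.accumulate(s)     # log sum over the risk set {j: t_j >= t_i}
    return -np.sum((s - log_risk)[observed])
```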

On the Self-Penalization Phenomenon in Feature Selection

no code implementations · 12 Oct 2021 · Michael I. Jordan, Keli Liu, Feng Ruan

We describe an implicit sparsity-inducing mechanism based on minimization over a family of kernels: \begin{equation*} \min_{\beta, f}~\widehat{\mathbb{E}}[L(Y, f(\beta^{1/q} \odot X))] + \lambda_n \|f\|_{\mathcal{H}_q}^2~~\text{subject to}~~\beta \ge 0, \end{equation*} where $L$ is the loss, $\odot$ is coordinate-wise multiplication and $\mathcal{H}_q$ is the reproducing kernel Hilbert space based on the kernel $k_q(x, x') = h(\|x-x'\|_q^q)$, where $\|\cdot\|_q$ is the $\ell_q$ norm.

feature selection
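
For squared loss, the inner minimization over $f$ in the display above can be profiled out in closed form, leaving an objective in $\beta$ alone. A sketch under the illustrative choice $h(t) = e^{-t}$ (the paper allows a family of such $h$):

```python
import numpy as np

def profiled_objective(beta, X, y, lam, q=1.0):
    """Value of the self-penalizing objective after profiling out the
    kernel ridge fit f, using k_q(x, x') = exp(-||x - x'||_q^q).
    Gradient descent over beta >= 0 tends to drive coordinates of beta
    with no explanatory power to zero (the self-penalization
    phenomenon), even with no explicit sparsity penalty on beta."""
    Xs = beta ** (1.0 / q) * X                # coordinate-wise beta^(1/q) scaling
    K = np.exp(-np.sum(np.abs(Xs[:, None] - Xs[None, :]) ** q, axis=-1))
    n = len(y)
    # profiled kernel-ridge value: lam * y^T (K + n*lam*I)^{-1} y
    return lam * y @ np.linalg.solve(K + n * lam * np.eye(n), y)
```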

Bandit Learning in Decentralized Matching Markets

no code implementations · 14 Dec 2020 · Lydia T. Liu, Feng Ruan, Horia Mania, Michael I. Jordan

We study two-sided matching markets in which one side of the market (the players) does not have a priori knowledge about its preferences for the other side (the arms) and is required to learn its preferences from experience.
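
Purely as illustration of the setting (this is not the paper's algorithm), a toy decentralized round: each player runs UCB over the arms, and when several players propose to the same arm, the arm accepts its most-preferred proposer and the rest receive nothing that round.

```python
import numpy as np

rng = np.random.default_rng(0)
n_players, n_arms, T = 3, 3, 2000
true_means = rng.uniform(0.2, 0.9, (n_players, n_arms))
arm_prefs = np.argsort(-rng.random((n_arms, n_players)), axis=1)  # arms' rankings

counts = np.ones((n_players, n_arms))
means = np.zeros((n_players, n_arms))
for t in range(1, T + 1):
    ucb = means + np.sqrt(2 * np.log(t) / counts)
    proposals = ucb.argmax(axis=1)                 # each player picks an arm
    for a in range(n_arms):
        contenders = np.flatnonzero(proposals == a)
        if contenders.size == 0:
            continue
        rank = {p: r for r, p in enumerate(arm_prefs[a])}
        winner = min(contenders, key=rank.get)     # arm resolves the conflict
        reward = rng.binomial(1, true_means[winner, a])
        counts[winner, a] += 1
        means[winner, a] += (reward - means[winner, a]) / counts[winner, a]
```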

A Self-Penalizing Objective Function for Scalable Interaction Detection

no code implementations · 24 Nov 2020 · Keli Liu, Feng Ruan

The trick is to maximize a class of parametrized nonparametric dependence measures which we call metric learning objectives; the landscape of these nonconvex objective functions is sensitive to interactions but the objectives themselves do not explicitly model interactions.

Metric Learning · Variable Selection
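
As a stand-in for the metric learning objectives described above (the paper's actual family differs), here is an HSIC-style dependence measure between weighted features and the response; maximizing it over the weights can reward coordinates that matter only jointly, even though the objective contains no explicit interaction terms.

```python
import numpy as np

def weighted_dependence(beta, X, y):
    """HSIC-style dependence between beta-weighted features and y.
    The Gaussian kernel on beta * X is sensitive to joint (interaction)
    structure in the landscape over beta, without modeling it."""
    n = len(y)
    Xw = beta * X
    K = np.exp(-np.sum((Xw[:, None] - Xw[None, :]) ** 2, axis=-1))
    L = np.equal.outer(y, y).astype(float)    # label kernel for discrete y
    H = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```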

Is Temporal Difference Learning Optimal? An Instance-Dependent Analysis

no code implementations · 16 Mar 2020 · Koulik Khamaru, Ashwin Pananjady, Feng Ruan, Martin J. Wainwright, Michael I. Jordan

We address the problem of policy evaluation in discounted Markov decision processes, and provide instance-dependent guarantees on the $\ell_\infty$-error under a generative model.
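
For reference, the procedure under study in its simplest tabular form, with generative-model sampling; the stepsize schedule below is one common choice, and the paper's question is precisely whether such schemes attain instance-dependent $\ell_\infty$ lower bounds.

```python
import numpy as np

def td0_policy_evaluation(sample_step, n_states, gamma, n_iters):
    """Tabular TD(0) with a generative model: `sample_step(s)` returns
    one (next_state, reward) draw from state s under the policy."""
    V = np.zeros(n_states)
    for k in range(1, n_iters + 1):
        alpha = 1.0 / (1.0 + k)               # a common stepsize schedule
        for s in range(n_states):
            s_next, r = sample_step(s)
            V[s] += alpha * (r + gamma * V[s_next] - V[s])  # TD(0) update
    return V
```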

LassoNet: A Neural Network with Feature Sparsity

2 code implementations · 29 Jul 2019 · Ismael Lemhadri, Feng Ruan, Louis Abraham, Robert Tibshirani

Unlike other approaches to feature selection for neural nets, our method uses a modified objective function with constraints, and so integrates feature selection with the parameter learning directly.

feature selection · regression
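
A sketch of the shape of the constrained objective (names and conventions below are mine; the released implementations optimize it with a specialized hierarchical proximal step): an $\ell_1$-penalized linear skip connection $\theta$ is coupled to the network's first-layer weights so that zeroing $\theta_j$ removes feature $j$ from the entire model.

```python
import numpy as np

def lassonet_objective(X, y, theta, W1, net, lam, M):
    """LassoNet-style objective: squared loss on (skip connection +
    network output) plus an l1 penalty on theta, subject to the
    coupling constraint ||W1[:, j]||_inf <= M * |theta_j| for every
    feature j.  `net` maps inputs to scalar outputs and has first-layer
    weights W1, with column j acting on input feature j."""
    pred = X @ theta + net(X)
    loss = np.mean((y - pred) ** 2) + lam * np.abs(theta).sum()
    feasible = np.all(np.abs(W1).max(axis=0) <= M * np.abs(theta))
    return loss, feasible
```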

Asymptotic Optimality in Stochastic Optimization

no code implementations · 16 Dec 2016 · John Duchi, Feng Ruan

We study local complexity measures for stochastic convex optimization problems, providing a local minimax theory analogous to that of Hájek and Le Cam for classical statistical problems.

Stochastic Optimization
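
The kind of procedure that local minimax theories of this type certify as asymptotically optimal is stochastic gradient descent with Polyak-Ruppert iterate averaging; a minimal sketch with an illustrative stepsize schedule:

```python
import numpy as np

def averaged_sgd(grad_sample, x0, n_steps, exponent=0.6):
    """SGD with Polyak-Ruppert averaging: run SGD with a slowly
    decaying stepsize and return the running average of the iterates,
    the estimator covered by classical asymptotic optimality results.
    `grad_sample(x)` returns one stochastic gradient at x."""
    x = np.array(x0, dtype=float)
    x_bar = np.zeros_like(x)
    for k in range(1, n_steps + 1):
        x = x - k ** (-exponent) * grad_sample(x)  # stochastic gradient step
        x_bar += (x - x_bar) / k                   # running iterate average
    return x_bar
```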

Sparse Recovery via Differential Inclusions

1 code implementation · 30 Jun 2014 · Stanley Osher, Feng Ruan, Jiechao Xiong, Yuan Yao, Wotao Yin

In this paper, we recover sparse signals from their noisy linear measurements by solving nonlinear differential inclusions, which is based on the notion of inverse scale space (ISS) developed in applied mathematics.
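
A standard discretization of the ISS differential inclusion is the linearized Bregman iteration; a minimal sketch with illustrative parameter choices, whose early stopping traces out a sparse-to-dense regularization path:

```python
import numpy as np

def linearized_bregman(A, y, kappa=5.0, n_iters=2000):
    """Linearized Bregman iteration, a discretization of the inverse
    scale space (ISS) dynamics for l1 sparse recovery:
        x_k = kappa * shrink(z_k, 1),  z_{k+1} = z_k - tau * A^T (A x_k - y)."""
    tau = 1.0 / (kappa * np.linalg.norm(A, 2) ** 2)  # conservative stepsize
    z = np.zeros(A.shape[1])
    x = np.zeros_like(z)
    for _ in range(n_iters):
        x = kappa * np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)  # shrink
        z -= tau * (A.T @ (A @ x - y))
    return x
```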
