Search Results for author: Gen Li

Found 45 papers, 9 papers with code

A Non-Asymptotic Framework for Approximate Message Passing in Spiked Models

no code implementations 5 Aug 2022 Gen Li, Yuting Wei

As two concrete consequences of the proposed analysis recipe: (i) when solving $\mathbb{Z}_2$ synchronization, we predict the behavior of spectrally initialized AMP for up to $O\big(\frac{n}{\mathrm{poly}\log n}\big)$ iterations, showing that the algorithm succeeds without the need of a subsequent refinement stage (as conjectured recently by Celentano et al., 2021); (ii) we characterize the non-asymptotic behavior of AMP in sparse PCA (in the spiked Wigner model) for a broad range of signal-to-noise ratios.
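
As a concrete illustration of the recursion being analyzed, the following minimal Python sketch runs AMP with spectral initialization for $\mathbb{Z}_2$ synchronization in a spiked Wigner model. The dimension, signal strength, and the uncalibrated tanh denoiser are all toy assumptions; this is not the paper's calibrated procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    n, lam = 2000, 2.0                         # toy dimension and signal strength
    x_star = rng.choice([-1.0, 1.0], size=n)   # Z_2 signal
    G = rng.normal(size=(n, n))
    W = (G + G.T) / np.sqrt(2 * n)             # Wigner noise, entries ~ N(0, 1/n)
    Y = (lam / n) * np.outer(x_star, x_star) + W

    # spectral initialization: rescaled leading eigenvector of Y
    x = np.sqrt(n) * np.linalg.eigh(Y)[1][:, -1]

    eta_prev = np.zeros(n)
    for t in range(20):
        eta = np.tanh(x)                       # posterior-mean denoiser for +/-1 entries
        b = np.mean(1.0 - eta ** 2)            # Onsager correction coefficient
        x, eta_prev = Y @ eta - b * eta_prev, eta

    print(abs(np.sign(x) @ x_star) / n)        # overlap with the truth, up to global sign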

Real Image Restoration via Structure-preserving Complementarity Attention

no code implementations 28 Jul 2022 Yuanfan Zhang, Gen Li, Lei Sun

Since convolutional neural networks perform well in learning generalizable image priors from large-scale data, these models have been widely used in image denoising tasks.

Image Denoising Image Restoration +1

FaceFormer: Scale-aware Blind Face Restoration with Transformers

no code implementations 20 Jul 2022 Aijin Li, Gen Li, Lei Sun, Xintao Wang

Blind face restoration usually encounters face inputs of diverse scales, especially in the real world.

Blind Face Restoration

AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos

no code implementations 14 Jun 2022 Yanze Wu, Xintao Wang, Gen Li, Ying Shan

This paper studies the problem of real-world video super-resolution (VSR) for animation videos, and reveals three key improvements for practical animation VSR.

Video Super-Resolution

VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder

1 code implementation 13 May 2022 YuChao Gu, Xintao Wang, Liangbin Xie, Chao Dong, Gen Li, Ying Shan, Ming-Ming Cheng

Equipped with the VQ codebook as a facial detail dictionary and the parallel decoder design, the proposed VQFR can largely enhance the restored quality of facial details while keeping fidelity comparable to previous methods.

Blind Face Restoration Quantization
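
For readers unfamiliar with vector quantization, the core codebook lookup (replacing each encoder feature with its nearest dictionary entry before decoding) can be sketched in a few lines of Python/PyTorch; the codebook size and feature dimension below are invented, and this is not the authors' implementation.

    import torch

    codebook = torch.randn(1024, 256)   # assumed: 1024 codebook entries of dimension 256
    z = torch.randn(64, 256)            # assumed: 64 encoder feature vectors
    d = torch.cdist(z, codebook)        # pairwise Euclidean distances
    z_q = codebook[d.argmin(dim=1)]     # nearest-entry lookup; z_q is fed to the decoder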

NTIRE 2022 Challenge on Efficient Super-Resolution: Methods and Results

2 code implementations 11 May 2022 Yawei Li, Kai Zhang, Radu Timofte, Luc van Gool, Fangyuan Kong, Mingxi Li, Songwei Liu, Zongcai Du, Ding Liu, Chenhui Zhou, Jingyi Chen, Qingrui Han, Zheyuan Li, Yingqi Liu, Xiangyu Chen, Haoming Cai, Yu Qiao, Chao Dong, Long Sun, Jinshan Pan, Yi Zhu, Zhikai Zong, Xiaoxiao Liu, Zheng Hui, Tao Yang, Peiran Ren, Xuansong Xie, Xian-Sheng Hua, Yanbo Wang, Xiaozhong Ji, Chuming Lin, Donghao Luo, Ying Tai, Chengjie Wang, Zhizhong Zhang, Yuan Xie, Shen Cheng, Ziwei Luo, Lei Yu, Zhihong Wen, Qi Wu, Youwei Li, Haoqiang Fan, Jian Sun, Shuaicheng Liu, Yuanfei Huang, Meiguang Jin, Hua Huang, Jing Liu, Xinjian Zhang, Yan Wang, Lingshun Long, Gen Li, Yuanfan Zhang, Zuowei Cao, Lei Sun, Panaetov Alexander, Yucong Wang, Minjie Cai, Li Wang, Lu Tian, Zheyuan Wang, Hongbing Ma, Jie Liu, Chao Chen, Yidong Cai, Jie Tang, Gangshan Wu, Weiran Wang, Shirui Huang, Honglei Lu, Huan Liu, Keyan Wang, Jun Chen, Shi Chen, Yuchun Miao, Zimo Huang, Lefei Zhang, Mustafa Ayazoğlu, Wei Xiong, Chengyi Xiong, Fei Wang, Hao Li, Ruimian Wen, Zhijing Yang, Wenbin Zou, Weixin Zheng, Tian Ye, Yuncheng Zhang, Xiangzhen Kong, Aditya Arora, Syed Waqas Zamir, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Dandan Gao, Dengwen Zhou, Qian Ning, Jingzhu Tang, Han Huang, YuFei Wang, Zhangheng Peng, Haobo Li, Wenxue Guan, Shenghua Gong, Xin Li, Jun Liu, Wanjun Wang, Dengwen Zhou, Kun Zeng, Hanjiang Lin, Xinyu Chen, Jinsheng Fang

The aim was to design a network for single-image super-resolution that improves efficiency, measured by several metrics including runtime, parameters, FLOPs, activations, and memory consumption, while at least maintaining a PSNR of 29.00 dB on the DIV2K validation set.

Image Super-Resolution Single Image Super Resolution

Settling the Sample Complexity of Model-Based Offline Reinforcement Learning

no code implementations 11 Apr 2022 Gen Li, Laixi Shi, Yuxin Chen, Yuejie Chi, Yuting Wei

We demonstrate that the model-based (or "plug-in") approach achieves minimax-optimal sample complexity without burn-in cost for tabular Markov decision processes (MDPs).

Offline RL reinforcement-learning
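
The "plug-in" idea can be sketched as follows: build an empirical MDP from logged transitions and run value iteration on it. This minimal Python sketch omits the penalization and analysis details of the paper; the function name and constants are placeholders.

    import numpy as np

    def plug_in_q(dataset, S, A, gamma=0.9, iters=200):
        """Value iteration on the empirical MDP built from logged (s, a, r, s') tuples."""
        counts = np.zeros((S, A, S))
        rew_sum = np.zeros((S, A))
        for s, a, r, s2 in dataset:
            counts[s, a, s2] += 1
            rew_sum[s, a] += r
        visits = counts.sum(axis=2)
        # unvisited (s, a) pairs simply keep zero estimates in this sketch
        P_hat = np.divide(counts, visits[:, :, None],
                          out=np.zeros_like(counts), where=visits[:, :, None] > 0)
        r_hat = np.divide(rew_sum, visits, out=np.zeros_like(rew_sum), where=visits > 0)
        Q = np.zeros((S, A))
        for _ in range(iters):
            Q = r_hat + gamma * P_hat @ Q.max(axis=1)   # Bellman optimality update
        return Q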

The Efficacy of Pessimism in Asynchronous Q-Learning

no code implementations 14 Mar 2022 Yuling Yan, Gen Li, Yuxin Chen, Jianqing Fan

This paper is concerned with the asynchronous form of Q-learning, which applies a stochastic approximation scheme to Markovian data samples.

Q-Learning

Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity

no code implementations 28 Feb 2022 Laixi Shi, Gen Li, Yuting Wei, Yuxin Chen, Yuejie Chi

Offline or batch reinforcement learning seeks to learn a near-optimal policy using historical data, without active exploration of the environment.

Offline RL Q-Learning +1
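
A schematic of the pessimism principle: penalize the Bellman target by a bonus that shrinks with the visit count, so that poorly covered state-action pairs are valued conservatively. The learning rate and bonus constant below are placeholders, not the paper's choices.

    import numpy as np

    def lcb_q_step(Q, N, s, a, r, s2, gamma=0.9, c=1.0):
        """One pessimistic (LCB) Q-learning update on a logged transition (schematic)."""
        N[s, a] += 1
        lr = 1.0 / N[s, a]                        # placeholder learning rate
        bonus = c / np.sqrt(N[s, a])              # penalty shrinks as coverage grows
        target = r + gamma * Q[s2].max() - bonus  # subtract, rather than add, the bonus
        Q[s, a] += lr * (target - Q[s, a])
        return Q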

SuperStyleNet: Deep Image Synthesis with Superpixel Based Style Encoder

1 code implementation 17 Dec 2021 Jonghyun Kim, Gen Li, Cheolkon Jung, Joongkyu Kim

First, we extract style codes directly from the original image based on superpixels, in order to capture local objects.

Image Generation Superpixels
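
As a rough stand-in for the superpixel-based style encoder, one can average features within SLIC superpixels. In this Python sketch, mean RGB values substitute for learned encoder features, and all sizes are invented; it only conveys the pooling idea.

    import numpy as np
    from skimage.segmentation import slic

    img = np.random.rand(64, 64, 3)                   # stand-in for a real image
    seg = slic(img, n_segments=50, start_label=0)     # superpixel labels
    style_codes = np.stack([img[seg == k].mean(axis=0)
                            for k in np.unique(seg)])  # one code per superpixel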

Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning

no code implementations NeurIPS 2021 Gen Li, Laixi Shi, Yuxin Chen, Yuantao Gu, Yuejie Chi

Achieving sample efficiency in online episodic reinforcement learning (RL) requires optimally balancing exploration and exploitation.

Q-Learning reinforcement-learning

Provable Identifiability of ReLU Neural Networks via Lasso Regularization

no code implementations 29 Sep 2021 Gen Li, Ganghua Wang, Yuantao Gu, Jie Ding

In this paper, the territory of the Lasso is extended to neural networks, a fashionable and powerful class of nonlinear regression models.

Variable Selection

The Rate of Convergence of Variation-Constrained Deep Neural Networks

no code implementations 22 Jun 2021 Gen Li, Jie Ding

To the best of our knowledge, the rate of convergence of neural networks shown by existing works is bounded by at most the order of $n^{-1/4}$ for a sample size of $n$.

Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting

no code implementations NeurIPS 2021 Gen Li, Yuxin Chen, Yuejie Chi, Yuantao Gu, Yuting Wei

The current paper pertains to a scenario with value-based linear representation, which postulates the linear realizability of the optimal Q-function (also called the "linear $Q^{\star}$ problem").

reinforcement-learning

Adaptive Prototype Learning and Allocation for Few-Shot Segmentation

2 code implementations CVPR 2021 Gen Li, Varun Jampani, Laura Sevilla-Lara, Deqing Sun, Jonghyun Kim, Joongkyu Kim

By integrating the SGC and GPA together, we propose the Adaptive Superpixel-guided Network (ASGNet), which is a lightweight model and adapts to object scale and shape variation.

Softmax Policy Gradient Methods Can Take Exponential Time to Converge

no code implementations 22 Feb 2021 Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

The softmax policy gradient (PG) method, which performs gradient ascent under softmax policy parameterization, is arguably one of the de facto implementations of policy optimization in modern reinforcement learning.

Policy Gradient Methods
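
In the bandit special case, the update studied here reduces to exact gradient ascent on $V(\theta)=\sum_a \pi_\theta(a)\, r(a)$, using $\partial V/\partial\theta_a = \pi_a (r_a - V)$. A minimal Python sketch with invented rewards and step size:

    import numpy as np

    r = np.array([1.0, 0.9, 0.1])        # toy 3-armed bandit rewards
    theta = np.zeros(3)                  # softmax logits
    lr = 0.1
    for t in range(10_000):
        pi = np.exp(theta - theta.max())
        pi /= pi.sum()                   # softmax policy
        V = pi @ r
        theta += lr * pi * (r - V)       # exact policy gradient under softmax
    print(pi.round(3))                   # mass concentrates on the best arm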

Is Q-Learning Minimax Optimal? A Tight Sample Complexity Analysis

no code implementations 12 Feb 2021 Gen Li, Changxiao Cai, Yuxin Chen, Yuantao Gu, Yuting Wei, Yuejie Chi

This paper addresses these questions for the synchronous setting: (1) when $|\mathcal{A}|=1$ (so that Q-learning reduces to TD learning), we prove that the sample complexity of TD learning is minimax optimal and scales as $\frac{|\mathcal{S}|}{(1-\gamma)^3\varepsilon^2}$ (up to a log factor); (2) when $|\mathcal{A}|\geq 2$, we settle the sample complexity of Q-learning to be on the order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^2}$ (up to a log factor).

Natural Questions Q-Learning
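
The synchronous setting assumed here means every state-action pair receives one fresh sample per iteration. A minimal tabular Python sketch (toy MDP sizes and a simple step-size schedule, not the paper's exact choices):

    import numpy as np

    rng = np.random.default_rng(1)
    S, A, gamma = 5, 3, 0.9                        # toy MDP sizes (assumed)
    P = rng.dirichlet(np.ones(S), size=(S, A))     # P[s, a] is a distribution over next states
    R = rng.uniform(0, 1, (S, A))                  # deterministic rewards in [0, 1]

    Q = np.zeros((S, A))
    for t in range(1, 5001):
        lr = 1.0 / (1.0 + (1.0 - gamma) * t)       # one common rescaled-linear step size
        for s in range(S):                         # synchronous: every (s, a) updated each round
            for a in range(A):
                s2 = rng.choice(S, p=P[s, a])
                Q[s, a] += lr * (R[s, a] + gamma * Q[s2].max() - Q[s, a])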

The Efficacy of $L_1$ Regularization in Neural Networks

no code implementations 1 Jan 2021 Gen Li, Yuantao Gu, Jie Ding

A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds.

The Efficacy of $L_1$ Regularization in Two-Layer Neural Networks

no code implementations 2 Oct 2020 Gen Li, Yuantao Gu, Jie Ding

A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds.
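
A toy illustration of the mechanism studied in both versions of this paper: an over-parameterized two-layer ReLU network with an $L_1$ penalty on the output weights, which drives redundant hidden neurons toward zero. All hyperparameters below are invented (Python/PyTorch):

    import torch

    d_in, hidden = 10, 100
    W1 = torch.randn(hidden, d_in, requires_grad=True)
    b1 = torch.zeros(hidden, requires_grad=True)
    w2 = torch.randn(hidden, requires_grad=True)    # output-layer weights, L1-penalized

    X = torch.randn(512, d_in)
    y = torch.sin(3 * X[:, 0])                      # toy one-dimensional target
    opt = torch.optim.Adam([W1, b1, w2], lr=1e-2)
    lam = 1e-2                                      # assumed L1 strength
    for _ in range(2000):
        pred = torch.relu(X @ W1.T + b1) @ w2
        loss = ((pred - y) ** 2).mean() + lam * w2.abs().sum()
        opt.zero_grad(); loss.backward(); opt.step()
    # weights below the threshold are treated as pruned neurons
    print(int((w2.abs() > 1e-3).sum()), "hidden neurons remain active")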

Edge and Identity Preserving Network for Face Super-Resolution

1 code implementation 27 Aug 2020 Jonghyun Kim, Gen Li, Inyong Yun, Cheolkon Jung, Joongkyu Kim

In this paper, we propose a novel Edge and Identity Preserving Network for face super-resolution, named EIPNet, to minimize distortion by utilizing a lightweight edge block and identity information.

Super-Resolution

Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

no code implementations NeurIPS 2020 Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

Focusing on a $\gamma$-discounted MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$, we demonstrate that the $\ell_{\infty}$-based sample complexity of classical asynchronous Q-learning, namely, the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function, is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor, provided that a proper constant learning rate is adopted.

Q-Learning
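
In contrast to the synchronous sketch shown earlier, asynchronous Q-learning updates only the single state-action pair visited along one Markovian trajectory. A minimal Python sketch with a constant learning rate (toy sizes assumed):

    import numpy as np

    rng = np.random.default_rng(2)
    S, A, gamma = 5, 3, 0.9
    P = rng.dirichlet(np.ones(S), size=(S, A))
    R = rng.uniform(0, 1, (S, A))

    Q = np.zeros((S, A))
    s = 0
    lr = 0.1                                   # constant learning rate, as the analysis assumes
    for t in range(50_000):
        a = rng.integers(A)                    # uniform behavior policy
        s2 = rng.choice(S, p=P[s, a])
        Q[s, a] += lr * (R[s, a] + gamma * Q[s2].max() - Q[s, a])  # only the visited pair updates
        s = s2                                 # follow the Markovian trajectory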

Nonconvex Low-Rank Tensor Completion from Noisy Data

no code implementations NeurIPS 2019 Changxiao Cai, Gen Li, H. Vincent Poor, Yuxin Chen

We study a noisy tensor completion problem of broad practical interest, namely, the reconstruction of a low-rank tensor from highly incomplete and randomly corrupted observations of its entries.

Subspace Estimation from Unbalanced and Incomplete Data Matrices: $\ell_{2,\infty}$ Statistical Guarantees

no code implementations 9 Oct 2019 Changxiao Cai, Gen Li, Yuejie Chi, H. Vincent Poor, Yuxin Chen

This paper is concerned with estimating the column space of an unknown low-rank matrix $\boldsymbol{A}^{\star}\in\mathbb{R}^{d_{1}\times d_{2}}$, given noisy and partial observations of its entries.

DABNet: Depth-wise Asymmetric Bottleneck for Real-time Semantic Segmentation

2 code implementations 26 Jul 2019 Gen Li, Inyoung Yun, Jonghyun Kim, Joongkyu Kim

As a pixel-level prediction task, semantic segmentation incurs large computational cost and requires enormous numbers of parameters to obtain high performance.

Real-Time Semantic Segmentation
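
A simplified sketch of the asymmetric depth-wise idea in the module's name: factor a 3x3 depth-wise convolution into 3x1 and 1x3 depth-wise convolutions with a residual connection. The real DAB module is more elaborate (e.g., it also uses a dilated branch); this Python/PyTorch snippet only conveys the flavor.

    import torch
    import torch.nn as nn

    class AsymmetricDW(nn.Module):
        """3x3 depth-wise conv factored into 3x1 and 1x3 depth-wise convs (simplified)."""
        def __init__(self, c):
            super().__init__()
            self.conv3x1 = nn.Conv2d(c, c, (3, 1), padding=(1, 0), groups=c)
            self.conv1x3 = nn.Conv2d(c, c, (1, 3), padding=(0, 1), groups=c)
            self.bn = nn.BatchNorm2d(c)
            self.act = nn.PReLU(c)

        def forward(self, x):
            return self.act(self.bn(self.conv1x3(self.conv3x1(x))) + x)

    y = AsymmetricDW(32)(torch.randn(1, 32, 64, 64))   # spatial and channel shape preserved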

Theory of Spectral Method for Union of Subspaces-Based Random Geometry Graph

no code implementations 25 Jul 2019 Gen Li, Yuantao Gu

The spectral method is a commonly used scheme for subspace clustering: it groups data points lying close to a union of subspaces by first constructing a random geometry graph.

Compressed Subspace Learning Based on Canonical Angle Preserving Property

no code implementations 14 Jul 2019 Yuchen Jiao, Gen Li, Yuantao Gu

In this paper, we prove that random projection with the so-called Johnson-Lindenstrauss (JL) property approximately preserves canonical angles between subspaces with overwhelming probability.

Dimensionality Reduction
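
The claim is easy to probe numerically: compare principal (canonical) angles between two random subspaces before and after a Gaussian JL projection. A small Python sketch using SciPy, with invented dimensions:

    import numpy as np
    from scipy.linalg import subspace_angles

    rng = np.random.default_rng(0)
    N, d, m = 1000, 5, 80                       # ambient, subspace, and projected dimensions
    U = np.linalg.qr(rng.normal(size=(N, d)))[0]
    V = np.linalg.qr(rng.normal(size=(N, d)))[0]

    Phi = rng.normal(size=(m, N)) / np.sqrt(m)  # Gaussian JL map
    before = subspace_angles(U, V)
    after = subspace_angles(Phi @ U, Phi @ V)
    print(np.degrees(before).round(1))          # the two angle profiles should be close
    print(np.degrees(after).round(1))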

Deep Reason: A Strong Baseline for Real-World Visual Reasoning

no code implementations 24 May 2019 Chenfei Wu, Yanzhao Zhou, Gen Li, Nan Duan, Duyu Tang, Xiaojie Wang

This paper presents a strong baseline for real-world visual reasoning (GQA), which achieves 60.93% accuracy and won sixth place in the GQA 2019 challenge.

Visual Reasoning

Unraveling the Veil of Subspace RIP Through Near-Isometry on Subspaces

no code implementations 23 May 2019 Xingyu Xv, Gen Li, Yuantao Gu

Subspace Restricted Isometry Property, a newly-proposed concept, has proved to be a useful tool in analyzing the effect of dimensionality reduction algorithms on subspaces.

Dimensionality Reduction

Integrative Multi-View Reduced-Rank Regression: Bridging Group-Sparse and Low-Rank Models

1 code implementation 26 Jul 2018 Gen Li, Xiaokang Liu, Kun Chen

Multi-view data have been routinely collected in various fields of science and engineering.

Multi-View Learning
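
For context, the classical single-view reduced-rank regression estimator that group-sparse multi-view models build on can be computed by ordinary least squares followed by an SVD truncation of the fitted values. A minimal Python sketch (the paper's integrative, group-sparse estimator is more involved):

    import numpy as np

    def reduced_rank_fit(X, Y, rank):
        """Classical reduced-rank regression: OLS, then SVD truncation of fitted values."""
        B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
        _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
        V = Vt[:rank].T
        return B_ols @ V @ V.T                  # project coefficients onto top directions

    rng = np.random.default_rng(7)
    X = rng.normal(size=(200, 10))
    B_true = rng.normal(size=(10, 2)) @ rng.normal(size=(2, 6))   # rank-2 coefficients
    Y = X @ B_true + 0.1 * rng.normal(size=(200, 6))
    print(np.linalg.matrix_rank(reduced_rank_fit(X, Y, 2)))       # -> 2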

Rigorous Restricted Isometry Property of Low-Dimensional Subspaces

no code implementations 30 Jan 2018 Gen Li, Qinghua Liu, Yuantao Gu

As an analogy to the JL lemma and the RIP for sparse vectors, this work allows the use of random projections to reduce the ambient dimension with the theoretical guarantee that the distance between subspaces after compression is well preserved.

Dimensionality Reduction

Linear Convergence of An Iterative Phase Retrieval Algorithm with Data Reuse

no code implementations 5 Dec 2017 Gen Li, Yuchen Jiao, Yuantao Gu

In this work, we study for the first time, without the independence assumption, the convergence behavior of the randomized Kaczmarz method for phase retrieval.
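
A minimal Python sketch of the randomized Kaczmarz iteration for (real-valued) phase retrieval: each step picks a random row, keeps the measured magnitude, and borrows the sign of the current iterate. Sampling rows with replacement reuses data, which is the regime studied here; the initialization near the truth and all constants are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    n, m = 100, 800
    x = rng.normal(size=n)
    A = rng.normal(size=(m, n))
    y = np.abs(A @ x)                               # phaseless measurements

    z = x + 0.5 * rng.normal(size=n)                # start near the truth (local behavior)
    norms = (A ** 2).sum(axis=1)
    for t in range(20_000):
        i = rng.integers(m)                         # sampling with replacement reuses rows
        r = A[i] @ z
        z += (np.sign(r) * y[i] - r) * A[i] / norms[i]

    err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
    print(f"relative error up to global sign: {err:.1e}")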

Image Super-Resolution Using Dense Skip Connections

no code implementations ICCV 2017 Tong Tong, Gen Li, Xiejie Liu, Qinquan Gao

In this study, we present a novel single-image super-resolution method by introducing dense skip connections in a very deep network.

Image Super-Resolution Single Image Super Resolution

Active Orthogonal Matching Pursuit for Sparse Subspace Clustering

no code implementations 16 Aug 2017 Yanxi Chen, Gen Li, Yuantao Gu

In this letter, we propose a novel Active OMP-SSC, which improves clustering accuracy of OMP-SSC by adaptively updating data points and randomly dropping data points in the OMP process, while still enjoying the low computational complexity of greedy pursuit algorithms.
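
For background, plain OMP-based sparse subspace clustering (without the paper's "active" updating and dropping of data points) represents each point by OMP over the remaining points, then spectrally clusters the resulting affinity graph. A Python sketch on synthetic union-of-subspaces data (scikit-learn; sparsity level and cluster count invented):

    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(4)
    def subspace_points(N, d, n_pts):
        U = np.linalg.qr(rng.normal(size=(N, d)))[0]
        return U @ rng.normal(size=(d, n_pts))

    X = np.hstack([subspace_points(50, 3, 40) for _ in range(3)])
    X /= np.linalg.norm(X, axis=0)                     # normalize columns

    n = X.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        others = np.delete(np.arange(n), j)            # never select the point itself
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(X[:, others], X[:, j])
        C[others, j] = omp.coef_
    W = np.abs(C) + np.abs(C).T                        # symmetric affinity graph
    labels = SpectralClustering(n_clusters=3, affinity='precomputed').fit_predict(W)
    print(labels)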

Structural Learning and Integrative Decomposition of Multi-View Data

no code implementations 20 Jul 2017 Irina Gaynanova, Gen Li

We call this model SLIDE for Structural Learning and Integrative DEcomposition of multi-view data.

Dimensionality Reduction

Restricted Isometry Property of Gaussian Random Projection for Finite Set of Subspaces

no code implementations 7 Apr 2017 Gen Li, Yuantao Gu

Dimension reduction plays an essential role in decreasing the complexity of solving large-scale problems.

Dimensionality Reduction

Phase Transitions of Spectral Initialization for High-Dimensional Nonconvex Estimation

no code implementations 21 Feb 2017 Yue M. Lu, Gen Li

We study a spectral initialization method that plays a key role in recent work on estimating signals in nonconvex settings.
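
The spectral initialization under study can be sketched in a few lines of Python for phase retrieval: form the weighted data matrix $\frac{1}{m}\sum_i y_i a_i a_i^{\top}$ and take its leading eigenvector. Toy dimensions are assumed; the paper characterizes exactly how the resulting cosine similarity undergoes a phase transition as $m/n$ varies.

    import numpy as np

    rng = np.random.default_rng(5)
    n, m = 100, 600                              # signal dimension and sample size (toy)
    x = rng.normal(size=n); x /= np.linalg.norm(x)
    A = rng.normal(size=(m, n))
    y = (A @ x) ** 2                             # intensity measurements

    D = (A * y[:, None]).T @ A / m               # D = (1/m) sum_i y_i a_i a_i^T
    x0 = np.linalg.eigh(D)[1][:, -1]             # spectral estimate: leading eigenvector
    print(abs(x0 @ x))                           # cosine similarity with the truth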

Supervised multiway factorization

1 code implementation 11 Sep 2016 Eric F. Lock, Gen Li

We describe a likelihood-based latent variable representation of the CP factorization, in which the latent variables are informed by additional covariates.

Dimensionality Reduction

Direction-Projection-Permutation for High Dimensional Hypothesis Tests

1 code implementation 2 Apr 2013 Susan Wei, Chihoon Lee, Lindsay Wichers, Gen Li, J. S. Marron

Motivated by the prevalence of high dimensional low sample size datasets in modern statistical applications, we propose a general nonparametric framework, Direction-Projection-Permutation (DiProPerm), for testing high dimensional hypotheses.

Methodology
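
A minimal DiProPerm sketch in Python with the mean-difference direction: train a direction on labeled data, project onto it, compute a two-sample statistic, and compare against the statistic's permutation distribution. The data sizes and mean shift below are invented.

    import numpy as np

    rng = np.random.default_rng(6)

    def diproperm_stat(X, y):
        w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)   # Direction: mean difference
        w /= np.linalg.norm(w)
        s = X @ w                                             # Projection: 1-d scores
        return s[y == 1].mean() - s[y == 0].mean()            # two-sample statistic

    X = rng.normal(size=(40, 500))               # HDLSS: 40 samples, 500 features
    y = np.repeat([0, 1], 20)
    X[y == 1] += 0.3                             # small mean shift in one class

    t_obs = diproperm_stat(X, y)
    perm = [diproperm_stat(X, rng.permutation(y)) for _ in range(500)]  # Permutation
    pval = (1 + sum(p >= t_obs for p in perm)) / (1 + len(perm))
    print(f"p-value: {pval:.3f}")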
