Search Results for author: Gen Li

Found 31 papers, 6 papers with code

Analysing and Modelling of Discretionary Lane Change Duration Considering Driver Heterogeneity

no code implementations15 Aug 2021 Gen Li, Zhen Yang, Yiyong Pan, Jianxiao Ma

It is found that the LC duration varies across different vehicle types and LC directions.

The Rate of Convergence of Variation-Constrained Deep Neural Networks

no code implementations22 Jun 2021 Gen Li, Yuantao Gu, Jie Ding

To the best of our knowledge, the rate of convergence of neural networks shown by existing works is bounded by at most the order of $n^{-1/4}$ for a sample size of $n$.

Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting

no code implementations17 May 2021 Gen Li, Yuxin Chen, Yuejie Chi, Yuantao Gu, Yuting Wei

The current paper pertains to a scenario with value-based linear representation, which postulates the linear realizability of the optimal Q-function (also called the "linear $Q^{\star}$ problem").

Minimax Estimation of Linear Functions of Eigenvectors in the Face of Small Eigen-Gaps

no code implementations7 Apr 2021 Gen Li, Changxiao Cai, Yuantao Gu, H. Vincent Poor, Yuxin Chen

Eigenvector perturbation analysis plays a vital role in various statistical data science applications.

Denoising

Adaptive Prototype Learning and Allocation for Few-Shot Segmentation

2 code implementations CVPR 2021 Gen Li, Varun Jampani, Laura Sevilla-Lara, Deqing Sun, Jonghyun Kim, Joongkyu Kim

By integrating the SGC and GPA together, we propose the Adaptive Superpixel-guided Network (ASGNet), which is a lightweight model and adapts to object scale and shape variation.

Softmax Policy Gradient Methods Can Take Exponential Time to Converge

no code implementations22 Feb 2021 Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

The softmax policy gradient (PG) method, which performs gradient ascent under softmax policy parameterization, is arguably one of the de facto implementations of policy optimization in modern reinforcement learning.

Policy Gradient Methods

Is Q-Learning Minimax Optimal? A Tight Sample Complexity Analysis

no code implementations12 Feb 2021 Gen Li, Changxiao Cai, Yuxin Chen, Yuantao Gu, Yuting Wei, Yuejie Chi

Take a $\gamma$-discounted infinite-horizon MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$: to yield an entrywise $\varepsilon$-accurate estimate of the optimal Q-function, state-of-the-art theory for Q-learning proves that a sample size on the order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^5\varepsilon^{2}}$ is sufficient, which, however, fails to match the existing minimax lower bound.

Q-Learning
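For context on the algorithm this entry analyzes, here is a minimal synchronous Q-learning sketch on a toy deterministic two-state MDP. The MDP, learning rate, and iteration count are illustrative choices, not taken from the paper:

```python
# Toy deterministic MDP: 2 states, 2 actions (illustrative, not from the paper).
# Action 1 in state 0 moves to state 1 with reward 1; everything else
# stays in place with reward 0.
GAMMA = 0.9

def step(s, a):
    if s == 0 and a == 1:
        return 1, 1.0   # (next state, reward)
    return s, 0.0

def q_learning(iters=2000, lr=0.1):
    Q = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(iters):
        # Synchronous updates: every (state, action) pair gets one sample.
        for s in (0, 1):
            for a in (0, 1):
                s2, r = step(s, a)
                target = r + GAMMA * max(Q[s2])
                Q[s][a] += lr * (target - Q[s][a])
    return Q

Q = q_learning()
```

On this toy chain the iterates approach the optimal Q-function, e.g. Q(0, 1) tends to 1 and Q(0, 0) to 0.9; the paper's question is how many samples such updates need in general.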

The Efficacy of L1 Regularization in Neural Networks

no code implementations1 Jan 2021 Gen Li, Yuantao Gu, Jie Ding

A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds.

The Efficacy of $L_1$ Regularization in Two-Layer Neural Networks

no code implementations2 Oct 2020 Gen Li, Yuantao Gu, Jie Ding

A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds.

Edge and Identity Preserving Network for Face Super-Resolution

1 code implementation27 Aug 2020 Jonghyun Kim, Gen Li, Inyong Yun, Cheolkon Jung, Joongkyu Kim

In this paper, we propose a novel Edge and Identity Preserving Network for face super-resolution, named EIPNet, to minimize the distortion by utilizing a lightweight edge block and identity information.

Super-Resolution

Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

no code implementations NeurIPS 2020 Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

Focusing on a $\gamma$-discounted MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$, we demonstrate that the $\ell_{\infty}$-based sample complexity of classical asynchronous Q-learning --- namely, the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function --- is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2}+ \frac{t_{mix}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor, provided that a proper constant learning rate is adopted.

Q-Learning

Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model

no code implementations NeurIPS 2020 Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

We investigate the sample efficiency of reinforcement learning in a $\gamma$-discounted infinite-horizon Markov decision process (MDP) with state space $\mathcal{S}$ and action space $\mathcal{A}$, assuming access to a generative model.

Model-based Reinforcement Learning

Nonconvex Low-Rank Tensor Completion from Noisy Data

no code implementations NeurIPS 2019 Changxiao Cai, Gen Li, H. Vincent Poor, Yuxin Chen

We study a noisy tensor completion problem of broad practical interest, namely, the reconstruction of a low-rank tensor from highly incomplete and randomly corrupted observations of its entries.

Subspace Estimation from Unbalanced and Incomplete Data Matrices: $\ell_{2,\infty}$ Statistical Guarantees

no code implementations9 Oct 2019 Changxiao Cai, Gen Li, Yuejie Chi, H. Vincent Poor, Yuxin Chen

This paper is concerned with estimating the column space of an unknown low-rank matrix $\boldsymbol{A}^{\star}\in\mathbb{R}^{d_{1}\times d_{2}}$, given noisy and partial observations of its entries.

DABNet: Depth-wise Asymmetric Bottleneck for Real-time Semantic Segmentation

2 code implementations26 Jul 2019 Gen Li, Inyoung Yun, Jonghyun Kim, Joongkyu Kim

As a pixel-level prediction task, semantic segmentation requires a large computational cost and enormous numbers of parameters to obtain high performance.

Real-Time Semantic Segmentation

Theory of Spectral Method for Union of Subspaces-Based Random Geometry Graph

no code implementations25 Jul 2019 Gen Li, Yuantao Gu

The spectral method is a commonly used scheme for clustering data points lying close to a union of subspaces, a task known as subspace clustering, by first constructing a random geometry graph.

Compressed Subspace Learning Based on Canonical Angle Preserving Property

no code implementations14 Jul 2019 Yuchen Jiao, Gen Li, Yuantao Gu

In this paper, we prove that random projection with the so-called Johnson-Lindenstrauss (JL) property approximately preserves canonical angles between subspaces with overwhelming probability.

Dimensionality Reduction
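As a toy illustration of the claim above, the pure-Python sketch below checks that the canonical angle between two one-dimensional subspaces is roughly preserved under a Gaussian random projection. The dimensions (d = 200, k = 100) and the test vectors are illustrative choices, not from the paper:

```python
import math
import random

random.seed(0)

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def angle(u, v):
    # Canonical (principal) angle between the 1-D subspaces spanned by u and v.
    c = abs(sum(a * b for a, b in zip(unit(u), unit(v))))
    return math.acos(min(1.0, c))

d, k = 200, 100  # ambient and projected dimensions (illustrative choices)

# Two fixed directions in R^d spanning orthogonal 1-D subspaces.
u = [1.0] * d
v = [1.0] * (d // 2) + [-1.0] * (d // 2)

# Gaussian random projection R: k x d with N(0, 1/k) entries (has the JL property).
R = [[random.gauss(0.0, 1.0 / math.sqrt(k)) for _ in range(d)] for _ in range(k)]
proj = lambda x: [sum(r_i * x_i for r_i, x_i in zip(row, x)) for row in R]

before = angle(u, v)               # pi/2: the subspaces are orthogonal
after = angle(proj(u), proj(v))    # approximately preserved after projection
```

With k this large the angle after projection concentrates near the original value; the paper quantifies the "overwhelming probability" with which this holds for general subspaces.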

Deep Reason: A Strong Baseline for Real-World Visual Reasoning

no code implementations24 May 2019 Chenfei Wu, Yanzhao Zhou, Gen Li, Nan Duan, Duyu Tang, Xiaojie Wang

This paper presents a strong baseline for real-world visual reasoning (GQA), which achieves 60.93% in the GQA 2019 challenge and won sixth place.

Visual Reasoning

Unraveling the Veil of Subspace RIP Through Near-Isometry on Subspaces

no code implementations23 May 2019 Xingyu Xv, Gen Li, Yuantao Gu

Subspace Restricted Isometry Property, a newly-proposed concept, has proved to be a useful tool in analyzing the effect of dimensionality reduction algorithms on subspaces.

Dimensionality Reduction

Integrative Multi-View Reduced-Rank Regression: Bridging Group-Sparse and Low-Rank Models

1 code implementation26 Jul 2018 Gen Li, Xiaokang Liu, Kun Chen

Multi-view data have been routinely collected in various fields of science and engineering.

Multi-View Learning

Rigorous Restricted Isometry Property of Low-Dimensional Subspaces

no code implementations30 Jan 2018 Gen Li, Qinghua Liu, Yuantao Gu

As an analogy to JL Lemma and RIP for sparse vectors, this work allows the use of random projections to reduce the ambient dimension with the theoretical guarantee that the distance between subspaces after compression is well preserved.

Dimensionality Reduction

Linear Convergence of An Iterative Phase Retrieval Algorithm with Data Reuse

no code implementations5 Dec 2017 Gen Li, Yuchen Jiao, Yuantao Gu

In this work, we study for the first time, without the independence assumption, the convergence behavior of the randomized Kaczmarz method for phase retrieval.

Image Super-Resolution Using Dense Skip Connections

no code implementations ICCV 2017 Tong Tong, Gen Li, Xiejie Liu, Qinquan Gao

In this study, we present a novel single-image super-resolution method by introducing dense skip connections in a very deep network.

Image Super-Resolution

Active Orthogonal Matching Pursuit for Sparse Subspace Clustering

no code implementations16 Aug 2017 Yanxi Chen, Gen Li, Yuantao Gu

In this letter, we propose a novel Active OMP-SSC, which improves clustering accuracy of OMP-SSC by adaptively updating data points and randomly dropping data points in the OMP process, while still enjoying the low computational complexity of greedy pursuit algorithms.
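For context, here is a minimal sketch of plain OMP, the greedy pursuit underlying OMP-SSC; the Active variant's adaptive updating and random dropping of data points is not shown, and the dictionary below is illustrative:

```python
import math

def omp(atoms, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms to represent y.

    atoms are unit-norm dictionary columns given as lists.
    """
    def dot(a, b):
        return sum(x * z for x, z in zip(a, b))

    selected, basis = [], []   # chosen atom indices and an orthonormal basis
    residual = list(y)
    for _ in range(k):
        # Greedy step: pick the atom most correlated with the current residual.
        i = max(range(len(atoms)), key=lambda j: abs(dot(residual, atoms[j])))
        selected.append(i)
        # Gram-Schmidt: orthonormalize the new atom against the chosen span.
        v = list(atoms[i])
        for b in basis:
            c = dot(v, b)
            v = [x - c * z for x, z in zip(v, b)]
        n = math.sqrt(dot(v, v))
        if n > 1e-12:
            basis.append([x / n for x in v])
        # Orthogonal projection: residual = y minus its projection on the span.
        residual = list(y)
        for b in basis:
            c = dot(y, b)
            residual = [x - c * z for x, z in zip(residual, b)]
    return selected, residual

atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y = [2.0, 3.0, 0.0]
selected, residual = omp(atoms, y, 2)
```

In OMP-SSC each data point plays the role of y and the remaining points form the dictionary, so the selected indices define the affinity graph used for clustering.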

Structural Learning and Integrative Decomposition of Multi-View Data

no code implementations20 Jul 2017 Irina Gaynanova, Gen Li

We call this model SLIDE for Structural Learning and Integrative DEcomposition of multi-view data.

Dimensionality Reduction

Restricted Isometry Property of Gaussian Random Projection for Finite Set of Subspaces

no code implementations7 Apr 2017 Gen Li, Yuantao Gu

Dimension reduction plays an essential role in decreasing the complexity of solving large-scale problems.

Dimensionality Reduction

Phase Transitions of Spectral Initialization for High-Dimensional Nonconvex Estimation

no code implementations21 Feb 2017 Yue M. Lu, Gen Li

We study a spectral initialization method that serves a key role in recent work on estimating signals in nonconvex settings.

Supervised multiway factorization

1 code implementation11 Sep 2016 Eric F. Lock, Gen Li

We describe a likelihood-based latent variable representation of the CP factorization, in which the latent variables are informed by additional covariates.

Dimensionality Reduction

Direction-Projection-Permutation for High Dimensional Hypothesis Tests

1 code implementation2 Apr 2013 Susan Wei, Chihoon Lee, Lindsay Wichers, Gen Li, J. S. Marron

Motivated by the prevalence of high dimensional low sample size datasets in modern statistical applications, we propose a general nonparametric framework, Direction-Projection-Permutation (DiProPerm), for testing high dimensional hypotheses.

Methodology
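A minimal sketch of the DiProPerm recipe on synthetic data: compute a direction, project both samples onto it, and compare the observed two-sample statistic against label permutations. The mean-difference direction, the statistic, and the data below are illustrative choices among those the framework allows:

```python
import random

random.seed(1)

def mean_vec(rows):
    d = len(rows[0])
    return [sum(r[i] for r in rows) / len(rows) for i in range(d)]

def stat(x, y):
    # Direction step: mean-difference direction (one simple choice).
    direction = [a - b for a, b in zip(mean_vec(x), mean_vec(y))]
    # Projection step: project every sample onto the direction.
    px = [sum(a * b for a, b in zip(r, direction)) for r in x]
    py = [sum(a * b for a, b in zip(r, direction)) for r in y]
    # Two-sample statistic on the projected (univariate) scores.
    return abs(sum(px) / len(px) - sum(py) / len(py))

def diproperm(x, y, n_perm=200):
    # Permutation step: re-run direction + projection under shuffled labels.
    observed = stat(x, y)
    pooled = x + y
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if stat(pooled[:len(x)], pooled[len(x):]) >= observed:
            hits += 1
    return (1 + hits) / (1 + n_perm)

# Synthetic HDLSS-style data: two well-separated groups (illustrative only).
d = 5
x = [[1.0 + random.gauss(0, 0.3) for _ in range(d)] for _ in range(6)]
y = [[-1.0 + random.gauss(0, 0.3) for _ in range(d)] for _ in range(6)]
p = diproperm(x, y)
```

Because the direction is recomputed for every permuted labeling, the resulting p-value accounts for the direction having been chosen from the data.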
