Search Results for author: Mao Ye

Found 32 papers, 8 papers with code

Network Pruning by Greedy Subnetwork Selection

no code implementations ICML 2020 Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu

Theoretically, we show that the small networks pruned using our method achieve provably lower loss than small networks trained from scratch with the same size.

Network Pruning

MaxUp: Lightweight Adversarial Training With Data Augmentation Improves Neural Network Training

no code implementations CVPR 2021 Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu

The idea is to generate a set of augmented data with some random perturbations or transforms, and minimize the maximum, or worst-case, loss over the augmented data.

Data Augmentation Image Classification +1
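The worst-case objective described above lends itself to a short sketch. Below is a minimal PyTorch-style illustration, assuming a user-supplied `augment` function and a classifier `model` (both placeholders, not names from the paper): each example gets m random augmented copies, and only the largest per-example loss is minimized.

```python
import torch
import torch.nn.functional as F

def maxup_loss(model, x, y, augment, m=4):
    """MaxUp-style objective: minimize the worst-case loss over m augmentations.

    model   : any torch.nn.Module returning class logits
    x, y    : a batch of inputs and integer labels
    augment : user-supplied random augmentation, x -> x_aug (hypothetical)
    """
    per_copy_losses = []
    for _ in range(m):
        logits = model(augment(x))
        # keep per-example losses so the max is taken per example, not per batch
        per_copy_losses.append(F.cross_entropy(logits, y, reduction="none"))
    losses = torch.stack(per_copy_losses, dim=0)   # shape (m, batch)
    worst_case = losses.max(dim=0).values          # worst augmentation per example
    return worst_case.mean()
```

In training, this loss would simply replace the ordinary cross-entropy before calling `backward()`.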

VCNet and Functional Targeted Regularization For Learning Causal Effects of Continuous Treatments

1 code implementation14 Mar 2021 Lizhen Nie, Mao Ye, Qiang Liu, Dan Nicolae

Motivated by the rising abundance of observational data with continuous treatments, we investigate the problem of estimating the average dose-response curve (ADRF).

QoS-aware Link Scheduling Strategy for Data Transmission in SDVN

no code implementations1 Feb 2021 Yong Zhang, Mao Ye, Lin Guan

The original contributions of this paper are summarized as follows: (1) model the packet collision probability of broadcast or NACK transmission in VANET using combinatorial theory and investigate the potential influence of the miss my packets (MMP) problem.

Networking and Internet Architecture

Varying Coefficient Neural Network with Functional Targeted Regularization for Estimating Continuous Treatment Effects

no code implementations ICLR 2021 Lizhen Nie, Mao Ye, Qiang Liu, Dan Nicolae

With the rising abundance of observational data with continuous treatments, we investigate the problem of estimating the average dose-response curve (ADRF).

Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough

1 code implementation NeurIPS 2020 Mao Ye, Lemeng Wu, Qiang Liu

Despite the great success of deep learning, recent works show that large deep neural networks are often highly redundant and can be significantly reduced in size.

Adaptive Dense-to-Sparse Paradigm for Pruning Online Recommendation System with Non-Stationary Data

no code implementations16 Oct 2020 Mao Ye, Dhruv Choudhary, Jiecao Yu, Ellie Wen, Zeliang Chen, Jiyan Yang, Jongsoo Park, Qiang Liu, Arun Kejariwal

To the best of our knowledge, this is the first work to provide in-depth analysis and discussion of applying pruning to online recommendation systems with non-stationary data distribution.

Recommendation Systems

Go Wide, Then Narrow: Efficient Training of Deep Thin Networks

no code implementations ICML 2020 Denny Zhou, Mao Ye, Chen Chen, Tianjian Meng, Mingxing Tan, Xiaodan Song, Quoc Le, Qiang Liu, Dale Schuurmans

This is achieved by layerwise imitation, that is, forcing the thin network to mimic the intermediate outputs of the wide network from layer to layer.

Model Compression
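A rough sketch of the layerwise imitation step mentioned above, assuming both networks expose per-layer activations with the feature dimension last and that a per-layer linear adapter bridges the thin and wide widths (the adapter is an assumption of this sketch, not necessarily the paper's exact construction):

```python
import torch
import torch.nn as nn

def layerwise_imitation_loss(thin_feats, wide_feats, adapters):
    """Force the thin network to mimic the wide network layer by layer.

    thin_feats, wide_feats : lists of per-layer activations (same length)
    adapters               : per-layer nn.Linear mapping thin width -> wide width
                             (an assumption of this sketch)
    """
    loss = 0.0
    for f_thin, f_wide, adapt in zip(thin_feats, wide_feats, adapters):
        # match each intermediate output of the thin net to the (frozen) wide net
        loss = loss + nn.functional.mse_loss(adapt(f_thin), f_wide.detach())
    return loss
```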

SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions

1 code implementation ACL 2020 Mao Ye, Chengyue Gong, Qiang Liu

For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction cannot be altered by any possible synonymous word substitution.

Text Classification
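The certificate in SAFER is built on randomizing over synonymous word substitutions. A minimal sketch of the underlying smoothed prediction is shown below, with a hypothetical `synonyms` table and `classify` function; the actual certified guarantee additionally requires the statistical bound derived in the paper.

```python
import random
from collections import Counter

def smoothed_predict(classify, sentence, synonyms, n_samples=100):
    """Majority vote of a base classifier over random synonym substitutions.

    classify : callable(list[str]) -> label (the base text classifier)
    sentence : list of tokens
    synonyms : dict mapping a word to its synonym set (including itself)
    """
    votes = Counter()
    for _ in range(n_samples):
        perturbed = [random.choice(synonyms.get(w, [w])) for w in sentence]
        votes[classify(perturbed)] += 1
    label, _ = votes.most_common(1)[0]
    return label, votes
```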

Unsupervised Feature Selection via Multi-step Markov Transition Probability

no code implementations29 May 2020 Yan Min, Mao Ye, Liang Tian, Yulin Jian, Ce Zhu, Shangming Yang

Our main contributions are a novel feature selection approach which uses multi-step transition probability to characterize the data structure, and three algorithms, developed from positive and negative perspectives, for preserving the data structure.

Dimensionality Reduction Feature Selection
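A small NumPy sketch of the multi-step transition probability used to characterize data structure: build a Gaussian similarity graph over the samples, row-normalize it into a one-step transition matrix, and raise it to the k-th power. The kernel bandwidth is an assumed choice, and the feature-scoring step built on top of this matrix is omitted.

```python
import numpy as np

def multistep_transition(X, k=3, sigma=1.0):
    """Return the k-step Markov transition matrix over the samples in X.

    X     : (n_samples, n_features) data matrix
    k     : number of Markov steps
    sigma : bandwidth of the Gaussian similarity kernel (assumed choice)
    """
    # pairwise squared distances between samples
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))          # similarity graph
    P = W / W.sum(axis=1, keepdims=True)        # one-step transition matrix
    return np.linalg.matrix_power(P, k)         # k-step transition probabilities
```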

Learning Various Length Dependence by Dual Recurrent Neural Networks

no code implementations28 May 2020 Chenpeng Zhang, Shuai Li, Mao Ye, Ce Zhu, Xue Li

Many RNN variants have been proposed to address the gradient problems of training RNNs and to process long sequences.

Disentanglement Then Reconstruction: Learning Compact Features for Unsupervised Domain Adaptation

no code implementations28 May 2020 Lihua Zhou, Mao Ye, Xinpeng Li, Ce Zhu, Yiguang Liu, Xue Li

With this reconstructor, we can construct prototypes for the original features from the class prototypes and domain prototypes, respectively.

Unsupervised Domain Adaptation

Steepest Descent Neural Architecture Optimization: Escaping Local Optimum with Signed Neural Splitting

no code implementations23 Mar 2020 Lemeng Wu, Mao Ye, Qi Lei, Jason D. Lee, Qiang Liu

Recently, Liu et al. [19] proposed a splitting steepest descent (S2D) method that jointly optimizes the neural parameters and architectures based on progressively growing network structures by splitting neurons into multiple copies in a steepest descent fashion.
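As a toy illustration of the splitting operation this line of work builds on, the sketch below replaces one hidden neuron of a two-layer network with two copies whose input weights are perturbed in opposite directions while the output weight is shared equally, so the function is approximately preserved at the moment of splitting. The perturbation direction is random here for simplicity; the splitting steepest descent papers choose it from an eigen-analysis of the splitting matrix.

```python
import numpy as np

def split_neuron(W_in, w_out, idx, eps=1e-2, rng=np.random.default_rng(0)):
    """Split hidden neuron `idx` of a 2-layer net y = w_out @ act(W_in @ x).

    W_in  : (hidden, in_dim) input weights
    w_out : (hidden,) output weights
    Returns enlarged (W_in, w_out) with one extra hidden unit.
    """
    direction = rng.standard_normal(W_in.shape[1])
    direction /= np.linalg.norm(direction)
    w_plus = W_in[idx] + eps * direction
    w_minus = W_in[idx] - eps * direction
    W_new = np.vstack([W_in, w_minus[None, :]])   # append the second copy
    W_new[idx] = w_plus                           # overwrite the first copy
    w_out_new = np.append(w_out, w_out[idx] / 2.0)
    w_out_new[idx] = w_out[idx] / 2.0             # share the output weight equally
    return W_new, w_out_new
```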

Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection

1 code implementation3 Mar 2020 Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu

This differs from the existing methods based on backward elimination, which remove redundant neurons from the large network.

Network Pruning
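A simplified sketch of the greedy forward selection idea: start from an empty subnetwork and repeatedly add whichever neuron of the trained large network most reduces the loss, until the budget is reached. `loss_of_subset` is a hypothetical callback that evaluates the network restricted to a set of selected neurons.

```python
def greedy_forward_selection(num_neurons, budget, loss_of_subset):
    """Select `budget` neurons of a large trained network by forward selection.

    num_neurons    : number of candidate neurons in the large network
    budget         : size of the subnetwork to keep
    loss_of_subset : callable(set[int]) -> float, loss of the subnetwork
                     built from the selected neurons (hypothetical helper)
    """
    selected = set()
    while len(selected) < budget:
        candidates = (i for i in range(num_neurons) if i not in selected)
        # add the single neuron that most reduces the loss
        best = min(candidates, key=lambda i: loss_of_subset(selected | {i}))
        selected.add(best)
    return selected
```

This is the opposite direction to backward elimination, which starts from the full network and removes neurons.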

Stein Self-Repulsive Dynamics: Benefits From Past Samples

no code implementations NeurIPS 2020 Mao Ye, Tongzheng Ren, Qiang Liu

Our idea is to introduce Stein variational gradient as a repulsive force to push the samples of Langevin dynamics away from the past trajectories.
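A rough NumPy sketch of the self-repulsive idea: an ordinary Langevin step plus a kernel term that pushes the current sample away from a stored, thinned past trajectory. Only the repulsive part of the Stein variational direction is shown, and the step size, repulsion strength, and RBF bandwidth are assumed hyperparameters rather than the paper's schedule.

```python
import numpy as np

def self_repulsive_langevin(grad_log_p, x0, n_steps=1000, step=1e-2,
                            alpha=1.0, bandwidth=1.0, thin=10,
                            rng=np.random.default_rng(0)):
    """Langevin dynamics with a kernel repulsion away from past samples.

    grad_log_p : callable(x) -> gradient of the log target density at x
    """
    x = np.asarray(x0, dtype=float)
    past, samples = [], []
    for t in range(n_steps):
        repulse = np.zeros_like(x)
        if past:
            for z in past:
                diff = x - z
                k = np.exp(-np.dot(diff, diff) / (2 * bandwidth))
                repulse += k * diff / bandwidth      # pushes x away from z
            repulse /= len(past)
        noise = rng.standard_normal(x.shape)
        x = x + step * (grad_log_p(x) + alpha * repulse) + np.sqrt(2 * step) * noise
        samples.append(x.copy())
        if t % thin == 0:
            past.append(x.copy())                    # store a thinned past trajectory
    return np.array(samples)
```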

Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework

no code implementations NeurIPS 2020 Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu

Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning.
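For context, here is a minimal Monte Carlo sketch of the Gaussian randomized smoothing prediction that such certification frameworks build on; it follows the generic recipe rather than the specific functional-optimization bound of this paper, and `model`, `sigma`, and `n` are illustrative choices.

```python
import torch

def smoothed_prediction(model, x, sigma=0.25, n=1000, num_classes=10):
    """Estimate the smoothed classifier g(x) = argmax_c P(f(x + noise) = c).

    model : base classifier returning logits for a batch of inputs
    x     : a single input tensor (e.g. an image), shape (C, H, W)
    """
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n):
            noisy = x + sigma * torch.randn_like(x)
            pred = model(noisy.unsqueeze(0)).argmax(dim=1)
            counts[int(pred)] += 1
    top_class = int(counts.argmax())
    p_hat = counts[top_class] / n          # empirical top-class probability
    return top_class, p_hat                # a certificate lower-bounds p_hat
```

Given a lower confidence bound p on the top-class probability (with p > 1/2), the standard L2 certificate for Gaussian smoothing is a radius of sigma * Phi^{-1}(p).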

MaxUp: A Simple Way to Improve Generalization of Neural Network Training

1 code implementation20 Feb 2020 Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu

The idea is to generate a set of augmented data with some random perturbations or transforms and minimize the maximum, or worst-case, loss over the augmented data.

Few-Shot Image Classification General Classification +1

Post-training Quantization with Multiple Points: Mixed Precision without Mixed Precision

no code implementations20 Feb 2020 Xingchao Liu, Mao Ye, Dengyong Zhou, Qiang Liu

We propose multipoint quantization, a quantization method that approximates a full-precision weight vector using a linear combination of multiple vectors of low-bit numbers; this is in contrast to typical quantization methods that approximate each weight using a single low precision number.

Object Detection Quantization
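A greedy sketch of the multipoint idea: repeatedly quantize the residual of a weight vector to a low-bit grid and fit the best scalar coefficient for each term, so the weight is approximated by a linear combination of a few low-bit vectors. The greedy residual fitting below is an illustrative choice and may differ from the paper's exact construction.

```python
import numpy as np

def uniform_quantize(v, bits=4):
    """Quantize v elementwise to a symmetric uniform low-bit grid in [-1, 1]."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(v)) + 1e-12
    return np.round(v / scale * levels) / levels

def multipoint_quantize(w, n_points=3, bits=4):
    """Approximate a 1D weight vector w by sum_i a_i * q_i with low-bit q_i."""
    residual = w.astype(float).copy()
    terms = []
    for _ in range(n_points):
        q = uniform_quantize(residual, bits)
        a = float(q @ residual) / (float(q @ q) + 1e-12)  # least-squares coefficient
        terms.append((a, q))
        residual = residual - a * q
    approx = sum(a * q for a, q in terms)
    return terms, approx
```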

Extended Stochastic Gradient MCMC for Large-Scale Bayesian Variable Selection

no code implementations7 Feb 2020 Qifan Song, Yan Sun, Mao Ye, Faming Liang

Stochastic gradient Markov chain Monte Carlo (MCMC) algorithms have received much attention in Bayesian computing for big data problems, but they are only applicable to a small class of problems for which the parameter space has a fixed dimension and the log-posterior density is differentiable with respect to the parameters.

Variable Selection

Distribution-Aware Coordinate Representation for Human Pose Estimation

4 code implementations CVPR 2020 Feng Zhang, Xiatian Zhu, Hanbin Dai, Mao Ye, Ce Zhu

Interestingly, we found that the process of decoding the predicted heatmaps into the final joint coordinates in the original image space is surprisingly significant for human pose estimation performance, which nevertheless was not recognised before.

 Ranked #1 on Multi-Person Pose Estimation on COCO (using extra training data)

Keypoint Detection Multi-Person Pose Estimation
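A small NumPy sketch in the spirit of the distribution-aware decoding described above: rather than returning the integer argmax of a (typically low-resolution) heatmap, estimate a sub-pixel offset from the local gradient and Hessian of the log-heatmap around the peak. Heatmap smoothing and the mapping back to the original image resolution are omitted.

```python
import numpy as np

def decode_heatmap(heatmap, eps=1e-10):
    """Return sub-pixel (x, y) coordinates of the peak of one joint heatmap."""
    h = np.log(np.maximum(heatmap, eps))
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    if 1 <= x < h.shape[1] - 1 and 1 <= y < h.shape[0] - 1:
        # first derivatives (central differences) at the peak
        dx = 0.5 * (h[y, x + 1] - h[y, x - 1])
        dy = 0.5 * (h[y + 1, x] - h[y - 1, x])
        # second derivatives / Hessian at the peak
        dxx = h[y, x + 1] - 2 * h[y, x] + h[y, x - 1]
        dyy = h[y + 1, x] - 2 * h[y, x] + h[y - 1, x]
        dxy = 0.25 * (h[y + 1, x + 1] - h[y + 1, x - 1]
                      - h[y - 1, x + 1] + h[y - 1, x - 1])
        hess = np.array([[dxx, dxy], [dxy, dyy]])
        if abs(np.linalg.det(hess)) > eps:
            offset = -np.linalg.solve(hess, np.array([dx, dy]))
            offset = np.clip(offset, -1.0, 1.0)    # keep the correction local
            return float(x + offset[0]), float(y + offset[1])
    return float(x), float(y)
```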

Strain engineering of epitaxial oxide heterostructures beyond substrate limitations

no code implementations3 May 2019 Xiong Deng, Chao Chen, Deyang Chen, Xiangbin Cai, Xiaozhe Yin, Chao Xu, Fei Sun, Caiwen Li, Yan Li, Han Xu, Mao Ye, Guo Tian, Zhen Fan, Zhipeng Hou, Minghui Qin, Yu Chen, Zhenlin Luo, Xubing Lu, Guofu Zhou, Lang Chen, Ning Wang, Ye Zhu, Xingsen Gao, Jun-Ming Liu

The limitation of commercially available single-crystal substrates and the lack of continuous strain tunability preclude the ability to take full advantage of strain engineering for further exploring novel properties and exhaustively studying fundamental physics in complex oxides.

Materials Science

Fast Human Pose Estimation

1 code implementation CVPR 2019 Feng Zhang, Xiatian Zhu, Mao Ye

In this work, we investigate the under-studied but practically critical pose model efficiency problem.

Pose Estimation

Stein Neural Sampler

1 code implementation8 Oct 2018 Tianyang Hu, Zixiang Chen, Hanxi Sun, Jincheng Bai, Mao Ye, Guang Cheng

We propose two novel samplers to generate high-quality samples from a given (un-normalized) probability density.

Variable Selection via Penalized Neural Network: a Drop-Out-One Loss Approach

no code implementations ICML 2018 Mao Ye, Yan Sun

We propose a variable selection method for high dimensional regression models, which allows for complex, nonlinear, and high-order interactions among variables.

Variable Selection

Do Convolutional Neural Networks Learn Class Hierarchy?

no code implementations17 Oct 2017 Bilal Alsallakh, Amin Jourabloo, Mao Ye, Xiaoming Liu, Liu Ren

We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data.

Hierarchical structure Image Classification

3D Reconstruction in the Presence of Glasses by Acoustic and Stereo Fusion

no code implementations CVPR 2015 Mao Ye, Yu Zhang, Ruigang Yang, Dinesh Manocha

We present a novel sensor fusion algorithm that first segments the depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome.

3D Reconstruction Sensor Fusion

Data-driven Flower Petal Modeling with Botany Priors

no code implementations CVPR 2014 Chenxi Zhang, Mao Ye, Bo Fu, Ruigang Yang

Each segmented petal is then fitted with a scale-invariant morphable petal shape model, which is constructed from individually scanned exemplar petals.

Real-time Simultaneous Pose and Shape Estimation for Articulated Objects Using a Single Depth Camera

no code implementations CVPR 2014 Mao Ye, Ruigang Yang

In this paper we present a novel real-time algorithm for simultaneous pose and shape estimation for articulated objects, such as human beings and animals.

Pose Estimation

Quality Dynamic Human Body Modeling Using a Single Low-cost Depth Camera

no code implementations CVPR 2014 Qing Zhang, Bo Fu, Mao Ye, Ruigang Yang

In this paper we present a novel autonomous pipeline to build a personalized parametric model (pose-driven avatar) using a single depth sensor.
