1 code implementation • 29 Jan 2023 • Yangguang Li, Bin Huang, Zeren Chen, Yufeng Cui, Feng Liang, Mingzhu Shen, Fenggang Liu, Enze Xie, Lu Sheng, Wanli Ouyang, Jing Shao
Our Fast-BEV consists of five parts. We propose (1) a lightweight, deployment-friendly view transformation that quickly transfers 2D image features to 3D voxel space; (2) a multi-scale image encoder that leverages multi-scale information for better performance; and (3) an efficient BEV encoder designed specifically to speed up on-vehicle inference.
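As a hedged sketch of the view-transformation idea (our own toy code with a hypothetical `project` callable, not the released Fast-BEV implementation): because the camera geometry is fixed, each voxel's image-plane projection can be precomputed once into a lookup table, after which lifting 2D features to 3D reduces to a cheap gather at inference time.

```python
# A toy sketch (ours, not the Fast-BEV release) of a lookup-table view
# transformation: the camera geometry is fixed, so each voxel's pixel
# projection is computed once offline; on-vehicle lifting is then a gather.

def build_lut(voxels, project, height, width):
    """For every voxel, cache the (row, col) pixel it projects to,
    or None if it falls outside the image."""
    lut = []
    for voxel in voxels:
        rc = project(voxel)
        if rc is not None and 0 <= rc[0] < height and 0 <= rc[1] < width:
            lut.append(rc)
        else:
            lut.append(None)
    return lut

def lift_features(feature_map, lut):
    """Fill the 3D voxel grid by indexing the 2D feature map via the LUT."""
    voxel_feats = []
    for rc in lut:
        if rc is None:
            voxel_feats.append(0.0)   # voxel not visible in this camera
        else:
            r, c = rc
            voxel_feats.append(feature_map[r][c])
    return voxel_feats
```

In a real system, `feature_map` would be a per-camera CNN feature tensor and `project` the camera intrinsic/extrinsic projection; the point is that neither is recomputed per frame.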
1 code implementation • 19 Jan 2023 • Bin Huang, Yangguang Li, Enze Xie, Feng Liang, Luya Wang, Mingzhu Shen, Fenggang Liu, Tianqi Wang, Ping Luo, Jing Shao
Recently, pure camera-based Bird's-Eye-View (BEV) perception has removed the need for expensive LiDAR sensors, making it a feasible solution for economical autonomous driving.
no code implementations • 5 Dec 2022 • Hung-Yueh Chiang, Natalia Frumkin, Feng Liang, Diana Marculescu
MobileTL trains the shifts for internal normalization layers to avoid storing activation maps for the backward pass.
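A minimal illustration of why shift-only training saves memory (toy code, not the MobileTL implementation): the gradient of the loss with respect to the shift `beta` is just the sum of the upstream gradient, so the layer's input activations never need to be stored for the backward pass; updating the scale `gamma` would, by contrast, require keeping the normalized activations.

```python
# Toy normalization layer illustrating shift-only transfer learning: the
# shift gradient depends only on the upstream gradient, so no activation
# maps need to be saved for the backward pass. (Illustration, not MobileTL.)

def norm_forward(x, gamma, beta):
    """Normalize a 1-D activation vector, then apply frozen scale + shift."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    xhat = [(v - mean) / (var + 1e-5) ** 0.5 for v in x]
    return [gamma * v + beta for v in xhat]

def shift_grad(upstream_grad):
    """d(loss)/d(beta): just the sum of the upstream gradient --
    computable without storing the layer's input activations."""
    return sum(upstream_grad)
```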
1 code implementation • 9 Oct 2022 • Feng Liang, Bichen Wu, Xiaoliang Dai, Kunpeng Li, Yinan Zhao, Hang Zhang, Peizhao Zhang, Peter Vajda, Diana Marculescu
To address this, we propose to finetune CLIP on a collection of masked image regions and their corresponding text descriptions.
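As an illustrative sketch of the region-text pairing the finetuning objective aligns (hypothetical helper names; the actual method finetunes CLIP's encoders end to end): each masked-region embedding is matched to the candidate text description it is most similar to under cosine similarity.

```python
import math

# Illustrative region-to-text matching via cosine similarity (hypothetical
# helper, not the OVSeg finetuning code): each masked-region embedding is
# assigned the candidate text description it is most similar to.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_regions(region_embs, text_embs):
    """Return, for each region embedding, the index of its best text match."""
    return [max(range(len(text_embs)), key=lambda t: cosine(r, text_embs[t]))
            for r in region_embs]
```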
no code implementations • 7 Sep 2022 • Haisheng Fu, Feng Liang
In addition, methods based on context-adaptive entropy models cannot be accelerated during decoding by parallel computing devices such as FPGAs or GPUs.
no code implementations • 22 Jun 2022 • Yang Zhou, Feng Liang, Ting-Wu Chin, Diana Marculescu
Machine learning (ML) has entered the mobile era where an enormous number of ML models are deployed on edge devices.
no code implementations • 21 Jun 2022 • Haisheng Fu, Feng Liang, Jie Liang, Binglin Li, Guohe Zhang, Jingning Han
Based on this observation, we design an asymmetric paradigm in which the encoder employs three stages of MSRBs to improve learning capacity, whereas the decoder needs only one stage of MSRB to yield a satisfactory reconstruction, thereby reducing decoding complexity without sacrificing performance.
2 code implementations • 28 May 2022 • Feng Liang, Yangguang Li, Diana Marculescu
The proposed Supervised MAE (SupMAE) only exploits a visible subset of image patches for classification, unlike the standard supervised pre-training where all image patches are used.
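A toy sketch of classifying from only visible patches (assumed shapes and a hypothetical linear head, not the SupMAE code): randomly keep a subset of patch features, average-pool the survivors, and score classes with a linear layer.

```python
import random

# Toy version of classification from visible patches only (assumed shapes
# and a hypothetical linear head; not the SupMAE code): drop most patch
# features, average-pool the survivors, then score classes linearly.

def classify_visible(patch_feats, keep_ratio, class_weights, seed=0):
    rng = random.Random(seed)
    n_keep = max(1, int(len(patch_feats) * keep_ratio))
    visible = rng.sample(patch_feats, n_keep)      # the "visible subset"
    dim = len(visible[0])
    pooled = [sum(p[d] for p in visible) / n_keep for d in range(dim)]
    # linear head: one score per class
    return [sum(w[d] * pooled[d] for d in range(dim)) for w in class_weights]
```

In the real method the visible tokens pass through a ViT encoder before pooling; the sketch only shows the subset-then-classify structure.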
no code implementations • 8 May 2022 • Ngan Nguyen, Feng Liang, Dominik Engel, Ciril Bohak, Peter Wonka, Timo Ropinski, Ivan Viola
We propose a new microscopy simulation system that can depict atomistic models in a micrograph visual style, similar to results of physical electron microscopy imaging.
no code implementations • 11 Mar 2022 • Yang Liu, Juan Wang, Zhengxing Chen, Ian Fox, Imani Mufti, Jason Sukumaran, Baokun He, Xiling Sun, Feng Liang
Scheduled batch jobs are widely used on asynchronous computing platforms to execute various enterprise applications, including scheduled notifications and candidate pre-computation for modern recommender systems.
1 code implementation • 11 Mar 2022 • Yufeng Cui, Lichen Zhao, Feng Liang, Yangguang Li, Jing Shao
This is because researchers do not adopt consistent training recipes and even use different data, hampering fair comparison between methods.
no code implementations • 18 Jan 2022 • Luya Wang, Feng Liang, Yangguang Li, Honggang Zhang, Wanli Ouyang, Jing Shao
Recently, self-supervised vision transformers have attracted unprecedented attention for their impressive representation learning ability.
2 code implementations • ICLR 2022 • Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, Junjie Yan
Recently, large-scale Contrastive Language-Image Pre-training (CLIP) has attracted unprecedented attention for its impressive zero-shot recognition ability and excellent transferability to downstream tasks.
no code implementations • 8 Oct 2021 • Yinyin Chen, Shishuang He, Yun Yang, Feng Liang
Our theory introduces a new set of geometric conditions for topic model identifiability, conditions that are weaker than conventional separability conditions, which typically rely on the existence of pure topic documents or of anchor words.
1 code implementation • 14 Jul 2021 • Haisheng Fu, Feng Liang, Jianping Lin, Bing Li, Mohammad Akbari, Jie Liang, Guohe Zhang, Dong Liu, Chengjie Tu, Jingning Han
However, due to the vast diversity of images, it is not optimal to use one model for all images, or even for different regions of the same image.
1 code implementation • CVPR 2021 • Jie Liu, Chuming Li, Feng Liang, Chen Lin, Ming Sun, Junjie Yan, Wanli Ouyang, Dong Xu
To learn complex inception convolutions from data in a practical way, a simple but effective search algorithm, referred to as efficient dilation optimization (EDO), is developed.
no code implementations • 20 Dec 2020 • Yang Liu, Zhengxing Chen, Kittipat Virochsiri, Juan Wang, Jiahao Wu, Feng Liang
We demonstrate statistically significant improvement in daily metrics and resource efficiency by our method in several notification applications at a scale of billions of users.
no code implementations • 30 Nov 2020 • Hsin-Pai Cheng, Feng Liang, Meng Li, Bowen Cheng, Feng Yan, Hai Li, Vikas Chandra, Yiran Chen
We use ScaleNAS to create high-resolution models for two different tasks, ScaleNet-P for human pose estimation and ScaleNet-S for semantic segmentation.
Ranked #5 on Multi-Person Pose Estimation on COCO test-dev
1 code implementation • ICCV 2021 • Mingzhu Shen, Feng Liang, Ruihao Gong, Yuhang Li, Chuming Li, Chen Lin, Fengwei Yu, Junjie Yan, Wanli Ouyang
Therefore, we propose to combine Network Architecture Search methods with quantization to enjoy the merits of both.
no code implementations • 28 Sep 2020 • Mingzhu Shen, Feng Liang, Chuming Li, Chen Lin, Ming Sun, Junjie Yan, Wanli Ouyang
Automatic search of Quantized Neural Networks (QNN) has attracted a lot of attention.
no code implementations • 8 Jul 2020 • Hsin-Pai Cheng, Tunhou Zhang, Yixing Zhang, Shi-Yu Li, Feng Liang, Feng Yan, Meng Li, Vikas Chandra, Hai Li, Yiran Chen
To preserve graph correlation information in encoding, we propose NASGEM which stands for Neural Architecture Search via Graph Embedding Method.
no code implementations • ICLR 2020 • Feng Liang, Chen Lin, Ronghao Guo, Ming Sun, Wei Wu, Junjie Yan, Wanli Ouyang
However, the allocation pattern designed for classification is usually adopted directly for object detectors, which proves to be sub-optimal.
1 code implementation • NeurIPS 2019 • Lingrui Gan, Xinming Yang, Naveen Narisetty, Feng Liang
In this paper, we propose a novel Bayesian group regularization method based on the spike and slab Lasso priors for jointly estimating multiple graphical models.
no code implementations • 15 Jul 2019 • Haisheng Fu, Feng Liang, Bo Lei, Nai Bian, Qian Zhang, Mohammad Akbari, Jie Liang, Chengjie Tu
Recently, deep learning-based methods have been applied to image compression and have achieved many promising results.
no code implementations • 3 Jul 2019 • Nai Bian, Feng Liang, Haisheng Fu, Bo Lei
In this paper, we propose a deep convolutional autoencoder compression network for face recognition tasks.
no code implementations • 6 May 2018 • Lingrui Gan, Naveen N. Narisetty, Feng Liang
We consider a Bayesian framework for estimating a high-dimensional sparse precision matrix, in which adaptive shrinkage and sparsity are induced by a mixture of Laplace priors.
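One standard way to write such a two-component Laplace mixture prior on a precision-matrix entry (our notation; the paper's hyperparameters may differ) is

```latex
\pi(\omega) \;=\; \eta\, \frac{\lambda_1}{2}\, e^{-\lambda_1 |\omega|}
\;+\; (1 - \eta)\, \frac{\lambda_0}{2}\, e^{-\lambda_0 |\omega|},
\qquad \lambda_0 \gg \lambda_1,
```

where $\eta$ is the prior inclusion probability, the $\lambda_1$ component acts as a diffuse slab, and the sharply peaked $\lambda_0$ component acts as a spike shrinking small entries toward zero.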
no code implementations • 5 Sep 2017 • Yingzhen Yang, Feng Liang, Nebojsa Jojic, Shuicheng Yan, Jiashi Feng, Thomas S. Huang
Via generalization analysis based on Rademacher complexity, the generalization error bound for the kernel classifier learned from the hypothetical labeling is expressed as a sum of pairwise similarities between data from different classes, parameterized by the weights of the kernel classifier.
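Schematically (our symbols, omitting constants and the complexity term), a bound of the described shape reads

```latex
\mathrm{err}(f) \;\lesssim\; \sum_{i,j:\, y_i \neq y_j} \alpha_i\, \alpha_j\, K(x_i, x_j),
```

where the $\alpha_i$ are the kernel-classifier weights and $K$ the kernel, so a labeling with low cross-class similarity certifies low generalization error.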
1 code implementation • 16 Feb 2017 • Yunbo Ouyang, Feng Liang
We propose an empirical Bayes estimator based on a Dirichlet process mixture model for estimating the sparse normalized mean difference, which can be directly applied to high-dimensional linear classification.
no code implementations • 25 Aug 2015 • Daniel Khashabi, John Wieting, Jeffrey Yufei Liu, Feng Liang
Empirical studies have been carried out to compare our work with many constrained clustering algorithms from the literature, on a variety of data sets and under a variety of conditions, such as noisy side information and erroneous k values.
no code implementations • NeurIPS 2014 • Yingzhen Yang, Feng Liang, Shuicheng Yan, Zhangyang Wang, Thomas S. Huang
Modeling the underlying data distribution by nonparametric kernel density estimation, the generalization error bounds for both unsupervised nonparametric classifiers are sums of nonparametric pairwise similarity terms between the data points, for the purpose of clustering.
no code implementations • NeurIPS 2014 • James Ridgway, Pierre Alquier, Nicolas Chopin, Feng Liang
We also extend our method to a class of non-linear score functions, essentially leading to a nonparametric procedure, by considering a Gaussian process prior.
no code implementations • CVPR 2013 • Zhen Li, Shiyu Chang, Feng Liang, Thomas S. Huang, Liangliang Cao, John R. Smith
This paper proposes to learn a decision function for verification that can be viewed as a joint model of a distance metric and a locally adaptive thresholding rule.
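One way to write the structure described (schematic, our symbols) is

```latex
f(x_i, x_j) \;=\; t(x_i, x_j) \;-\; (x_i - x_j)^{\top} M\, (x_i - x_j),
```

with a match declared when $f(x_i, x_j) > 0$: the positive semidefinite matrix $M$ plays the role of the distance metric, and $t(\cdot,\cdot)$ is the locally adaptive threshold that varies over the feature space.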
no code implementations • NeurIPS 2009 • Jing Gao, Feng Liang, Wei Fan, Yizhou Sun, Jiawei Han
First, we can boost the diversity of classification ensemble by incorporating multiple clustering outputs, each of which provides grouping constraints for the joint label predictions of a set of related objects.
no code implementations • NeurIPS 2008 • Qiang Wu, Sayan Mukherjee, Feng Liang
We developed localized sliced inverse regression for supervised dimension reduction.