Search Results for author: Zhenguo Li

Found 98 papers, 31 papers with code

Arch-Graph: Acyclic Architecture Relation Predictor for Task-Transferable Neural Architecture Search

1 code implementation 12 Apr 2022 Minbin Huang, Zhijian Huang, Changlin Li, Xin Chen, Hang Xu, Zhenguo Li, Xiaodan Liang

It is able to find the top 0.16% and 0.29% architectures on average on two search spaces under a budget of only 50 models.

Neural Architecture Search

ManiTrans: Entity-Level Text-Guided Image Manipulation via Token-wise Semantic Alignment and Generation

no code implementations 9 Apr 2022 Jianan Wang, Guansong Lu, Hang Xu, Zhenguo Li, Chunjing Xu, Yanwei Fu

Existing text-guided image manipulation methods aim to modify the appearance of the image or to edit a few objects in a virtual or simple scenario, which is far from practical application.

Image Generation Image Manipulation

Generalizing Few-Shot NAS with Gradient Matching

1 code implementation ICLR 2022 Shoukang Hu, Ruochen Wang, Lanqing Hong, Zhenguo Li, Cho-Jui Hsieh, Jiashi Feng

Efficient performance estimation of architectures drawn from large search spaces is essential to Neural Architecture Search.

Neural Architecture Search

CODA: A Real-World Road Corner Case Dataset for Object Detection in Autonomous Driving

no code implementations 15 Mar 2022 Kaican Li, Kai Chen, Haoyu Wang, Lanqing Hong, Chaoqiang Ye, Jianhua Han, Yukuai Chen, Wei Zhang, Chunjing Xu, Dit-Yan Yeung, Xiaodan Liang, Zhenguo Li, Hang Xu

One main reason that impedes the development of truly reliable self-driving systems is the lack of public datasets for evaluating the performance of object detectors on corner cases.

Autonomous Driving Object Detection

Memory Replay with Data Compression for Continual Learning

1 code implementation ICLR 2022 Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li, Lanqing Hong, Shifeng Zhang, Zhenguo Li, Yi Zhong, Jun Zhu

In this work, we propose memory replay with data compression (MRDC) to reduce the storage cost of old training samples and thus increase their amount that can be stored in the memory buffer.

Autonomous Driving class-incremental learning +4
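
The trade-off MRDC exploits is straightforward to express in code: compressing each stored exemplar shrinks its footprint, so more exemplars fit under the same memory budget. Below is a minimal sketch of such a compressed replay buffer, assuming JPEG as the compressor; the quality setting and the buffer interface are illustrative, not the paper's.

```python
# Minimal sketch of memory replay with data compression (MRDC-style).
# JPEG quality and the buffer API are illustrative assumptions.
import io
from PIL import Image

class CompressedReplayBuffer:
    def __init__(self, budget_bytes):
        self.budget_bytes = budget_bytes
        self.used = 0
        self.buffer = []  # list of (compressed_bytes, label)

    def add(self, image: Image.Image, label, quality=60):
        # Compress before storing: lower quality -> more samples fit.
        raw = io.BytesIO()
        image.convert("RGB").save(raw, format="JPEG", quality=quality)
        blob = raw.getvalue()
        if self.used + len(blob) <= self.budget_bytes:
            self.buffer.append((blob, label))
            self.used += len(blob)
            return True
        return False  # buffer full under the memory budget

    def sample(self, idx):
        # Decompress on the fly when replaying old samples.
        blob, label = self.buffer[idx]
        return Image.open(io.BytesIO(blob)).convert("RGB"), label
```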

Long-tail Recognition via Compositional Knowledge Transfer

no code implementations 13 Dec 2021 Sarah Parisot, Pedro M. Esperanca, Steven McDonagh, Tamas J. Madarasz, Yongxin Yang, Zhenguo Li

In this work, we introduce a novel strategy for long-tail recognition that addresses the tail classes' few-shot problem via training-free knowledge transfer.

Transfer Learning

Layer-Parallel Training of Residual Networks with Auxiliary-Variable Networks

no code implementations 10 Dec 2021 Qi Sun, Hexin Dong, Zewei Chen, Jiacheng Sun, Zhenguo Li, Bin Dong

Gradient-based methods for the distributed training of residual networks (ResNets) typically require a forward pass of the input data, followed by back-propagating the error gradient to update model parameters, which becomes time-consuming as the network goes deeper.

Data Augmentation

Understanding Square Loss in Training Overparametrized Neural Network Classifiers

no code implementations 7 Dec 2021 Tianyang Hu, Jun Wang, Wenjia Wang, Zhenguo Li

Compared to cross-entropy, square loss has comparable generalization error but noticeable advantages in robustness and model calibration.

MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps

no code implementations NeurIPS 2021 Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li

First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation.

Transfer Learning

FILIP: Fine-grained Interactive Language-Image Pre-Training

no code implementations ICLR 2022 Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, Chunjing Xu

In this paper, we introduce a large-scale Fine-grained Interactive Language-Image Pre-training (FILIP) to achieve finer-level alignment through a cross-modal late interaction mechanism, which uses a token-wise maximum similarity between visual and textual tokens to guide the contrastive objective.

Image Classification Zero-Shot Image Classification
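
The late-interaction score FILIP describes can be sketched in a few lines: each visual token is matched to its most similar text token and vice versa, and the maxima are averaged. A minimal sketch, assuming L2-normalized token embeddings; the symmetric averaging and names are illustrative.

```python
# Minimal sketch of token-wise maximum similarity (cross-modal late
# interaction) between visual and textual tokens.
import torch

def token_wise_similarity(img_tokens, txt_tokens):
    # img_tokens: (n_img, d), txt_tokens: (n_txt, d), both L2-normalized
    sim = img_tokens @ txt_tokens.t()      # (n_img, n_txt) cosine similarities
    i2t = sim.max(dim=1).values.mean()     # each image token -> best text token
    t2i = sim.max(dim=0).values.mean()     # each text token -> best image token
    return 0.5 * (i2t + t2i)               # symmetric image-text score
```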

AIM: Automatic Interaction Machine for Click-Through Rate Prediction

1 code implementation 5 Nov 2021 Chenxu Zhu, Bo Chen, Weinan Zhang, Jincai Lai, Ruiming Tang, Xiuqiang He, Zhenguo Li, Yong Yu

To address these three issues mentioned above, we propose Automatic Interaction Machine (AIM) with three core components, namely, Feature Interaction Search (FIS), Interaction Function Search (IFS) and Embedding Dimension Search (EDS), to select significant feature interactions, appropriate interaction functions and necessary embedding dimensions automatically in a unified framework.

Click-Through Rate Prediction

OSOA: One-Shot Online Adaptation of Deep Generative Models for Lossless Compression

no code implementations NeurIPS 2021 Chen Zhang, Shifeng Zhang, Fabio Maria Carlucci, Zhenguo Li

To eliminate the requirement of saving separate models for different target datasets, we propose a novel setting that starts from a pretrained deep generative model and compresses the data batches while adapting the model with a dynamical system for only one epoch.

Density Estimation
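
A minimal sketch of this protocol, assuming a likelihood-based model exposing `log_prob` and a hypothetical `encode_fn` coder interface: each batch is compressed with the current model state before the model is updated, so one pass over the data suffices and the decoder can replay the same updates to stay synchronized.

```python
# Minimal sketch of one-shot online adaptation for lossless compression.
# The model/coder interfaces here are hypothetical placeholders.
import torch

def osoa_compress(model, optimizer, batches, encode_fn):
    codes = []
    for x in batches:                      # one pass over the data (one epoch)
        codes.append(encode_fn(model, x))  # compress with the *current* model
        loss = -model.log_prob(x).mean()   # then adapt: maximize likelihood
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return codes  # decoder replays the same updates to stay in sync
```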

iFlow: Numerically Invertible Flows for Efficient Lossless Compression via a Uniform Coder

no code implementations NeurIPS 2021 Shifeng Zhang, Ning Kang, Tom Ryder, Zhenguo Li

In this paper, we discuss lossless compression using normalizing flows which have demonstrated a great capacity for achieving high compression ratios.

Nonlinear ICA Using Volume-Preserving Transformations

no code implementations ICLR 2022 Xiaojiang Yang, Yi Wang, Jiacheng Sun, Xing Zhang, Shifeng Zhang, Zhenguo Li, Junchi Yan

Nonlinear ICA is a fundamental problem in machine learning, aiming to identify the underlying independent components (sources) from data which is assumed to be a nonlinear function (mixing function) of these sources.

Rethinking Adversarial Transferability from a Data Distribution Perspective

no code implementations ICLR 2022 Yao Zhu, Jiacheng Sun, Zhenguo Li

Adversarial transferability enables attackers to generate adversarial examples from the source model to attack the target model, which has raised security concerns about the deployment of DNNs in practice.

Adversarial Attack

How Well Does Self-Supervised Pre-Training Perform with Streaming ImageNet?

no code implementations NeurIPS Workshop ImageNet_PPF 2021 Dapeng Hu, Shipeng Yan, Qizhengqiu Lu, Lanqing Hong, Hailin Hu, Yifan Zhang, Zhenguo Li, Xinchao Wang, Jiashi Feng

Prior works on self-supervised pre-training focus on the joint training scenario, where massive unlabeled data are assumed to be given as input all at once, and only then is a learner trained.

Self-Supervised Learning

Layer-Parallel Training of Residual Networks with Auxiliary Variables

no code implementations NeurIPS Workshop DLDE 2021 Qi Sun, Hexin Dong, Zewei Chen, Weizhen Dian, Jiacheng Sun, Yitong Sun, Zhenguo Li, Bin Dong

The backpropagation algorithm is indispensable for training modern residual networks (ResNets) but tends to be time-consuming due to its inherent algorithmic locking.

Data Augmentation

NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization

no code implementations ICCV 2021 Haoyue Bai, Fengwei Zhou, Lanqing Hong, Nanyang Ye, S.-H. Gary Chan, Zhenguo Li

In this work, we propose robust Neural Architecture Search for OoD generalization (NAS-OoD), which optimizes the architecture with respect to its performance on generated OoD data by gradient descent.

Domain Generalization Neural Architecture Search +1

Adversarial Robustness for Unsupervised Domain Adaptation

no code implementations ICCV 2021 Muhammad Awais, Fengwei Zhou, Hang Xu, Lanqing Hong, Ping Luo, Sung-Ho Bae, Zhenguo Li

Extensive Unsupervised Domain Adaptation (UDA) studies have shown great success in practice by learning transferable representations across a labeled source domain and an unlabeled target domain with deep models.

Adversarial Robustness Unsupervised Domain Adaptation

MultiSiam: Self-supervised Multi-instance Siamese Representation Learning for Autonomous Driving

1 code implementation ICCV 2021 Kai Chen, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-Yan Yeung

By pre-training on SODA10M, a large-scale autonomous driving dataset, MultiSiam exceeds the ImageNet pre-trained MoCo-v2, demonstrating the potential of domain-specific pre-training.

Autonomous Driving Image Clustering +2

Towards Understanding the Generative Capability of Adversarially Robust Classifiers

no code implementations ICCV 2021 Yao Zhu, Jiacheng Ma, Jiacheng Sun, Zewei Chen, Rongxin Jiang, Zhenguo Li

We find that adversarial training contributes to obtaining an energy function that is flat and has low energy around the real data, which is the key for generative capability.

Image Generation

G-DetKD: Towards General Distillation Framework for Object Detectors via Contrastive and Semantic-guided Feature Imitation

no code implementations ICCV 2021 Lewei Yao, Renjie Pi, Hang Xu, Wei Zhang, Zhenguo Li, Tong Zhang

In this paper, we investigate the knowledge distillation (KD) strategy for object detection and propose an effective framework applicable to both homogeneous and heterogeneous student-teacher pairs.

Knowledge Distillation Object Detection

NASOA: Towards Faster Task-oriented Online Fine-tuning with a Zoo of Models

no code implementations ICCV 2021 Hang Xu, Ning Kang, Gengwei Zhang, Chuanlong Xie, Xiaodan Liang, Zhenguo Li

Fine-tuning from pre-trained ImageNet models has been a simple, effective, and popular approach for various computer vision tasks.

Neural Architecture Search

AutoBERT-Zero: Evolving BERT Backbone from Scratch

no code implementations 15 Jul 2021 Jiahui Gao, Hang Xu, Han Shi, Xiaozhe Ren, Philip L. H. Yu, Xiaodan Liang, Xin Jiang, Zhenguo Li

Transformer-based pre-trained language models like BERT and its variants have recently achieved promising performance in various natural language processing (NLP) tasks.

Language Modelling Neural Architecture Search

SODA10M: A Large-Scale 2D Self/Semi-Supervised Object Detection Dataset for Autonomous Driving

no code implementations 21 Jun 2021 Jianhua Han, Xiwen Liang, Hang Xu, Kai Chen, Lanqing Hong, Jiageng Mao, Chaoqiang Ye, Wei Zhang, Zhenguo Li, Xiaodan Liang, Chunjing Xu

Experiments show that SODA10M can serve as a promising pre-training dataset for different self-supervised learning methods, yielding superior performance when fine-tuning on different downstream tasks (i.e., detection, semantic/instance segmentation) in the autonomous driving domain.

Autonomous Driving Instance Segmentation +4

One Million Scenes for Autonomous Driving: ONCE Dataset

1 code implementation 21 Jun 2021 Jiageng Mao, Minzhe Niu, Chenhan Jiang, Hanxue Liang, Jingheng Chen, Xiaodan Liang, Yamin Li, Chaoqiang Ye, Wei Zhang, Zhenguo Li, Jie Yu, Hang Xu, Chunjing Xu

To facilitate future research on exploiting unlabeled data for 3D detection, we additionally provide a benchmark in which we reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.

3D Object Detection Autonomous Driving

Transformation Invariant Few-Shot Object Detection

no code implementations CVPR 2021 Aoxue Li, Zhenguo Li

To this end, we propose a simple yet effective Transformation Invariant Principle (TIP) that can be flexibly applied to various meta-learning models for boosting the detection performance on novel class objects.

Few-Shot Object Detection Meta-Learning

Adversarial Invariant Learning

1 code implementation CVPR 2021 Nanyang Ye, Jingxuan Tang, Huayu Deng, Xiao-Yun Zhou, Qianxiao Li, Zhenguo Li, Guang-Zhong Yang, Zhanxing Zhu

To the best of our knowledge, this is one of the first works to adopt a differentiable environment-splitting method to enable stable predictions across environments without environment index information, achieving state-of-the-art performance on datasets with strong spurious correlations, such as Colored MNIST.

Domain Generalization Out-of-Distribution Generalization

Contextualizing Multiple Tasks via Learning to Decompose

no code implementations 15 Jun 2021 Han-Jia Ye, Da-Wei Zhou, Lanqing Hong, Zhenguo Li, Xiu-Shen Wei, De-Chuan Zhan

One single instance could possess multiple portraits and reveal diverse relationships with others according to different contexts.

Few-Shot Image Classification Meta-Learning

Towards a Theoretical Framework of Out-of-Distribution Generalization

no code implementations NeurIPS 2021 Haotian Ye, Chuanlong Xie, Tianle Cai, Ruichen Li, Zhenguo Li, LiWei Wang

We also introduce a new concept, the expansion function, which characterizes to what extent the variance is amplified in the test domains over the training domains, and therefore gives a quantitative meaning to invariant features.

Domain Generalization Model Selection +1

Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation

no code implementations CVPR 2021 Lewei Yao, Renjie Pi, Hang Xu, Wei Zhang, Zhenguo Li, Tong Zhang

For student morphism, a weight inheritance strategy is adopted, allowing the student to flexibly update its architecture while fully utilizing the predecessor's weights, which considerably accelerates the search. To facilitate dynamic distillation, an elastic teacher pool is trained via an integrated progressive shrinking strategy, from which teacher detectors can be sampled without additional cost in subsequent searches.

Knowledge Distillation Neural Architecture Search +1

TransNAS-Bench-101: Improving Transferability and Generalizability of Cross-Task Neural Architecture Search

1 code implementation CVPR 2021 Yawen Duan, Xin Chen, Hang Xu, Zewei Chen, Xiaodan Liang, Tong Zhang, Zhenguo Li

While existing NAS methods mostly design architectures on a single task, algorithms that look beyond single-task search are surging to pursue a more efficient and universal solution across various tasks.

Neural Architecture Search Transfer Learning

BWCP: Probabilistic Learning-to-Prune Channels for ConvNets via Batch Whitening

no code implementations 13 May 2021 Wenqi Shao, Hang Yu, Zhaoyang Zhang, Hang Xu, Zhenguo Li, Ping Luo

To address this problem, we develop a probability-based pruning algorithm, called batch whitening channel pruning (BWCP), which can stochastically discard unimportant channels by modeling the probability of a channel being activated.

How Well Does Self-Supervised Pre-Training Perform with Streaming Data?

no code implementations ICLR 2022 Dapeng Hu, Shipeng Yan, Qizhengqiu Lu, Lanqing Hong, Hailin Hu, Yifan Zhang, Zhenguo Li, Xinchao Wang, Jiashi Feng

Prior works on self-supervised pre-training focus on the joint training scenario, where massive unlabeled data are assumed to be given as input all at once, and only then is a learner trained.

Representation Learning Self-Supervised Learning

SparseBERT: Rethinking the Importance Analysis in Self-attention

1 code implementation 25 Feb 2021 Han Shi, Jiahui Gao, Xiaozhe Ren, Hang Xu, Xiaodan Liang, Zhenguo Li, James T. Kwok

A surprising result is that diagonal elements in the attention map are the least important compared with other attention positions.
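
As an intuition-level illustration of that finding, the sketch below computes scaled dot-product self-attention with the diagonal masked out, so no token attends to itself. This is a simplification, not SparseBERT's actual sparse mask design.

```python
# Minimal sketch of self-attention with the diagonal positions removed,
# motivated by the finding that diagonal attention matters least.
import torch

def attention_without_diagonal(q, k, v):
    # q, k, v: (seq_len, d)
    scores = q @ k.t() / (q.shape[-1] ** 0.5)   # (seq_len, seq_len)
    scores.fill_diagonal_(float("-inf"))        # mask out attention to self
    return torch.softmax(scores, dim=-1) @ v
```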

Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search

1 code implementation ICLR 2021 Peidong Liu, Gengwei Zhang, Bochao Wang, Hang Xu, Xiaodan Liang, Yong Jiang, Zhenguo Li

For object detection, the well-established classification and regression loss functions have been carefully designed by considering diverse learning challenges.

Object Detection

DetCo: Unsupervised Contrastive Learning for Object Detection

2 code implementations ICCV 2021 Enze Xie, Jian Ding, Wenhai Wang, Xiaohang Zhan, Hang Xu, Peize Sun, Zhenguo Li, Ping Luo

Unlike most recent methods that focused on improving accuracy of image classification, we present a novel contrastive learning approach, named DetCo, which fully explores the contrasts between global image and local image patches to learn discriminative representations for object detection.

Contrastive Learning Image Classification +2

Relaxed Conditional Image Transfer for Semi-supervised Domain Adaptation

no code implementations 5 Jan 2021 Qijun Luo, Zhili Liu, Lanqing Hong, Chongxuan Li, Kuo Yang, Liyuan Wang, Fengwei Zhou, Guilin Li, Zhenguo Li, Jun Zhu

Semi-supervised domain adaptation (SSDA), which aims to learn models in a partially labeled target domain with the assistance of a fully labeled source domain, has attracted increasing attention in recent years.

Domain Adaptation

Optimal Designs of Gaussian Processes with Budgets for Hyperparameter Optimization

no code implementations 1 Jan 2021 Yimin Huang, YuJun Li, Zhenguo Li, Zhihua Zhang

Moreover, comparisons between different initial designs with the same model show the advantage of the proposed optimal design.

Gaussian Processes Hyperparameter Optimization

Fewmatch: Dynamic Prototype Refinement for Semi-Supervised Few-Shot Learning

no code implementations 1 Jan 2021 Xu Lan, Steven McDonagh, Shaogang Gong, Jiali Wang, Zhenguo Li, Sarah Parisot

Semi-Supervised Few-shot Learning (SS-FSL) investigates the benefit of incorporating unlabelled data in few-shot settings.

Few-Shot Learning

SAD: Saliency Adversarial Defense without Adversarial Training

no code implementations 1 Jan 2021 Yao Zhu, Jiacheng Sun, Zewei Chen, Zhenguo Li

We justify the algorithm with a linear model, showing that the added saliency maps pull data away from their closest decision boundaries.

Adversarial Defense
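
A minimal sketch of the operation that analysis refers to, taking the saliency map to be the input gradient of the top-class score; the step size `eps` and the signed-gradient form are illustrative assumptions. In the linear case, adding the saliency increases the predicted class's score and thus moves the input away from its closest decision boundary.

```python
# Minimal sketch: add an input-gradient saliency map to the input.
# eps and the sign() form are illustrative choices, not the paper's.
import torch

def add_saliency(model, x, eps=0.1):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.max(dim=1).values.sum()   # top-class score per sample
    grad, = torch.autograd.grad(score, x)    # input-gradient saliency map
    return (x + eps * grad.sign()).detach()  # push away from the boundary
```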

TransNAS-Bench-101: Improving Transferability and Generalizability of Cross-Task Neural Architecture Search

2 code implementations 1 Jan 2021 Yawen Duan, Xin Chen, Hang Xu, Zewei Chen, Xiaodan Liang, Tong Zhang, Zhenguo Li

While existing NAS methods mostly design architectures on a single task, algorithms that look beyond single-task search are surging to pursue a more efficient and universal solution across various tasks.

Neural Architecture Search Transfer Learning

DiffAutoML: Differentiable Joint Optimization for Efficient End-to-End Automated Machine Learning

no code implementations 1 Jan 2021 Kaichen Zhou, Lanqing Hong, Fengwei Zhou, Binxin Ru, Zhenguo Li, Niki Trigoni, Jiashi Feng

Our method performs co-optimization of the neural architectures, training hyper-parameters and data augmentation policies in an end-to-end fashion without the need of model retraining.

Data Augmentation Neural Architecture Search

Exploring Geometry-Aware Contrast and Clustering Harmonization for Self-Supervised 3D Object Detection

no code implementations ICCV 2021 Hanxue Liang, Chenhan Jiang, Dapeng Feng, Xin Chen, Hang Xu, Xiaodan Liang, Wei Zhang, Zhenguo Li, Luc Van Gool

Here we present a novel self-supervised 3D Object detection framework that seamlessly integrates the geometry-aware contrast and clustering harmonization to lift the unsupervised 3D representation learning, named GCC-3D.

3D Object Detection Representation Learning +1

NASOA: Towards Faster Task-oriented Online Fine-tuning

no code implementations 1 Jan 2021 Hang Xu, Ning Kang, Gengwei Zhang, Xiaodan Liang, Zhenguo Li

The resulting model zoo is more training-efficient than SOTA NAS models, e.g., 6x faster than RegNetY-16GF and 1.7x faster than EfficientNet-B3.

Neural Architecture Search

An Embedding Learning Framework for Numerical Features in CTR Prediction

1 code implementation 16 Dec 2020 Huifeng Guo, Bo Chen, Ruiming Tang, Weinan Zhang, Zhenguo Li, Xiuqiang He

In this paper, we propose a novel embedding learning framework for numerical features in CTR prediction (AutoDis) with high model capacity, end-to-end training and unique representation properties preserved.

Click-Through Rate Prediction Feature Engineering +1

Batch Group Normalization

no code implementations 4 Dec 2020 Xiao-Yun Zhou, Jiacheng Sun, Nanyang Ye, Xu Lan, Qijun Luo, Bo-Lin Lai, Pedro Esperanca, Guang-Zhong Yang, Zhenguo Li

Among previous normalization methods, Batch Normalization (BN) performs well at medium and large batch sizes and generalizes well to multiple vision tasks, but its performance degrades significantly at small batch sizes.

Few-Shot Learning Image Classification +2

MOFA: Modular Factorial Design for Hyperparameter Optimization

no code implementations 18 Nov 2020 Bo Xiong, Yimin Huang, Hanrong Ye, Steffen Staab, Zhenguo Li

MOFA pursues several rounds of HPO, where each round alternates between exploration of hyperparameter space by factorial design and exploitation of evaluation results by factorial analysis.

Hyperparameter Optimization Model Selection

A Practical Layer-Parallel Training Algorithm for Residual Networks

no code implementations 3 Sep 2020 Qi Sun, Hexin Dong, Zewei Chen, Weizhen Dian, Jiacheng Sun, Yitong Sun, Zhenguo Li, Bin Dong

Gradient-based algorithms for training ResNets typically require a forward pass of the input data, followed by back-propagating the objective gradient to update parameters, which is time-consuming for deep ResNets.

Data Augmentation

CurveLane-NAS: Unifying Lane-Sensitive Architecture Search and Adaptive Point Blending

1 code implementation ECCV 2020 Hang Xu, Shaoju Wang, Xinyue Cai, Wei Zhang, Xiaodan Liang, Zhenguo Li

In this paper, we propose a novel lane-sensitive architecture search framework named CurveLane-NAS to automatically capture both long-ranged coherent and accurate short-range curve information while unifying both architecture search and post-processing on curve lane predictions via point blending.

Autonomous Driving Lane Detection

AABO: Adaptive Anchor Box Optimization for Object Detection via Bayesian Sub-sampling

no code implementations ECCV 2020 Wenshuo Ma, Tingzhong Tian, Hang Xu, Yimin Huang, Zhenguo Li

By carefully analyzing the existing bounding box patterns on the feature hierarchy, we design a flexible and tight hyper-parameter space for anchor configurations.

Object Detection

An Asymptotically Optimal Multi-Armed Bandit Algorithm and Hyperparameter Optimization

1 code implementation 11 Jul 2020 Yimin Huang, Yu-Jun Li, Hanrong Ye, Zhenguo Li, Zhihua Zhang

The evaluation of hyperparameters, neural architectures, or data augmentation policies becomes a critical model selection problem in advanced deep learning with a large hyperparameter search space.

Data Augmentation Hyperparameter Optimization +4

Decoder-free Robustness Disentanglement without (Additional) Supervision

no code implementations 2 Jul 2020 Yifei Wang, Dan Peng, Furui Liu, Zhenguo Li, Zhitang Chen, Jiansheng Yang

Adversarial Training (AT) is proposed to alleviate the adversarial vulnerability of machine learning models by extracting only robust features from the input, which, however, inevitably leads to severe accuracy reduction as it discards the non-robust yet useful features.


New Interpretations of Normalization Methods in Deep Learning

no code implementations 16 Jun 2020 Jiacheng Sun, Xiangyong Cao, Hanwen Liang, Weiran Huang, Zewei Chen, Zhenguo Li

In recent years, a variety of normalization methods have been proposed to help train neural networks, such as batch normalization (BN), layer normalization (LN), weight normalization (WN), group normalization (GN), etc.
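
These methods differ mainly in which axes of an NCHW activation tensor they normalize over; the sketch below makes the contrast concrete (WN, which normalizes weights rather than activations, is omitted, and the group count of 4 is an arbitrary choice).

```python
# Minimal sketch contrasting the reduction axes of BN, LN, and GN on an
# (N, C, H, W) activation tensor; eps is the standard numerical guard.
import torch

def normalize(x, dims, eps=1e-5):
    mean = x.mean(dim=dims, keepdim=True)
    var = x.var(dim=dims, keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)

x = torch.randn(8, 32, 14, 14)                  # (N, C, H, W)
bn = normalize(x, dims=(0, 2, 3))               # BN: over batch and spatial dims
ln = normalize(x, dims=(1, 2, 3))               # LN: over channels and spatial dims
gn = normalize(x.view(8, 4, 8, 14, 14),         # GN: split C=32 into 4 groups
               dims=(2, 3, 4)).view(8, 32, 14, 14)
```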

Risk Variance Penalization

no code implementations 13 Jun 2020 Chuanlong Xie, Haotian Ye, Fei Chen, Yue Liu, Rui Sun, Zhenguo Li

The key to out-of-distribution (OOD) generalization is to generalize invariance from training domains to target domains.

Boosting Few-Shot Learning With Adaptive Margin Loss

no code implementations CVPR 2020 Aoxue Li, Weiran Huang, Xu Lan, Jiashi Feng, Zhenguo Li, Li-Wei Wang

Few-shot learning (FSL) has attracted increasing attention in recent years but remains challenging, due to the intrinsic difficulty in learning to generalize from a few examples.

Few-Shot Image Classification Semantic Similarity +1
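
A minimal sketch of the margin idea, with hypothetical per-class margins standing in for the paper's semantic-similarity-derived ones: subtracting a margin from the true-class logit enforces a larger separation for classes that are easily confused.

```python
# Minimal sketch of a margin-augmented cross-entropy: subtract a
# per-class margin from the true-class logit. The margins tensor is a
# placeholder for semantic-similarity-derived values.
import torch
import torch.nn.functional as F

def adaptive_margin_loss(logits, targets, margins):
    # logits: (B, C); margins: (C,), larger for easily confused classes
    adjusted = logits.clone()
    idx = torch.arange(logits.shape[0])
    adjusted[idx, targets] -= margins[targets]  # harder positive -> wider margin
    return F.cross_entropy(adjusted, targets)
```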

Rethinking Performance Estimation in Neural Architecture Search

1 code implementation CVPR 2020 Xiawu Zheng, Rongrong Ji, Qiang Wang, Qixiang Ye, Zhenguo Li, Yonghong Tian, Qi Tian

In this paper, we provide a novel yet systematic rethinking of PE in a resource constrained regime, termed budgeted PE (BPE), which precisely and effectively estimates the performance of an architecture sampled from an architecture space.

Neural Architecture Search

AutoFIS: Automatic Feature Interaction Selection in Factorization Models for Click-Through Rate Prediction

4 code implementations 25 Mar 2020 Bin Liu, Chenxu Zhu, Guilin Li, Wei-Nan Zhang, Jincai Lai, Ruiming Tang, Xiuqiang He, Zhenguo Li, Yong Yu

By implementing a regularized optimizer over the architecture parameters, the model can automatically identify and remove the redundant feature interactions during the training process of the model.

Click-Through Rate Prediction Recommendation Systems
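
The selection mechanism can be sketched on a factorization-machine layer: one learnable gate per pairwise interaction, sparsified so that redundant interactions are driven to zero and removed. Below, a plain L1 penalty stands in for the paper's regularized optimizer; field count and shapes are illustrative.

```python
# Minimal sketch of gated pairwise feature interactions with an
# L1-style sparsity penalty on the gates (a simplification of the
# paper's regularized-optimizer approach).
import torch
import torch.nn as nn

class GatedFMInteractions(nn.Module):
    def __init__(self, num_fields):
        super().__init__()
        pairs = num_fields * (num_fields - 1) // 2
        self.alpha = nn.Parameter(torch.ones(pairs))  # one gate per interaction

    def forward(self, emb):                           # emb: (B, F, d)
        i, j = torch.triu_indices(emb.shape[1], emb.shape[1], offset=1)
        inter = (emb[:, i] * emb[:, j]).sum(-1)       # (B, pairs) inner products
        return (self.alpha * inter).sum(-1)           # gated sum of interactions

    def l1_penalty(self):
        return self.alpha.abs().sum()                 # drives useless gates to zero
```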

EHSOD: CAM-Guided End-to-end Hybrid-Supervised Object Detection with Cascade Refinement

no code implementations 18 Feb 2020 Linpu Fang, Hang Xu, Zhili Liu, Sarah Parisot, Zhenguo Li

In this paper, we study the hybrid-supervised object detection problem, aiming to train a high-quality detector with only a limited amount of fully-annotated data while fully exploiting cheap data with image-level labels.

Object Detection

Universal-RCNN: Universal Object Detector via Transferable Graph R-CNN

no code implementations 18 Feb 2020 Hang Xu, Linpu Fang, Xiaodan Liang, Wenxiong Kang, Zhenguo Li

Finally, an InterDomain Transfer Module is proposed to exploit diverse transfer dependencies across all domains and enhance the regional feature representation by attending and transferring semantic contexts globally.

Object Detection Transfer Learning

Multi-objective Neural Architecture Search via Non-stationary Policy Gradient

no code implementations 23 Jan 2020 Zewei Chen, Fengwei Zhou, George Trimponias, Zhenguo Li

Despite recent progress, the problem of approximating the full Pareto front accurately and efficiently remains challenging.

Neural Architecture Search

MetaSelector: Meta-Learning for Recommendation with User-Level Adaptive Model Selection

no code implementations 22 Jan 2020 Mi Luo, Fei Chen, Pengxiang Cheng, Zhenhua Dong, Xiuqiang He, Jiashi Feng, Zhenguo Li

Recommender systems often face heterogeneous datasets containing highly personalized historical data of users, where no single model could give the best recommendation for every user.

Meta-Learning Model Selection +1

Meta-Learning PAC-Bayes Priors in Model Averaging

no code implementations 24 Dec 2019 Yimin Huang, Weiran Huang, Liang Li, Zhenguo Li

In this paper, we consider the scenario in which a common model set is used for model averaging, rather than selecting a single final model via a model selection procedure, in order to account for model uncertainty and improve the reliability and accuracy of inferences.

Meta-Learning Model Selection

SM-NAS: Structural-to-Modular Neural Architecture Search for Object Detection

no code implementations 22 Nov 2019 Lewei Yao, Hang Xu, Wei Zhang, Xiaodan Liang, Zhenguo Li

In this paper, we present a two-stage coarse-to-fine searching strategy named Structural-to-Modular NAS (SM-NAS) for searching a GPU-friendly design of both an efficient combination of modules and better modular-level architecture for object detection.

Neural Architecture Search Object Detection

Bridging the Gap between Sample-based and One-shot Neural Architecture Search with BONAS

1 code implementation NeurIPS 2020 Han Shi, Renjie Pi, Hang Xu, Zhenguo Li, James T. Kwok, Tong Zhang

In this work, we propose BONAS (Bayesian Optimized Neural Architecture Search), a sample-based NAS framework which is accelerated using weight-sharing to evaluate multiple related architectures simultaneously.

Neural Architecture Search

Hierarchical Neural Architecture Search via Operator Clustering

1 code implementation 26 Sep 2019 Guilin Li, Xing Zhang, Zitong Wang, Matthias Tan, Jiashi Feng, Zhenguo Li, Tong Zhang

Recently, the efficiency of automatic neural architecture design has been significantly improved by gradient-based search methods such as DARTS.

Neural Architecture Search

Multi-objective Neural Architecture Search via Predictive Network Performance Optimization

no code implementations 25 Sep 2019 Han Shi, Renjie Pi, Hang Xu, Zhenguo Li, James T. Kwok, Tong Zhang

Inspired by the nature of the graph structure of a neural network, we propose BOGCN-NAS, a NAS algorithm using Bayesian Optimization with Graph Convolutional Network (GCN) predictor.

Neural Architecture Search

DARTS+: Improved Differentiable Architecture Search with Early Stopping

no code implementations 13 Sep 2019 Hanwen Liang, Shifeng Zhang, Jiacheng Sun, Xingqiu He, Weiran Huang, Kechen Zhuang, Zhenguo Li

Therefore, we propose a simple and effective algorithm, named "DARTS+", to avoid the collapse and improve the original DARTS, by "early stopping" the search procedure when meeting a certain criterion.

Meta Reinforcement Learning with Task Embedding and Shared Policy

1 code implementation 16 May 2019 Lin Lan, Zhenguo Li, Xiaohong Guan, Pinghui Wang

Despite significant progress, deep reinforcement learning (RL) suffers from data-inefficiency and limited generalization.

Meta-Learning Meta Reinforcement Learning +1

Formulating Camera-Adaptive Color Constancy as a Few-shot Meta-Learning Problem

no code implementations 28 Nov 2018 Steven McDonagh, Sarah Parisot, Fengwei Zhou, Xing Zhang, Ales Leonardis, Zhenguo Li, Gregory Slabaugh

In this work, we propose a new approach that affords fast adaptation to previously unseen cameras, and robustness to changes in capture device by leveraging annotated samples across different cameras and datasets.

Few-Shot Camera-Adaptive Color Constancy Frame +1

DeepFM: An End-to-End Wide & Deep Learning Framework for CTR Prediction

5 code implementations 12 Apr 2018 Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He, Zhenhua Dong

In this paper, we study two instances of DeepFM where the "deep" component is a DNN and a PNN respectively, denoted DeepFM-D and DeepFM-P. Comprehensive experiments are conducted to demonstrate the effectiveness of DeepFM-D and DeepFM-P over existing models for CTR prediction, on both benchmark and commercial data.

Click-Through Rate Prediction Feature Engineering +1
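
For reference, the shared-embedding structure of the DNN variant (DeepFM-D) can be sketched as follows; field count, embedding size, and hidden width are illustrative, and the embedding lookup is assumed to happen upstream.

```python
# Minimal sketch of a DeepFM-style model: an FM component and a DNN
# sharing the same feature embeddings, summed into one logit.
import torch
import torch.nn as nn

class DeepFM(nn.Module):
    def __init__(self, num_fields=10, dim=8):
        super().__init__()
        self.linear = nn.Linear(num_fields, 1)               # first-order term
        self.dnn = nn.Sequential(
            nn.Linear(num_fields * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x_linear, emb):                        # emb: (B, F, d)
        sum_sq = emb.sum(1) ** 2                             # (sum of embeddings)^2
        sq_sum = (emb ** 2).sum(1)                           # sum of squared embeddings
        fm = 0.5 * (sum_sq - sq_sum).sum(1, keepdim=True)    # 2nd-order FM term
        deep = self.dnn(emb.flatten(1))                      # DNN on shared embeddings
        return torch.sigmoid(self.linear(x_linear) + fm + deep)
```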

Federated Meta-Learning with Fast Convergence and Efficient Communication

no code implementations 22 Feb 2018 Fei Chen, Mi Luo, Zhenhua Dong, Zhenguo Li, Xiuqiang He

Statistical and systematic challenges in collaboratively training machine learning models across distributed networks of mobile devices have been the bottlenecks in the real-world application of federated learning.

Federated Learning Meta-Learning +1

Deep Meta-Learning: Learning to Learn in the Concept Space

no code implementations 10 Feb 2018 Fengwei Zhou, Bin Wu, Zhenguo Li

Few-shot learning remains challenging for meta-learning that learns a learning algorithm (meta-learner) from many related tasks.

Few-Shot Learning

Graph Edge Partitioning via Neighborhood Heuristic

1 code implementation 13 Aug 2017 Chenzi Zhang, Fan Wei, Qin Liu, Zhihao Gavin Tang, Zhenguo Li

We provide a worst-case upper bound of replication factor for our heuristic on general graphs.

Meta-SGD: Learning to Learn Quickly for Few-Shot Learning

7 code implementations 31 Jul 2017 Zhenguo Li, Fengwei Zhou, Fei Chen, Hang Li

In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial.

Few-Shot Learning reinforcement-learning
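
Concretely, the Meta-SGD meta-learner consists of an initialization and a per-parameter learning rate, both meta-learned, and adaptation to a new task is a single gradient step. A minimal sketch of the inner update (outer loop omitted; `loss_fn` and `support` are hypothetical placeholders):

```python
# Minimal sketch of the Meta-SGD inner update:
#   theta' = theta - alpha * grad, with alpha learned per parameter.
import torch

def meta_sgd_adapt(theta, alpha, loss_fn, support):
    # theta, alpha: lists of tensors of identical shapes;
    # theta entries must have requires_grad=True for the outer loop.
    loss = loss_fn(theta, support)
    grads = torch.autograd.grad(loss, theta, create_graph=True)
    return [t - a * g for t, a, g in zip(theta, alpha, grads)]
```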

New Insights Into Laplacian Similarity Search

no code implementations CVPR 2015 Xiao-Ming Wu, Zhenguo Li, Shih-Fu Chang

Graph-based computer vision applications rely critically on similarity metrics which compute the pairwise similarity between any pair of vertices on graphs.

Image Retrieval

Locally Linear Hashing for Extracting Non-Linear Manifolds

no code implementations CVPR 2014 Go Irie, Zhenguo Li, Xiao-Ming Wu, Shih-Fu Chang

Previous efforts in hashing intend to preserve data variance or pairwise affinity, but neither is adequate in capturing the manifold structures hidden in most visual data.


Analyzing the Harmonic Structure in Graph-Based Learning

no code implementations NeurIPS 2013 Xiao-Ming Wu, Zhenguo Li, Shih-Fu Chang

We show that, either explicitly or implicitly, various well-known graph-based models exhibit a common, significant harmonic structure in their target functions: the value of a vertex is approximately the weighted average of the values of its adjacent neighbors.
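
In symbols, with edge weights w_ij, the stated harmonic property reads:

```latex
f(i) \;\approx\; \frac{\sum_{j \sim i} w_{ij}\, f(j)}{\sum_{j \sim i} w_{ij}}
```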

Learning with Partially Absorbing Random Walks

no code implementations NeurIPS 2012 Xiao-Ming Wu, Zhenguo Li, Anthony M. So, John Wright, Shih-Fu Chang

We prove that under proper absorption rates, a random walk starting from a set $\mathcal{S}$ of low conductance will be mostly absorbed in $\mathcal{S}$.
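
For reference, a partially absorbing random walk of this kind is driven by per-vertex absorption rates: at vertex i with weighted degree d_i = sum_j w_ij and absorption rate alpha_i, the walk is absorbed in place with some probability and otherwise steps to a neighbor. Under the standard formulation (our notation):

```latex
p_{ii} = \frac{\alpha_i}{\alpha_i + d_i},
\qquad
p_{ij} = \frac{w_{ij}}{\alpha_i + d_i} \quad (j \sim i)
```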

Fast Graph Laplacian Regularized Kernel Learning via Semidefinite-Quadratic-Linear Programming

no code implementations NeurIPS 2009 Xiao-Ming Wu, Anthony M. So, Zhenguo Li, Shuo-Yen R. Li

In this paper, we show that a large class of kernel learning problems can be reformulated as semidefinite-quadratic-linear programs (SQLPs), which only contain a simple positive semidefinite constraint, a second-order cone constraint and a number of linear constraints.

Dimensionality Reduction
