Search Results for author: Xuanyi Dong

Found 31 papers, 18 papers with code

DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining

1 code implementation • 17 May 2023 • Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu

On the GLaM dataset, DoReMi, which has no knowledge of downstream tasks, even matches the performance of using domain weights tuned on downstream tasks.

Language Modelling
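
The snippet above mentions domain weights tuned for a pretraining mixture. As a hedged illustration only (not DoReMi's actual weight-optimization procedure, which the snippet does not detail), sampling pretraining domains according to fixed mixture weights can be sketched as follows; the domain names and weights are made up:

```python
import random

def sample_domains(domain_weights, n, seed=0):
    """Draw n pretraining domains i.i.d. from the given mixture weights."""
    rng = random.Random(seed)
    domains = list(domain_weights)
    weights = [domain_weights[d] for d in domains]
    return rng.choices(domains, weights=weights, k=n)

# Hypothetical domain weights, purely for illustration.
mixture = {"web": 0.5, "books": 0.3, "code": 0.2}
draws = sample_domains(mixture, 10000)
# Empirical frequencies should track the mixture weights.
frequencies = {d: draws.count(d) / len(draws) for d in mixture}
```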

PyGlove: Efficiently Exchanging ML Ideas as Code

1 code implementation • 3 Feb 2023 • Daiyi Peng, Xuanyi Dong, Esteban Real, Yifeng Lu, Quoc V. Le

We also perform a case study of a large codebase where PyGlove led to an 80% reduction in the number of lines of code.

Full-Cycle Energy Consumption Benchmark for Low-Carbon Computer Vision

no code implementations • 30 Aug 2021 • Bo Li, Xinyang Jiang, Donglin Bai, Yuge Zhang, Ningxin Zheng, Xuanyi Dong, Lu Liu, Yuqing Yang, Dongsheng Li

The energy consumption of deep learning models is increasing at a breathtaking rate, which raises concerns due to potential negative effects on carbon neutrality in the context of global warming and climate change.

Model Compression

Isometric Propagation Network for Generalized Zero-shot Learning

no code implementations • ICLR 2021 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang

To resolve this problem, we propose Isometric Propagation Network (IPN), which learns to strengthen the relation between classes within each space and align the class dependency in the two spaces.

Generalized Zero-Shot Learning

Supervision by Registration and Triangulation for Landmark Detection

1 code implementation • 25 Jan 2021 • Xuanyi Dong, Yi Yang, Shih-En Wei, Xinshuo Weng, Yaser Sheikh, Shoou-I Yu

End-to-end training is made possible by differentiable registration and 3D triangulation modules.

Optical Flow Estimation

MASP: Model-Agnostic Sample Propagation for Few-shot learning

no code implementations • 1 Jan 2021 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang

Few-shot learning aims to train a classifier given only a few samples per class, far too few to describe the whole data distribution.

Few-Shot Learning

NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size

2 code implementations • 28 Aug 2020 • Xuanyi Dong, Lu Liu, Katarzyna Musial, Bogdan Gabrys

In this paper, we propose NATS-Bench, a unified benchmark on searching for both topology and size, for (almost) any up-to-date NAS algorithm.

Benchmarking Neural Architecture Search

AutoHAS: Efficient Hyperparameter and Architecture Search

no code implementations • 5 Jun 2020 • Xuanyi Dong, Mingxing Tan, Adams Wei Yu, Daiyi Peng, Bogdan Gabrys, Quoc V. Le

Efficient hyperparameter or architecture search methods have shown remarkable results, but each of them is only applicable to searching for either hyperparameters (HPs) or architectures.

Hyperparameter Optimization • Neural Architecture Search +1

One-Shot Neural Architecture Search via Self-Evaluated Template Network

4 code implementations • ICCV 2019 • Xuanyi Dong, Yi Yang

In this paper, we propose a Self-Evaluated Template Network (SETN) to improve the quality of the architecture candidates for evaluation so that it is more likely to cover competitive candidates.

Neural Architecture Search

Searching for A Robust Neural Architecture in Four GPU Hours

6 code implementations • CVPR 2019 • Xuanyi Dong, Yi Yang

To avoid traversing all the possibilities of the sub-graphs, we develop a differentiable sampler over the DAG.

Neural Architecture Search
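
The snippet above mentions a differentiable sampler over the DAG of sub-graphs. A common way to make a discrete sample differentiable is a Gumbel-softmax relaxation; the sketch below illustrates that generic idea (it is not the paper's exact implementation, and the logits are made-up numbers for one DAG edge with 4 candidate operations):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, seed=0):
    """Relaxed sample from a categorical distribution over candidate ops.

    Adding Gumbel noise and applying a temperature-scaled softmax yields a
    near-one-hot vector whose argmax is a sample from softmax(logits),
    while the output stays differentiable with respect to the logits.
    """
    rng = np.random.default_rng(seed)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max())  # numerically stable softmax
    return y / y.sum()

# Hypothetical architecture logits for one edge, 4 candidate operations.
logits = np.array([2.0, 0.5, 0.1, -1.0])
probs = gumbel_softmax(logits, tau=0.5)
chosen_op = int(np.argmax(probs))  # the op used in this forward pass
```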

Teacher Supervises Students How to Learn From Partially Labeled Images for Facial Landmark Detection

2 code implementations • ICCV 2019 • Xuanyi Dong, Yi Yang

A typical approach is to (1) train a detector on the labeled images; (2) generate new training samples using this detector's prediction as pseudo labels of unlabeled images; (3) retrain the detector on the labeled samples and partial pseudo labeled samples.

 Ranked #1 on Facial Landmark Detection on 300W (Full) (using extra training data)

Facial Landmark Detection
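
The three-step loop in the snippet above (train, pseudo-label, retrain) can be sketched generically. The `NearestCentroid` stand-in classifier, the confidence threshold, and the toy 2-D data below are illustrative choices, not the paper's teacher-student method:

```python
import numpy as np

class NearestCentroid:
    """Tiny stand-in classifier so the sketch is self-contained."""
    def fit(self, x, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([x[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict_proba(self, x):
        d = np.linalg.norm(x[:, None] - self.centroids_[None], axis=2)
        scores = np.exp(-d)
        return scores / scores.sum(axis=1, keepdims=True)

def self_train(detector, x_lab, y_lab, x_unlab, rounds=3, threshold=0.6):
    x_train, y_train = x_lab, y_lab
    for _ in range(rounds):
        detector.fit(x_train, y_train)            # (1) train on labeled pool
        probs = detector.predict_proba(x_unlab)   # (2) pseudo-label unlabeled data
        pseudo = detector.classes_[probs.argmax(axis=1)]
        keep = probs.max(axis=1) >= threshold     # keep only confident predictions
        x_train = np.concatenate([x_lab, x_unlab[keep]])   # (3) retrain on both
        y_train = np.concatenate([y_lab, pseudo[keep]])
    detector.fit(x_train, y_train)
    return detector

# Two well-separated 2-D clusters; only two points per class are labeled.
rng = np.random.default_rng(0)
x_lab = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y_lab = np.array([0, 0, 1, 1])
x_unlab = np.concatenate([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
model = self_train(NearestCentroid(), x_lab, y_lab, x_unlab)
preds = model.classes_[model.predict_proba(x_unlab).argmax(axis=1)]
```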

Network Pruning via Transformable Architecture Search

4 code implementations • NeurIPS 2019 • Xuanyi Dong, Yi Yang

The size with maximum probability in each distribution serves as the width or depth of the pruned network, whose parameters are learned by knowledge transfer, e.g., knowledge distillation, from the original networks.

Knowledge Distillation • Network Pruning +2
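
The snippet above selects the pruned network's width and depth as the most probable size under learned distributions. A minimal numeric sketch of that selection step (with made-up candidate sizes and logits; in the paper the logits are optimized jointly with the network):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical candidate widths (channels) and depths (layers), with
# hypothetical learned logits over each set of candidates.
width_candidates = np.array([16, 32, 48, 64])
depth_candidates = np.array([8, 14, 20])
width_logits = np.array([0.2, 1.5, 0.3, -0.4])
depth_logits = np.array([-0.1, 0.9, 0.2])

# The size with maximum probability in each distribution defines the
# pruned network's width and depth.
width = int(width_candidates[softmax(width_logits).argmax()])
depth = int(depth_candidates[softmax(depth_logits).argmax()])
```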

Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks

2 code implementations • 22 Aug 2018 • Yang He, Xuanyi Dong, Guoliang Kang, Yanwei Fu, Chenggang Yan, Yi Yang

With asymptotic pruning, the information of the training set would be gradually concentrated in the remaining filters, so the subsequent training and pruning process would be stable.

Image Classification

Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks

6 code implementations • 21 Aug 2018 • Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, Yi Yang

Therefore, the network trained by our method has a larger model capacity to learn from the training data.
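The "soft" in soft filter pruning means low-importance filters are zeroed but kept in the model and still updated by later training steps, which is why the network retains its capacity as the snippet notes; in the asymptotic variant above, the pruning ratio grows gradually across epochs. A minimal sketch of one soft-pruning step, assuming an L2-norm importance criterion (the ratio and layer shape below are illustrative):

```python
import numpy as np

def soft_prune(conv_weight, prune_ratio):
    """Zero the filters with the smallest L2 norms, without removing them.

    conv_weight: array of shape (out_channels, in_channels, k, k).
    Zeroed filters still receive gradient updates in the next epoch,
    so the model keeps its full capacity during training.
    """
    out_channels = conv_weight.shape[0]
    norms = np.linalg.norm(conv_weight.reshape(out_channels, -1), axis=1)
    n_prune = int(out_channels * prune_ratio)
    pruned_idx = np.argsort(norms)[:n_prune]
    conv_weight[pruned_idx] = 0.0
    return pruned_idx

# Illustrative 8-filter conv layer; softly prune the 2 weakest filters (25%).
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
idx = soft_prune(w, prune_ratio=0.25)
```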

Style Aggregated Network for Facial Landmark Detection

1 code implementation • CVPR 2018 • Xuanyi Dong, Yan Yan, Wanli Ouyang, Yi Yang

In this work, we propose a style-aggregated approach to deal with the large intrinsic variance of image styles for facial landmark detection.

 Ranked #1 on Facial Landmark Detection on AFLW-Front (Mean NME metric)

Face Alignment • Facial Landmark Detection

EraseReLU: A Simple Way to Ease the Training of Deep Convolution Neural Networks

no code implementations • 22 Sep 2017 • Xuanyi Dong, Guoliang Kang, Kun Zhan, Yi Yang

For most state-of-the-art architectures, Rectified Linear Unit (ReLU) becomes a standard component accompanied with each layer.

Blocking • Image Classification

Self-Paced Co-training

no code implementations • ICML 2017 • Fan Ma, Deyu Meng, Qi Xie, Zina Li, Xuanyi Dong

During the co-training process, labels of unlabeled instances in the training pool are very likely to be false, especially in the initial training rounds, yet the standard co-training algorithm uses a "draw without replacement" strategy and does not remove these falsely labeled instances from training.

PatchShuffle Regularization

no code implementations • 22 Jul 2017 • Guoliang Kang, Xuanyi Dong, Liang Zheng, Yi Yang

This paper focuses on regularizing the training of the convolutional neural network (CNN).

General Classification
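
The snippet above does not describe the mechanism, but the title suggests shuffling within image patches. Under the assumption that the regularizer randomly permutes pixels inside small non-overlapping blocks of a feature map or image (treat the details as an assumption, not the paper's exact formulation), a sketch is:

```python
import numpy as np

def patch_shuffle(image, patch=2, seed=0):
    """Randomly permute pixels inside each non-overlapping patch x patch block."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    out = image.copy()
    for i in range(0, h - h % patch, patch):
        for j in range(0, w - w % patch, patch):
            block = out[i:i + patch, j:j + patch].ravel()  # copy of the block
            rng.shuffle(block)
            out[i:i + patch, j:j + patch] = block.reshape(patch, patch)
    return out

# A 4x4 toy "image": each 2x2 block keeps the same pixel values, reordered.
img = np.arange(16, dtype=float).reshape(4, 4)
shuffled = patch_shuffle(img)
```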

Few-Example Object Detection with Model Communication

1 code implementation • 26 Jun 2017 • Xuanyi Dong, Liang Zheng, Fan Ma, Yi Yang, Deyu Meng

Experiments on PASCAL VOC'07, MS COCO'14, and ILSVRC'13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results when compared to the state-of-the-art weakly-supervised approaches using a large number of image-level labels.


More is Less: A More Complicated Network with Less Inference Complexity

no code implementations • CVPR 2017 • Xuanyi Dong, Junshi Huang, Yi Yang, Shuicheng Yan

In this paper, we present a novel and general network structure for accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet has lower inference complexity.
