1 code implementation • 17 May 2023 • Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu
On the GLaM dataset, DoReMi, which has no knowledge of downstream tasks, even matches the performance of using domain weights tuned on downstream tasks.
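The core of the excerpt is reweighting pretraining data by domain. A minimal sketch of how learned domain weights could drive batch construction — the domain names and weight values here are illustrative, not DoReMi's actual output:

```python
import random

# Hypothetical domain weights of the kind DoReMi produces: a distribution
# over pretraining domains (names and values are made up for illustration).
domain_weights = {"web": 0.5, "books": 0.3, "code": 0.2}

def sample_domains(weights, batch_size, seed=0):
    """Draw the source domain for each example in a batch according to
    the learned domain weights."""
    rng = random.Random(seed)
    domains = list(weights)
    probs = [weights[d] for d in domains]
    return rng.choices(domains, weights=probs, k=batch_size)

batch = sample_domains(domain_weights, batch_size=8)
```

In expectation, each domain contributes to training in proportion to its weight, which is how reweighted mixtures are typically realized in a data loader.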
7 code implementations • 13 Feb 2023 • Xiangning Chen, Chen Liang, Da Huang, Esteban Real, Kaiyuan Wang, Yao Liu, Hieu Pham, Xuanyi Dong, Thang Luong, Cho-Jui Hsieh, Yifeng Lu, Quoc V. Le
On diffusion models, Lion outperforms Adam by achieving a better FID score and reducing the training compute by up to 2.3x.
Ranked #1 on Image Classification on ImageNet
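Lion's published update rule is simple enough to state in a few lines: the update direction is the sign of an interpolation between the momentum and the current gradient, and the momentum is then refreshed with a second interpolation. A scalar sketch (the default hyperparameters here follow common reported settings, not any particular implementation):

```python
def lion_step(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update for a scalar parameter.

    Uses sign(beta1 * m + (1 - beta1) * g) as the update direction,
    applies decoupled weight decay, then updates the momentum with beta2.
    """
    sign = lambda x: (x > 0) - (x < 0)
    update = sign(beta1 * momentum + (1 - beta1) * grad)
    new_param = param - lr * (update + wd * param)
    new_momentum = beta2 * momentum + (1 - beta2) * grad
    return new_param, new_momentum
```

Because the update is a sign, every coordinate moves by exactly the learning rate (plus weight decay), which is part of why Lion's memory and compute footprint is smaller than Adam's.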
1 code implementation • 3 Feb 2023 • Daiyi Peng, Xuanyi Dong, Esteban Real, Yifeng Lu, Quoc V. Le
We also perform a case study of a large codebase where PyGlove led to an 80% reduction in the number of lines of code.
no code implementations • 28 Apr 2022 • Razvan-Gabriel Cirstea, Chenjuan Guo, Bin Yang, Tung Kieu, Xuanyi Dong, Shirui Pan
(i) Linear complexity: we introduce a novel patch attention with linear complexity.
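One standard way attention becomes linear in sequence length is to restrict it to fixed-size patches: full attention costs O(N^2), while attending only within patches of size P costs O(N * P). The sketch below illustrates that generic local-attention idea; it is not necessarily the paper's exact patch-attention formulation:

```python
import numpy as np

def patch_attention(x, patch_size):
    """Self-attention restricted to non-overlapping patches of the sequence.

    For each patch, compute scaled dot-product scores, a row-wise softmax,
    and the weighted sum of patch elements. Total cost is O(N * patch_size),
    linear in the sequence length N for a fixed patch size.
    """
    n, d = x.shape
    assert n % patch_size == 0, "sequence length must divide into patches"
    out = np.empty_like(x)
    for start in range(0, n, patch_size):
        patch = x[start:start + patch_size]             # (P, d)
        scores = patch @ patch.T / np.sqrt(d)           # (P, P)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        out[start:start + patch_size] = weights @ patch
    return out
```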
1 code implementation • 16 Dec 2021 • Xuanyi Dong, David Jacob Kedziora, Katarzyna Musial, Bogdan Gabrys
That stated, NAS is not the be-all and end-all of AutoDL.
2 code implementations • NeurIPS 2021 • Xinyang Jiang, Lu Liu, Caihua Shan, Yifei Shen, Xuanyi Dong, Dongsheng Li
In this paper, we consider a different data format for images: vector graphics.
no code implementations • 30 Aug 2021 • Bo Li, Xinyang Jiang, Donglin Bai, Yuge Zhang, Ningxin Zheng, Xuanyi Dong, Lu Liu, Yuqing Yang, Dongsheng Li
The energy consumption of deep learning models is increasing at a breathtaking rate, which raises concerns due to potential negative effects on carbon neutrality in the context of global warming and climate change.
no code implementations • 17 Feb 2021 • Yanqi Zhou, Xuanyi Dong, Berkin Akin, Mingxing Tan, Daiyi Peng, Tianjian Meng, Amir Yazdanbakhsh, Da Huang, Ravi Narayanaswami, James Laudon
In our work, we target the optimization of hardware and software configurations on an industry-standard edge accelerator.
no code implementations • ICLR 2021 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang
To resolve this problem, we propose Isometric Propagation Network (IPN), which learns to strengthen the relation between classes within each space and align the class dependency in the two spaces.
1 code implementation • 25 Jan 2021 • Xuanyi Dong, Yi Yang, Shih-En Wei, Xinshuo Weng, Yaser Sheikh, Shoou-I Yu
End-to-end training is made possible by differentiable registration and 3D triangulation modules.
no code implementations • NeurIPS 2020 • Daiyi Peng, Xuanyi Dong, Esteban Real, Mingxing Tan, Yifeng Lu, Hanxiao Liu, Gabriel Bender, Adam Kraft, Chen Liang, Quoc V. Le
As a result, AutoML can be reformulated as an automated process of symbolic manipulation.
no code implementations • 1 Jan 2021 • Yanqi Zhou, Xuanyi Dong, Daiyi Peng, Ethan Zhu, Amir Yazdanbakhsh, Berkin Akin, Mingxing Tan, James Laudon
In this paper, we study the importance of co-designing neural architectures and hardware accelerators.
no code implementations • 1 Jan 2021 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang
Few-shot learning aims to train a classifier given only a few samples per class that are highly insufficient to describe the whole data distribution.
2 code implementations • 28 Aug 2020 • Xuanyi Dong, Lu Liu, Katarzyna Musial, Bogdan Gabrys
In this paper, we propose NATS-Bench, a unified benchmark on searching for both topology and size, for (almost) any up-to-date NAS algorithm.
no code implementations • 5 Jun 2020 • Xuanyi Dong, Mingxing Tan, Adams Wei Yu, Daiyi Peng, Bogdan Gabrys, Quoc V. Le
Efficient hyperparameter or architecture search methods have shown remarkable results, but each of them is only applicable to searching for either hyperparameters (HPs) or architectures.
4 code implementations • ICLR 2020 • Xuanyi Dong, Yi Yang
A variety of algorithms search for architectures under different search spaces.
4 code implementations • ICCV 2019 • Xuanyi Dong, Yi Yang
In this paper, we propose a Self-Evaluated Template Network (SETN) to improve the quality of the architecture candidates selected for evaluation, making it more likely that competitive candidates are covered.
Ranked #17 on Neural Architecture Search on NAS-Bench-201, ImageNet-16-120 (Accuracy (Val) metric)
6 code implementations • CVPR 2019 • Xuanyi Dong, Yi Yang
To avoid traversing all the possibilities of the sub-graphs, we develop a differentiable sampler over the DAG.
Ranked #18 on Neural Architecture Search on CIFAR-10
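The differentiable sampler over the DAG is built on the Gumbel-softmax trick: adding Gumbel noise to the operation logits of an edge and taking a temperature-controlled softmax yields a sample that concentrates on a single operation as the temperature shrinks, while remaining differentiable with respect to the logits. A small illustrative sketch of that trick (not the paper's full training loop):

```python
import math
import random

def gumbel_softmax_sample(logits, tau=1.0, seed=0):
    """Differentiable sample over candidate operations on one DAG edge.

    Perturb each logit with Gumbel(0, 1) noise, divide by the temperature
    tau, and apply a numerically stable softmax. As tau -> 0 the result
    approaches a one-hot choice of a single operation.
    """
    rng = random.Random(seed)
    gumbels = [-math.log(-math.log(rng.random() or 1e-12)) for _ in logits]
    scores = [(l + g) / tau for l, g in zip(logits, gumbels)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

Sampling one sub-graph per step this way avoids evaluating every sub-graph of the DAG while still letting gradients flow into the architecture parameters.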
2 code implementations • ICCV 2019 • Xuanyi Dong, Yi Yang
A typical approach is to (1) train a detector on the labeled images; (2) generate new training samples by using this detector's predictions as pseudo-labels for the unlabeled images; (3) retrain the detector on the labeled samples plus a subset of the pseudo-labeled samples.
Ranked #1 on Facial Landmark Detection on 300W (Full) (using extra training data)
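The three-step recipe in the excerpt is the classic self-training loop. A minimal sketch with placeholder `train_fn`/`predict_fn` callables standing in for a real detector (the confidence filter on step 2 is a common heuristic, not necessarily the paper's selection rule):

```python
def self_train(train_fn, predict_fn, labeled, unlabeled, confidence=0.9):
    """Sketch of the three-step self-training recipe:
    (1) train on labeled data, (2) pseudo-label the unlabeled data with the
    model's predictions, (3) retrain on labeled + confident pseudo-labels.
    """
    model = train_fn(labeled)                       # step 1
    pseudo = []
    for x in unlabeled:                             # step 2
        label, score = predict_fn(model, x)
        if score >= confidence:                     # keep confident predictions
            pseudo.append((x, label))
    return train_fn(labeled + pseudo)               # step 3

# Toy stand-ins: the "model" is just the training-set size, and the
# "detector" labels x by parity, confidently only for positive x.
toy_train = lambda data: len(data)
toy_predict = lambda model, x: (x % 2, 0.95 if x > 0 else 0.5)
retrained = self_train(toy_train, toy_predict, [(1, 1), (2, 0)], [3, -1])
```

Only the confident unlabeled example (`3`) survives the filter, so the retrained toy model sees three samples.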
4 code implementations • NeurIPS 2019 • Xuanyi Dong, Yi Yang
The size with the maximum probability in each distribution serves as the width and depth of the pruned network, whose parameters are learned by knowledge transfer, e.g., knowledge distillation, from the original networks.
Ranked #1 on Network Pruning on CIFAR-10
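The selection step described in the excerpt — reading the pruned width and depth off the mode of each learned distribution — is a one-liner. A sketch with hypothetical candidate lists and probabilities (the actual candidates and learned values come from the search, not from here):

```python
def select_size(candidates, probs):
    """Return the candidate size (width or depth) whose learned
    probability is highest, i.e. the mode of the size distribution."""
    best = max(range(len(candidates)), key=lambda i: probs[i])
    return candidates[best]

# Hypothetical per-layer channel candidates and a hypothetical depth search.
width = select_size([16, 32, 64], [0.1, 0.7, 0.2])  # mode is 32 channels
depth = select_size([8, 14, 20], [0.2, 0.3, 0.5])   # mode is 20 layers
```

The pruned network built from these sizes then gets its parameters via knowledge transfer from the original network, as the excerpt notes.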
3 code implementations • ICCV 2019 • Ruijie Quan, Xuanyi Dong, Yu Wu, Linchao Zhu, Yi Yang
We propose to automatically search for a CNN architecture that is specifically suitable for the reID task.
Ranked #9 on Person Re-Identification on CUHK03 detected
2 code implementations • 22 Aug 2018 • Yang He, Xuanyi Dong, Guoliang Kang, Yanwei Fu, Chenggang Yan, Yi Yang
With asymptotic pruning, the information of the training set would be gradually concentrated in the remaining filters, so the subsequent training and pruning process would be stable.
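Two ingredients are implied here: a pruning rate that grows gradually toward its target, and "soft" pruning that zeroes low-norm filters without freezing them. A sketch of both — the exponential schedule is an illustrative choice, not necessarily the paper's exact formula:

```python
import math

def asymptotic_rate(epoch, total_epochs, target_rate):
    """Pruning rate that rises smoothly toward target_rate over training,
    so information concentrates in the surviving filters gradually
    (illustrative schedule)."""
    return target_rate * (1 - math.exp(-3.0 * epoch / total_epochs))

def soft_prune(filter_norms, rate):
    """Zero out the smallest-norm filters. Soft pruning: zeroed filters
    keep receiving gradient updates and may regrow in later epochs."""
    n_prune = int(len(filter_norms) * rate)
    order = sorted(range(len(filter_norms)), key=lambda i: filter_norms[i])
    pruned = list(filter_norms)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned
```

Each epoch would call `asymptotic_rate` and then `soft_prune` on the current filter norms, so early epochs prune almost nothing and later epochs approach the target rate.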
6 code implementations • 21 Aug 2018 • Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, Yi Yang
Therefore, the network trained by our method has a larger model capacity to learn from the training data.
1 code implementation • CVPR 2018 • Xuanyi Dong, Shoou-I Yu, Xinshuo Weng, Shih-En Wei, Yi Yang, Yaser Sheikh
In this paper, we present supervision-by-registration, an unsupervised approach to improve the precision of facial landmark detectors on both images and video.
Ranked #1 on Facial Landmark Detection on 300-VW (C)
no code implementations • CVPR 2018 • Yu Wu, Yutian Lin, Xuanyi Dong, Yan Yan, Wanli Ouyang, Yi Yang
We focus on one-shot learning for video-based person re-identification (re-ID).
1 code implementation • CVPR 2018 • Xuanyi Dong, Yan Yan, Wanli Ouyang, Yi Yang
In this work, we propose a style-aggregated approach to deal with the large intrinsic variance of image styles for facial landmark detection.
Ranked #1 on Facial Landmark Detection on AFLW-Front (Mean NME metric)
no code implementations • 22 Sep 2017 • Xuanyi Dong, Guoliang Kang, Kun Zhan, Yi Yang
For most state-of-the-art architectures, the Rectified Linear Unit (ReLU) has become a standard component accompanying each layer.
Ranked #12 on Image Classification on SVHN
no code implementations • ICML 2017 • Fan Ma, Deyu Meng, Qi Xie, Zina Li, Xuanyi Dong
During the co-training process, the labels of unlabeled instances in the training pool are very likely to be false, especially in the initial training rounds, yet the standard co-training algorithm uses a "draw without replacement" strategy and never removes these falsely labeled instances from training.
no code implementations • 22 Jul 2017 • Guoliang Kang, Xuanyi Dong, Liang Zheng, Yi Yang
This paper focuses on regularizing the training of the convolutional neural network (CNN).
1 code implementation • 26 Jun 2017 • Xuanyi Dong, Liang Zheng, Fan Ma, Yi Yang, Deyu Meng
Experiments on PASCAL VOC'07, MS COCO'14, and ILSVRC'13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results when compared to the state-of-the-art weakly-supervised approaches using a large number of image-level labels.
Ranked #1 on Weakly Supervised Object Detection on COCO
no code implementations • CVPR 2017 • Xuanyi Dong, Junshi Huang, Yi Yang, Shuicheng Yan
In this paper, we present a novel and general network structure for accelerating the inference of convolutional neural networks: it is more complicated in network structure yet has lower inference complexity.