Search Results for author: Daiyi Peng

Found 9 papers, 3 papers with code

RL-DARTS: Differentiable Architecture Search for Reinforcement Learning

no code implementations • 4 Jun 2021 • Yingjie Miao, Xingyou Song, Daiyi Peng, Summer Yue, John D. Co-Reyes, Eugene Brevdo, Aleksandra Faust

Recently, Differentiable Architecture Search (DARTS) has become one of the most popular Neural Architecture Search (NAS) methods successfully applied in supervised learning (SL).

Neural Architecture Search
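For context, DARTS relaxes the discrete choice among candidate operations into a softmax-weighted mixture so the architecture can be optimized by gradient descent. Below is a minimal, hedged sketch of that relaxation in PyTorch; the class name and operation set are illustrative, not the RL-DARTS codebase.

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """DARTS-style edge: a softmax-weighted mixture of candidate ops,
    which turns the discrete architecture choice into a differentiable one."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture logits

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Illustrative candidate set; gradients w.r.t. alpha flow through the mixture.
mixed = MixedOp([nn.Identity(), nn.Tanh(), nn.ReLU()])
out = mixed(torch.randn(4, 8))
```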

ES-ENAS: Blackbox Optimization over Hybrid Spaces via Combinatorial and Continuous Evolution

1 code implementation • 19 Jan 2021 • Xingyou Song, Krzysztof Choromanski, Jack Parker-Holder, Yunhao Tang, Qiuyi Zhang, Daiyi Peng, Deepali Jain, Wenbo Gao, Aldo Pacchiano, Tamas Sarlos, Yuxiang Yang

Due to the modularity of the algorithm, we are also able to incorporate a wide variety of popular techniques, ranging from the use of different continuous and combinatorial optimizers to constrained optimization.

Combinatorial Optimization • Continuous Control +3
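The hybrid optimization the abstract describes pairs a combinatorial outer loop over discrete choices with continuous evolution-strategies updates. Here is a toy sketch of that interplay; the operation list and blackbox objective are stand-ins, not the paper's setup.

```python
import random
import numpy as np

OPS = ["tanh", "relu", "sin"]  # hypothetical combinatorial choices
ACT = {"tanh": np.tanh, "relu": lambda z: np.maximum(z, 0.0), "sin": np.sin}

def objective(op, w):
    """Stand-in blackbox reward: fit sin(3x) with one scalar weight."""
    x = np.linspace(-1.0, 1.0, 64)
    return -np.mean((ACT[op](w * x) - np.sin(3.0 * x)) ** 2)

def es_step(op, w, sigma=0.1, lr=0.05, n=16):
    """Antithetic evolution-strategies update on the continuous parameter."""
    grad = 0.0
    for _ in range(n):
        eps = np.random.randn()
        grad += eps * (objective(op, w + sigma * eps)
                       - objective(op, w - sigma * eps)) / (2.0 * sigma)
    return w + lr * grad / n

op, w = random.choice(OPS), 0.0
for step in range(200):
    w = es_step(op, w)              # continuous evolution of the weight
    if step % 20 == 0:              # occasional combinatorial mutation
        cand = random.choice(OPS)
        if objective(cand, w) > objective(op, w):
            op = cand
```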

Evolving Reinforcement Learning Algorithms

3 code implementations • ICLR 2021 • John D. Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Sergey Levine, Quoc V. Le, Honglak Lee, Aleksandra Faust

Learning from scratch on simple classical control and gridworld tasks, our method rediscovers the temporal-difference (TD) algorithm.

Atari Games • Meta-Learning
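For reference, the temporal-difference rule the evolved algorithms rediscover is the classic tabular TD(0) value update; a minimal sketch (the toy chain environment is illustrative):

```python
import numpy as np

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """Classic TD(0): V[s] <- V[s] + alpha * (r + gamma * V[s_next] - V[s])."""
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return V

V = np.zeros(5)                          # values for a 5-state chain
V = td0_update(V, s=2, r=1.0, s_next=3)  # one observed transition
```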

Towards NNGP-guided Neural Architecture Search

1 code implementation • 11 Nov 2020 • Daniel S. Park, Jaehoon Lee, Daiyi Peng, Yuan Cao, Jascha Sohl-Dickstein

Since NNGP inference provides a cheap measure of performance of a network architecture, we investigate its potential as a signal for neural architecture search (NAS).

Neural Architecture Search
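NNGP inference scores an architecture via exact Bayesian (Gaussian process) regression under the network's infinite-width kernel, with no training. A hedged sketch below uses the closed-form arc-cosine kernel of a one-hidden-layer ReLU network (up to variance scaling); the synthetic data and scoring are stand-ins, not the paper's benchmark.

```python
import numpy as np

def relu_nngp_kernel(X1, X2):
    """Arc-cosine kernel of degree 1: the NNGP kernel of an infinite-width
    one-hidden-layer ReLU network (up to a variance scale)."""
    n1 = np.linalg.norm(X1, axis=1)[:, None]
    n2 = np.linalg.norm(X2, axis=1)[None, :]
    cos = np.clip(X1 @ X2.T / (n1 * n2), -1.0, 1.0)
    theta = np.arccos(cos)
    return n1 * n2 * (np.sin(theta) + (np.pi - theta) * cos) / (2.0 * np.pi)

def nngp_predict(Xtr, ytr, Xte, noise=1e-3):
    """Exact GP posterior mean under the NNGP kernel (no training loop)."""
    Ktr = relu_nngp_kernel(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Kte = relu_nngp_kernel(Xte, Xtr)
    return Kte @ np.linalg.solve(Ktr, ytr)

# Cheap architecture score: negative validation MSE of the NNGP predictor
# on a synthetic regression task (illustrative stand-in data).
rng = np.random.default_rng(0)
Xtr, Xte = rng.normal(size=(64, 8)), rng.normal(size=(16, 8))
target = lambda X: np.sin(X[:, 0]) + 0.1 * X[:, 1]
score = -np.mean((nngp_predict(Xtr, target(Xtr), Xte) - target(Xte)) ** 2)
```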

AutoHAS: Efficient Hyperparameter and Architecture Search

no code implementations • 5 Jun 2020 • Xuanyi Dong, Mingxing Tan, Adams Wei Yu, Daiyi Peng, Bogdan Gabrys, Quoc V. Le

Efficient hyperparameter or architecture search methods have shown remarkable results, but each of them is only applicable to searching for either hyperparameters (HPs) or architectures.

Hyperparameter Optimization • Neural Architecture Search
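The space AutoHAS searches mixes categorical architecture choices with hyperparameter choices. A toy illustration of such a joint hybrid space follows; the candidate lists and proxy score are assumptions for demonstration, not the paper's controller, which trains shared weights rather than scoring candidates independently.

```python
import itertools
import random

# Hypothetical hybrid search space: architecture choices plus hyperparameters.
ARCHS = ["conv3x3", "conv5x5", "sep_conv"]
LRS = [0.1, 0.01, 0.001]
WDS = [1e-4, 1e-5]

def proxy_score(arch, lr, wd):
    """Stand-in evaluation of one (architecture, lr, weight-decay) candidate."""
    base = {"conv3x3": 0.92, "conv5x5": 0.90, "sep_conv": 0.93}[arch]
    return base - 0.5 * abs(lr - 0.01) - 100.0 * wd + random.gauss(0, 0.005)

# Exhaustive search is feasible only on this toy space; it shows the
# joint-space structure, not AutoHAS's efficient search strategy.
best = max(itertools.product(ARCHS, LRS, WDS), key=lambda c: proxy_score(*c))
```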

Domain Adaptive Transfer Learning with Specialist Models

no code implementations • 16 Nov 2018 • Jiquan Ngiam, Daiyi Peng, Vijay Vasudevan, Simon Kornblith, Quoc V. Le, Ruoming Pang

Our method to compute importance weights follows from ideas in domain adaptation, and we show a novel application to transfer learning.

Ranked #2 on Fine-Grained Image Classification on Stanford Cars (using extra training data)

Domain Adaptation • Fine-Grained Image Classification +2
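One simple realization of the importance-weighting idea is to reweight source examples by the ratio of target to source label marginals during pretraining; a hedged sketch, with made-up label distributions:

```python
import numpy as np

def importance_weights(source_labels, p_target):
    """Per-example weight w(y) = P_target(y) / P_source(y)."""
    classes, counts = np.unique(source_labels, return_counts=True)
    p_source = dict(zip(classes, counts / counts.sum()))
    return np.array([p_target[y] / p_source[y] for y in source_labels])

source_labels = np.array([0, 0, 0, 1, 1, 2])
p_target = {0: 0.2, 1: 0.3, 2: 0.5}   # assumed target label marginal
w = importance_weights(source_labels, p_target)
# Use w to weight each example's loss when pretraining on the source domain.
```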
