Search Results for author: Yiyang Zhao

Found 9 papers, 4 papers with code

CE-NAS: An End-to-End Carbon-Efficient Neural Architecture Search Framework

no code implementations · 3 Jun 2024 · Yiyang Zhao, Yunzhuo Liu, Bo Jiang, Tian Guo

For open-domain NAS tasks, CE-NAS achieves SOTA results with 97.35% top-1 accuracy on CIFAR-10 with only 1.68M parameters and a carbon consumption of 38.53 lbs of CO2.

Neural Architecture Search

Multi-Objective Neural Architecture Search by Learning Search Space Partitions

no code implementations · 1 Jun 2024 · Yiyang Zhao, Linnan Wang, Tian Guo

As a result, deep learning model designers leverage multi-objective optimization to design effective deep neural networks that balance multiple criteria.

Bayesian Optimization · Neural Architecture Search

Carbon-Efficient Neural Architecture Search

no code implementations · 9 Jul 2023 · Yiyang Zhao, Tian Guo

This work presents a novel approach to neural architecture search (NAS) that aims to reduce energy costs and increase carbon efficiency during the model design process.

Neural Architecture Search

Multi-objective Optimization by Learning Space Partitions

1 code implementation · 7 Oct 2021 · Yiyang Zhao, Linnan Wang, Kevin Yang, Tianjun Zhang, Tian Guo, Yuandong Tian

In this paper, we propose LaMOO, a novel multi-objective optimizer that learns a model from observed samples to partition the search space and then focus on promising regions that are likely to contain a subset of the Pareto frontier.

Neural Architecture Search
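The snippet above describes LaMOO focusing its search on regions likely to contain part of the Pareto frontier. As a rough illustration of the underlying concept (not LaMOO's learned space partitioning, which is not shown here), the following sketch implements the standard Pareto-dominance check and non-dominated filtering; the sample objective vectors are invented for illustration:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization convention)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical architecture samples: (accuracy, -parameter count in millions),
# both framed so that higher is better.
samples = [(0.97, -1.7), (0.95, -1.2), (0.96, -2.5), (0.97, -2.0)]
front = pareto_front(samples)  # the non-dominated trade-off points
```

A multi-objective optimizer would iterate sampling and filtering like this, concentrating new samples in regions whose observed points survive the dominance check.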

Multi-objective Optimization by Learning Space Partition

no code implementations · ICLR 2022 · Yiyang Zhao, Linnan Wang, Kevin Yang, Tianjun Zhang, Tian Guo, Yuandong Tian

In this paper, we propose LaMOO, a novel multi-objective optimizer that learns a model from observed samples to partition the search space and then focus on promising regions that are likely to contain a subset of the Pareto frontier.

Neural Architecture Search

Few-shot Neural Architecture Search

2 code implementations · 11 Jun 2020 · Yiyang Zhao, Linnan Wang, Yuandong Tian, Rodrigo Fonseca, Tian Guo

One-shot NAS trains a single network, the supernet, to approximate the performance of every architecture in the search space via weight-sharing.

Neural Architecture Search · Transfer Learning
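The weight-sharing idea mentioned above can be sketched in a few lines. This is a toy illustration only, assuming an invented search space of three layers with two candidate operations each; it shows how a sampled sub-network reuses slices of one shared weight store rather than training from scratch (it does not reproduce the paper's few-shot partitioning of the supernet):

```python
import random

# Toy search space: each of 3 layers picks one of 2 candidate ops.
# The one-shot supernet holds a weight entry for every (layer, op) pair.
supernet_weights = {(layer, op): 0.0 for layer in range(3) for op in range(2)}

def sample_architecture(rng):
    """Sample one architecture: an op choice per layer."""
    return tuple(rng.randrange(2) for _ in range(3))

def subnet_weights(arch):
    """Weight sharing: a subnet just looks up the supernet's weights
    for its chosen ops instead of owning separate parameters."""
    return [supernet_weights[(layer, op)] for layer, op in enumerate(arch)]

rng = random.Random(0)
arch = sample_architecture(rng)
shared = subnet_weights(arch)  # one shared weight per layer
```

In a real supernet the entries would be full tensors updated jointly during training, so every candidate architecture's performance can be estimated without separate training runs.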

AlphaX: eXploring Neural Architectures with Deep Neural Networks and Monte Carlo Tree Search

1 code implementation · 26 Mar 2019 · Linnan Wang, Yiyang Zhao, Yuu Jinnai, Yuandong Tian, Rodrigo Fonseca

Neural Architecture Search (NAS) has shown great success in automating the design of neural networks, but the prohibitive amount of computations behind current NAS methods requires further investigations in improving the sample efficiency and the network evaluation cost to get better results in a shorter time.

Image Captioning · Neural Architecture Search · +4
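AlphaX steers the search with Monte Carlo Tree Search. A minimal sketch of the standard UCB1 selection rule that MCTS commonly uses to balance exploiting high-value branches against exploring rarely visited ones (AlphaX additionally learns value and policy networks, which are not shown; the child statistics below are invented):

```python
import math

def ucb1_select(children, c=1.4):
    """Pick the child maximizing mean value plus an exploration bonus."""
    total = sum(n["visits"] for n in children)
    def score(n):
        if n["visits"] == 0:
            return float("inf")  # always try unvisited children first
        return n["value"] / n["visits"] + c * math.sqrt(math.log(total) / n["visits"])
    return max(children, key=score)

# Hypothetical search-tree children: candidate ops with accumulated stats.
children = [
    {"name": "conv3x3", "visits": 10, "value": 9.0},
    {"name": "conv5x5", "visits": 2,  "value": 1.0},
    {"name": "maxpool", "visits": 0,  "value": 0.0},
]
best = ucb1_select(children)  # the unvisited child is selected first
```

Repeated selection, expansion, evaluation, and backup of such statistics is what lets MCTS spend its sample budget on promising regions of the architecture space.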

Neural Architecture Search using Deep Neural Networks and Monte Carlo Tree Search

2 code implementations · 18 May 2018 · Linnan Wang, Yiyang Zhao, Yuu Jinnai, Yuandong Tian, Rodrigo Fonseca

Neural Architecture Search (NAS) has shown great success in automating the design of neural networks, but the prohibitive amount of computations behind current NAS methods requires further investigations in improving the sample efficiency and the network evaluation cost to get better results in a shorter time.

Image Captioning · Neural Architecture Search · +4

SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

no code implementations · 13 Jan 2018 · Linnan Wang, Jinmian Ye, Yiyang Zhao, Wei Wu, Ang Li, Shuaiwen Leon Song, Zenglin Xu, Tim Kraska

Given limited GPU DRAM, SuperNeurons not only provisions the memory necessary for training, but also dynamically allocates memory for convolution workspaces to achieve high performance.

Management · Scheduling
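The trade-off in the snippet above is that larger convolution workspaces enable faster algorithms but compete for scarce DRAM. A simple greedy sketch of that budgeting decision follows; this is an invented illustration of the trade-off, not SuperNeurons' actual allocation policy:

```python
def pick_workspaces(candidates, budget):
    """For each layer, greedily pick the largest workspace (fastest conv
    algorithm) that still fits within the remaining memory budget."""
    chosen = []
    for options in candidates:  # per-layer workspace sizes, any order
        fitting = [s for s in options if s <= budget]
        pick = max(fitting) if fitting else 0  # 0 = fall back to no-workspace algo
        chosen.append(pick)
        budget -= pick
    return chosen

# Hypothetical sizes in MB for three conv layers, each offering two algorithms.
layers = [[8, 64], [4, 32], [16, 128]]
plan = pick_workspaces(layers, budget=120)  # -> [64, 32, 16], total 112 MB
```

A dynamic memory manager would additionally reclaim and re-provision these workspaces as layers go in and out of scope during the training iteration.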
