Search Results for author: Ruochen Wang

Found 9 papers, 6 papers with code

Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory

1 code implementation • 19 Nov 2022 • Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh

Among recently proposed methods, Matching Training Trajectories (MTT) achieves state-of-the-art performance on CIFAR-10/100, but has difficulty scaling to the ImageNet-1K dataset due to the large memory required for unrolled gradient computation through back-propagation.
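
The memory issue comes from back-propagating through an unrolled inner training loop. Below is a toy PyTorch sketch of that general pattern (not the MTT implementation; the linear model, random data, and step counts are placeholders) showing why the computation graph of every unrolled step has to stay in memory:

```python
import torch

# Toy setup: learn synthetic data by differentiating through an unrolled
# inner training loop (the general pattern behind trajectory/gradient
# matching distillation). All sizes here are placeholders.
torch.manual_seed(0)
real_x, real_y = torch.randn(64, 10), torch.randint(0, 2, (64,))
syn_x = torch.randn(8, 10, requires_grad=True)   # learnable synthetic data
syn_y = torch.randint(0, 2, (8,))

w = torch.randn(10, 2, requires_grad=True)       # inner-model weights
inner_lr, unroll_steps = 0.1, 20

# Unrolled inner loop: create_graph=True keeps every step's graph alive so
# the outer gradient w.r.t. syn_x can flow back through all of them.
w_t = w
for _ in range(unroll_steps):
    inner_loss = torch.nn.functional.cross_entropy(syn_x @ w_t, syn_y)
    (grad_w,) = torch.autograd.grad(inner_loss, w_t, create_graph=True)
    w_t = w_t - inner_lr * grad_w

# Outer objective on real data, differentiated back to the synthetic data.
outer_loss = torch.nn.functional.cross_entropy(real_x @ w_t, real_y)
outer_loss.backward()
print("grad norm w.r.t. synthetic data:", float(syn_x.grad.norm()))
```

Memory grows roughly with the number of unrolled steps times the model size, which is what becomes prohibitive at ImageNet-1K scale and what the constant-memory reformulation in this paper targets.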

Efficient Non-Parametric Optimizer Search for Diverse Tasks

no code implementations • 27 Sep 2022 • Ruochen Wang, Yuanhao Xiong, Minhao Cheng, Cho-Jui Hsieh

Efficient and automated design of optimizers plays a crucial role in full-stack AutoML systems.

AutoML

FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning

1 code implementation • 20 Jul 2022 • Yuanhao Xiong, Ruochen Wang, Minhao Cheng, Felix Yu, Cho-Jui Hsieh

Federated learning (FL) has recently attracted increasing attention from academia and industry, with the ultimate goal of achieving collaborative training under privacy and communication constraints.

Federated Learning • Image Classification

DC-BENCH: Dataset Condensation Benchmark

2 code implementations • 20 Jul 2022 • Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh

Dataset Condensation is a newly emerging technique aiming at learning a tiny dataset that captures the rich information encoded in the original dataset.

Data Augmentation • Data Compression +2
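
As a rough illustration of the kind of method such a benchmark evaluates, here is a simplified gradient-matching condensation loop. This is not DC-BENCH code: the linear model, data sizes, and single fixed network are placeholder simplifications (real methods re-sample and train the network between updates).

```python
import torch
import torch.nn.functional as F

# Toy gradient-matching condensation: learn a tiny synthetic set whose
# gradients on a model mimic those of the full dataset.
torch.manual_seed(0)
real_x, real_y = torch.randn(256, 32), torch.randint(0, 4, (256,))
syn_x = torch.randn(16, 32, requires_grad=True)        # 4 synthetic samples per class
syn_y = torch.arange(4).repeat_interleave(4)

model = torch.nn.Linear(32, 4)                          # fixed toy model (a simplification)
params = list(model.parameters())
opt_syn = torch.optim.SGD([syn_x], lr=0.1)

def loss_grads(x, y, create_graph=False):
    """Flattened gradient of the classification loss w.r.t. the model."""
    grads = torch.autograd.grad(F.cross_entropy(model(x), y), params,
                                create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

for step in range(100):
    g_real = loss_grads(real_x, real_y).detach()
    g_syn = loss_grads(syn_x, syn_y, create_graph=True)
    # Push the synthetic data to induce the same gradient direction.
    match_loss = 1 - F.cosine_similarity(g_real, g_syn, dim=0)
    opt_syn.zero_grad()
    match_loss.backward()
    opt_syn.step()

print("final gradient-matching loss:", float(match_loss))
```

Condensation methods mainly differ in what they match (gradients, features, or training trajectories), which is the kind of axis a benchmark like this compares.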

Generalizing Few-Shot NAS with Gradient Matching

1 code implementation • ICLR 2022 • Shoukang Hu, Ruochen Wang, Lanqing Hong, Zhenguo Li, Cho-Jui Hsieh, Jiashi Feng

Efficient performance estimation of architectures drawn from large search spaces is essential to Neural Architecture Search.

Neural Architecture Search

Learning to Schedule Learning rate with Graph Neural Networks

no code implementations • ICLR 2022 • Yuanhao Xiong, Li-Cheng Lan, Xiangning Chen, Ruochen Wang, Cho-Jui Hsieh

By constructing a directed graph for the underlying neural network of the target problem, GNS encodes current dynamics with a graph message passing network and trains an agent to control the learning rate accordingly via reinforcement learning.

Image Classification • Stochastic Optimization
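
A loose sketch of that general recipe (not the GNS implementation): per-layer statistics become node features on a directed layer graph, a hand-rolled message-passing step pools them, and a Gaussian policy trained with REINFORCE scales the learning rate. All module names, feature choices, and sizes below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Target network whose learning rate the controller adjusts.
target = torch.nn.Sequential(
    torch.nn.Linear(20, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)
layers = [m for m in target if isinstance(m, torch.nn.Linear)]
# Directed chain graph over layers: edge i -> i + 1.
edges = [(i, i + 1) for i in range(len(layers) - 1)]
src = torch.tensor([e[0] for e in edges])
dst = torch.tensor([e[1] for e in edges])

msg_net = torch.nn.Linear(2, 8)    # lifts per-layer features to node embeddings
policy = torch.nn.Linear(8, 1)     # maps the pooled graph embedding to a log-LR scale
opt_policy = torch.optim.Adam(
    list(msg_net.parameters()) + list(policy.parameters()), lr=1e-3)

base_lr = 0.1
x, y = torch.randn(128, 20), torch.randint(0, 2, (128,))

for episode in range(5):
    opt_target = torch.optim.SGD(target.parameters(), lr=base_lr)
    log_probs, losses = [], []
    for step in range(20):
        loss = F.cross_entropy(target(x), y)
        opt_target.zero_grad()
        loss.backward()

        # Node features: weight norm and gradient norm of each layer.
        feats = torch.stack([
            torch.stack([l.weight.detach().norm(), l.weight.grad.norm()])
            for l in layers
        ])
        h = torch.relu(msg_net(feats))
        # One message-passing step: each node adds its in-neighbours' embeddings.
        h = h + torch.zeros_like(h).index_add(0, dst, h[src])
        graph_emb = h.mean(dim=0)

        # Gaussian policy over a log learning-rate multiplier.
        mean = policy(graph_emb)
        dist = torch.distributions.Normal(mean, 0.1)
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        for group in opt_target.param_groups:
            group["lr"] = base_lr * float(torch.exp(action))

        opt_target.step()
        losses.append(float(loss))

    # REINFORCE update: reward is the drop in training loss over the episode.
    reward = losses[0] - losses[-1]
    policy_loss = -reward * torch.stack(log_probs).sum()
    opt_policy.zero_grad()
    policy_loss.backward()
    opt_policy.step()

print("final training loss:", losses[-1])
```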

Rethinking Architecture Selection in Differentiable NAS

1 code implementation • ICLR 2021 • Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh

Differentiable Neural Architecture Search is one of the most popular Neural Architecture Search (NAS) methods for its search efficiency and simplicity, accomplished by jointly optimizing the model weights and architecture parameters in a weight-sharing supernet via gradient-based algorithms.

Neural Architecture Search
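
A minimal DARTS-style sketch of that joint optimization (a toy single-edge supernet, not the paper's code; the candidate operations, random data, and hyper-parameters are made up):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Candidate operations on a single supernet edge.
candidate_ops = torch.nn.ModuleList([
    torch.nn.Linear(16, 16),                                        # "linear" op
    torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()),  # "nonlinear" op
    torch.nn.Identity(),                                            # "skip connection" op
])
alpha = torch.zeros(len(candidate_ops), requires_grad=True)  # architecture parameters
classifier = torch.nn.Linear(16, 2)

def supernet(x):
    # Mixed operation: softmax-weighted sum of all candidate outputs.
    weights = torch.softmax(alpha, dim=0)
    mixed = sum(w * op(x) for w, op in zip(weights, candidate_ops))
    return classifier(mixed)

w_params = list(candidate_ops.parameters()) + list(classifier.parameters())
opt_w = torch.optim.SGD(w_params, lr=0.05, momentum=0.9)
opt_alpha = torch.optim.Adam([alpha], lr=3e-3)

train_x, train_y = torch.randn(256, 16), torch.randint(0, 2, (256,))
val_x, val_y = torch.randn(256, 16), torch.randint(0, 2, (256,))

for step in range(200):
    # Architecture step on validation data (first-order approximation).
    opt_alpha.zero_grad()
    F.cross_entropy(supernet(val_x), val_y).backward()
    opt_alpha.step()

    # Weight step on training data.
    opt_w.zero_grad()
    F.cross_entropy(supernet(train_x), train_y).backward()
    opt_w.step()

# The conventional rule keeps the op with the largest architecture weight;
# this paper revisits whether that magnitude reflects the op's real strength.
print(torch.softmax(alpha, dim=0))
```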
