Search Results for author: Ruochen Wang

Found 11 papers, 9 papers with code

DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers

1 code implementation • 25 Feb 2024 • Xirui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, Cho-Jui Hsieh

DrAttack includes three key components: (a) "Decomposition" of the original prompt into sub-prompts; (b) "Reconstruction" of these sub-prompts, performed implicitly by in-context learning with a semantically similar but harmless reassembly demo; and (c) a "Synonym Search" over sub-prompts, aiming to find synonyms that preserve the original intent while jailbreaking LLMs.

In-Context Learning

MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion

1 code implementation • 20 Feb 2024 • Sen Li, Ruochen Wang, Cho-Jui Hsieh, Minhao Cheng, Tianyi Zhou

Moreover, MuLan adopts a vision-language model (VLM) to provide feedback on the image generated in each sub-task and to control the diffusion model to regenerate the image if it violates the original prompt.

Attribute Language Modelling +2
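The generate-check-regenerate loop described above can be sketched as follows. The two model calls are stubs I invented for illustration (MuLan's real components are a diffusion model and a VLM); only the control flow is the point.

```python
# Minimal sketch of MuLan's feedback loop (stubbed models, not MuLan's code):
# generate an image for a sub-task, ask a VLM whether it matches the
# sub-prompt, and regenerate on violation.

def diffusion_generate(sub_prompt: str, attempt: int) -> str:
    """Stub standing in for a diffusion model; returns a fake image id."""
    return f"image({sub_prompt}, attempt={attempt})"

def vlm_feedback(image: str, sub_prompt: str) -> bool:
    """Stub VLM check: accepts only the first retry, just to exercise the loop."""
    return "attempt=1" in image

def generate_with_feedback(sub_prompt: str, max_retries: int = 3) -> str:
    image = ""
    for attempt in range(max_retries):
        image = diffusion_generate(sub_prompt, attempt)
        if vlm_feedback(image, sub_prompt):  # VLM: image satisfies the prompt
            return image
    return image  # give up after max_retries and keep the last attempt

print(generate_with_feedback("a red cube left of a blue ball"))
```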

Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory

2 code implementations • 19 Nov 2022 • Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh

The resulting algorithm sets a new SOTA on ImageNet-1K: we can scale up to 50 IPCs (Images Per Class) on ImageNet-1K on a single GPU (all previous methods can only scale to 2 IPCs on ImageNet-1K), leading to the best accuracy (only a 5.9% accuracy drop against full-dataset training) while utilizing only 4.2% of the number of data points - an 18.2% absolute gain over the prior SOTA.

Efficient Non-Parametric Optimizer Search for Diverse Tasks

1 code implementation • 27 Sep 2022 • Ruochen Wang, Yuanhao Xiong, Minhao Cheng, Cho-Jui Hsieh

Efficient and automated design of optimizers plays a crucial role in full-stack AutoML systems.

AutoML Math

DC-BENCH: Dataset Condensation Benchmark

2 code implementations • 20 Jul 2022 • Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh

Dataset Condensation is a newly emerging technique aiming at learning a tiny dataset that captures the rich information encoded in the original dataset.

Data Augmentation Data Compression +2

FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning

1 code implementation • CVPR 2023 • Yuanhao Xiong, Ruochen Wang, Minhao Cheng, Felix Yu, Cho-Jui Hsieh

Federated learning (FL) has recently attracted increasing attention from academia and industry, with the ultimate goal of achieving collaborative training under privacy and communication constraints.

Federated Learning Image Classification

Generalizing Few-Shot NAS with Gradient Matching

1 code implementation • ICLR 2022 • Shoukang Hu, Ruochen Wang, Lanqing Hong, Zhenguo Li, Cho-Jui Hsieh, Jiashi Feng

Efficient performance estimation of architectures drawn from large search spaces is essential to Neural Architecture Search.

Neural Architecture Search

Learning to Schedule Learning rate with Graph Neural Networks

no code implementations • ICLR 2022 • Yuanhao Xiong, Li-Cheng Lan, Xiangning Chen, Ruochen Wang, Cho-Jui Hsieh

By constructing a directed graph for the underlying neural network of the target problem, GNS encodes current dynamics with a graph message passing network and trains an agent to control the learning rate accordingly via reinforcement learning.

Benchmarking Image Classification +2

Rethinking Architecture Selection in Differentiable NAS

1 code implementation • ICLR 2021 • Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh

Differentiable Neural Architecture Search is one of the most popular Neural Architecture Search (NAS) methods, owing to its search efficiency and simplicity: it jointly optimizes the model weights and architecture parameters in a weight-sharing supernet via gradient-based algorithms.

Neural Architecture Search
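The joint weight/architecture optimization described above can be shown on a one-edge toy supernet. This is my own illustration under simplifying assumptions (two candidate ops, a plain squared loss, hand-derived gradients), not the paper's code: the edge mixes a linear op and a zero op by a softmax over architecture parameters, and both the model weight and the architecture parameters take gradient steps on the same loss.

```python
import math

# Toy DARTS-style sketch (assumed, not the paper's implementation): one
# supernet edge mixes two candidate ops -- linear (w*x) and zero -- weighted
# by softmax(alpha); w and alpha are both trained by gradient descent.

def softmax(a):
    e = [math.exp(v) for v in a]
    s = sum(e)
    return [v / s for v in e]

x, target = 1.0, 2.0
w = 0.5             # model weight, used by the linear candidate op
alpha = [0.0, 0.0]  # architecture parameters: [linear op, zero op]
lr = 0.1

for _ in range(100):
    p = softmax(alpha)
    out = p[0] * (w * x) + p[1] * 0.0    # mixed output of the edge
    err = out - target                   # d(loss)/d(out) = 2*err for squared loss
    grad_w = 2 * err * p[0] * x
    # chain rule through the softmax for the architecture parameters
    grad_a0 = 2 * err * (w * x) * p[0] * (1 - p[0])
    grad_a1 = 2 * err * (0.0 - w * x) * p[0] * p[1]
    w -= lr * grad_w
    alpha[0] -= lr * grad_a0
    alpha[1] -= lr * grad_a1

p = softmax(alpha)
print(round(p[0], 3), round(p[0] * w * x, 3))  # linear op's weight, final output
```

After training, the architecture parameter of the useful (linear) op dominates the zero op, which is exactly the joint-optimization behavior the snippet describes; the paper then asks how to select the final architecture from such parameters.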
