Search Results for author: Xiangning Chen

Found 17 papers, 9 papers with code

Red Teaming Language Model Detectors with Language Models

2 code implementations • 31 May 2023 • Zhouxing Shi, Yihan Wang, Fan Yin, Xiangning Chen, Kai-Wei Chang, Cho-Jui Hsieh

The prevalence and strong capability of large language models (LLMs) present significant safety and ethical risks if exploited by malicious users.

Adversarial Robustness • Language Modelling +2

Symbol tuning improves in-context learning in language models

no code implementations • 15 May 2023 • Jerry Wei, Le Hou, Andrew Lampinen, Xiangning Chen, Da Huang, Yi Tay, Xinyun Chen, Yifeng Lu, Denny Zhou, Tengyu Ma, Quoc V. Le

We present symbol tuning - finetuning language models on in-context input-label pairs where natural language labels (e.g., "positive/negative sentiment") are replaced with arbitrary symbols (e.g., "foo/bar").
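To make the setup concrete, here is a minimal sketch of the transformation the abstract describes, assuming a simple sentiment task; the symbol mapping and prompt format are illustrative choices, not the paper's exact recipe.

```python
# Illustrative sketch of symbol tuning's data transformation: natural-language
# labels in in-context examples are replaced with arbitrary symbols, so the
# model must infer the task from input-label pairs alone. The mapping and
# prompt format below are hypothetical, not the paper's exact recipe.
SYMBOL_MAP = {"positive": "foo", "negative": "bar"}

def symbolize(examples):
    """Replace natural-language labels with arbitrary symbols."""
    return [(text, SYMBOL_MAP[label]) for text, label in examples]

def build_prompt(examples, query):
    """Format in-context input-label pairs followed by an unlabeled query."""
    lines = [f"Input: {text}\nLabel: {label}" for text, label in examples]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

demos = [("Great movie!", "positive"), ("Terrible plot.", "negative")]
print(build_prompt(symbolize(demos), "I loved it."))
```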

In-Context Learning

Towards Efficient and Scalable Sharpness-Aware Minimization

2 code implementations • CVPR 2022 • Yong Liu, Siqi Mai, Xiangning Chen, Cho-Jui Hsieh, Yang You

Recently, Sharpness-Aware Minimization (SAM), which connects the geometry of the loss landscape and generalization, has demonstrated significant performance boosts on training large-scale models such as vision transformers.
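For context, below is a minimal sketch of the vanilla SAM update that this paper aims to make more efficient and scalable; the paper's efficiency techniques themselves are not reproduced, and `model`, `loss_fn`, and `optimizer` are assumed to be standard PyTorch objects.

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One vanilla SAM update: an ascent step to the worst-case nearby
    weights, then a descent step using the gradient measured there."""
    # First pass: gradient at the current weights.
    loss_fn(model(x), y).backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                        # climb toward higher loss
            perturbations.append((p, e))
    optimizer.zero_grad()
    # Second pass: the gradient at the perturbed point drives the real update.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)                        # restore the original weights
    optimizer.step()
    optimizer.zero_grad()
```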

Can Vision Transformers Perform Convolution?

no code implementations • 2 Nov 2021 • Shanda Li, Xiangning Chen, Di He, Cho-Jui Hsieh

Several recent studies have demonstrated that attention-based networks, such as Vision Transformer (ViT), can outperform Convolutional Neural Networks (CNNs) on several computer vision tasks without using convolutional layers.

Learning to Schedule Learning Rate with Graph Neural Networks

no code implementations • ICLR 2022 • Yuanhao Xiong, Li-Cheng Lan, Xiangning Chen, Ruochen Wang, Cho-Jui Hsieh

By constructing a directed graph for the underlying neural network of the target problem, the proposed GNS scheduler encodes the current training dynamics with a graph message-passing network and trains an agent via reinforcement learning to control the learning rate accordingly.
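A toy, purely illustrative sketch of the idea (not the paper's GNS): per-layer training statistics flow along a directed layer graph via one round of message passing, and a stand-in policy head maps the pooled graph embedding to a learning-rate multiplier. All features, weights, and the policy here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Adjacency of a 3-layer chain: entry (i, j) = 1 means layer i feeds layer j.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)

# Per-layer features, e.g. [gradient norm, weight norm, current lr].
X = np.array([[0.8, 1.2, 0.1],
              [0.5, 0.9, 0.1],
              [0.3, 1.1, 0.1]])

W_msg = rng.normal(size=(3, 3))     # message transform (stand-in)
w_pol = rng.normal(size=3)          # linear "policy" head (stand-in)

H = np.tanh((A.T @ X) @ W_msg + X)  # aggregate upstream messages per layer
g = H.mean(axis=0)                  # graph-level readout
lr_scale = float(np.exp(np.tanh(w_pol @ g)))  # multiplicative lr action
print(f"learning-rate multiplier: {lr_scale:.3f}")
```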

Benchmarking • Image Classification +2

Sharpness-Aware Minimization in Large-Batch Training: Training Vision Transformer In Minutes

no code implementations • 29 Sep 2021 • Yong Liu, Siqi Mai, Xiangning Chen, Cho-Jui Hsieh, Yang You

Large-batch training is an important direction for distributed machine learning, which can improve the utilization of large-scale clusters and therefore accelerate the training process.

Rethinking Architecture Selection in Differentiable NAS

1 code implementation • ICLR 2021 • Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh

Differentiable Neural Architecture Search is one of the most popular Neural Architecture Search (NAS) methods, valued for its search efficiency and simplicity, which it achieves by jointly optimizing the model weights and architecture parameters in a weight-sharing supernet via gradient-based algorithms.
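As background for the paper's critique, here is a minimal sketch of the standard weight-sharing supernet edge used in differentiable NAS, where a softmax over architecture parameters mixes the candidate operations; the candidate set below is illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Weight-sharing edge: a softmax over architecture parameters alpha
    mixes the outputs of all candidate operations."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture params

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Illustrative candidate operations for one edge.
ops = [nn.Conv2d(8, 8, 3, padding=1),
       nn.MaxPool2d(3, stride=1, padding=1),
       nn.Identity()]
edge = MixedOp(ops)
out = edge(torch.randn(1, 8, 16, 16))
```

The paper questions the conventional step of selecting the operation with the largest `alpha` after search, arguing that the magnitude of an architecture parameter need not reflect the operation's actual contribution.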

Neural Architecture Search

Concurrent Adversarial Learning for Large-Batch Training

no code implementations • ICLR 2022 • Yong Liu, Xiangning Chen, Minhao Cheng, Cho-Jui Hsieh, Yang You

Current methods usually use extensive data augmentation to increase the batch size, but we found that the performance gain from data augmentation decreases as batch size increases, and data augmentation becomes insufficient after a certain point.
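The sketch below shows plain single-step adversarial training on a mixed clean/adversarial objective, as a simplified stand-in for the concurrent scheme the paper proposes; the FGSM step, the 50/50 loss weighting, and the `eps` value are illustrative assumptions.

```python
import torch

def adversarial_train_step(model, loss_fn, x, y, optimizer, eps=0.01):
    # Generate adversarial examples with a single FGSM step (simplified;
    # the paper's concurrent scheme is not reproduced here).
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

    # Train on the clean and adversarial batches jointly.
    optimizer.zero_grad()
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```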

Data Augmentation

2.5D Visual Relationship Detection

1 code implementation • 26 Apr 2021 • Yu-Chuan Su, Soravit Changpinyo, Xiangning Chen, Sathish Thoppay, Cho-Jui Hsieh, Lior Shapira, Radu Soricut, Hartwig Adam, Matthew Brown, Ming-Hsuan Yang, Boqing Gong

To enable progress on this task, we create a new dataset consisting of 220k human-annotated 2.5D relationships among 512K objects from 11K images.

Benchmarking • Depth Estimation +2

Robust and Accurate Object Detection via Adversarial Learning

1 code implementation • CVPR 2021 • Xiangning Chen, Cihang Xie, Mingxing Tan, Li Zhang, Cho-Jui Hsieh, Boqing Gong

Data augmentation has become a de facto component for training high-performance deep image classifiers, but its potential is under-explored for object detection.

AutoML • Data Augmentation +3

Stabilizing Differentiable Architecture Search via Perturbation-based Regularization

1 code implementation • ICML 2020 • Xiangning Chen, Cho-Jui Hsieh

Furthermore, we mathematically show that SDARTS implicitly regularizes the Hessian norm of the validation loss, which accounts for a smoother loss landscape and improved performance.
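A heavily simplified sketch of the random-smoothing flavor of this idea: perturbing the architecture parameters before each weight update encourages the shared weights to be robust to nearby architectures, which is the smoothing effect the excerpt refers to. `alpha` is assumed to be the supernet's architecture-parameter tensor, and the perturbation scale is illustrative.

```python
import torch

def perturbed_weight_step(model, alpha, loss_fn, x, y, w_optimizer, eps=1e-3):
    """One weight update under a random perturbation of the architecture
    parameters (a simplified take on perturbation-based regularization)."""
    delta = eps * torch.randn_like(alpha)
    with torch.no_grad():
        alpha.add_(delta)               # perturb architecture parameters
    w_optimizer.zero_grad()
    loss_fn(model(x), y).backward()     # gradient at the perturbed alpha
    w_optimizer.step()
    with torch.no_grad():
        alpha.sub_(delta)               # restore alpha
```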

Adversarial Attack • Neural Architecture Search

Efficient Neural Interaction Function Search for Collaborative Filtering

2 code implementations • 28 Jun 2019 • Quanming Yao, Xiangning Chen, James Kwok, Yong Li, Cho-Jui Hsieh

Motivated by the recent success of automated machine learning (AutoML), we propose in this paper the search for simple neural interaction functions (SIF) in CF.
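To illustrate what a "simple interaction function" search space might look like, the candidate set below combines user and item embeddings with elementary operations; these candidates are hypothetical stand-ins, not the paper's actual search space.

```python
import torch

# Hypothetical candidate interaction functions between a user embedding u and
# an item embedding v; an AutoML-style search would select among (and
# parameterize) such functions. These candidates are illustrative only.
CANDIDATES = {
    "inner":  lambda u, v: (u * v).sum(-1),
    "plus":   lambda u, v: (u + v).sum(-1),
    "minus":  lambda u, v: (u - v).sum(-1),
    "max":    lambda u, v: torch.maximum(u, v).sum(-1),
    "concat": lambda u, v: torch.cat([u, v], dim=-1).sum(-1),
}

u = torch.randn(4, 16)  # batch of user embeddings
v = torch.randn(4, 16)  # batch of item embeddings
for name, f in CANDIDATES.items():
    print(name, f(u, v).shape)  # each yields one score per user-item pair
```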

AutoML • Collaborative Filtering

Learning to Recommend with Multiple Cascading Behaviors

no code implementations • 21 Sep 2018 • Chen Gao, Xiangnan He, Dahua Gan, Xiangning Chen, Fuli Feng, Yong Li, Tat-Seng Chua, Lina Yao, Yang Song, Depeng Jin

To fully exploit the signal in the data of multiple types of behaviors, we perform a joint optimization based on the multi-task learning framework, where the optimization on a behavior is treated as a task.
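A minimal sketch of such a joint objective, assuming one binary prediction task per behavior type; the behavior names and per-task weights below are illustrative.

```python
import torch

def multi_behavior_loss(scores, labels, weights):
    """Joint multi-task objective: the prediction loss on each behavior type
    (e.g. view, cart, purchase) is treated as one task and combined with
    per-task weights. Names and weighting are illustrative assumptions."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    return sum(w * bce(scores[b], labels[b]) for b, w in weights.items())

scores = {"view": torch.randn(8), "cart": torch.randn(8), "purchase": torch.randn(8)}
labels = {b: torch.randint(0, 2, (8,)).float() for b in scores}
loss = multi_behavior_loss(scores, labels, {"view": 0.2, "cart": 0.3, "purchase": 0.5})
print(loss.item())
```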

Multi-Task Learning • Recommendation Systems
