no code implementations • 12 Nov 2024 • Youngseok Yoon, Sangwoo Hong, Hyungjoon Joo, Yao Qin, Haewon Jeong, Jungwoo Lee
Long-tailed image recognition is a computer vision problem that considers a real-world class distribution rather than an artificial uniform one.
no code implementations • 21 Oct 2024 • Zhiyu Xue, Haohan Wang, Yao Qin, Ramtin Pedarsani
Adversarial training is the most effective method to obtain adversarial robustness for deep neural networks by directly involving adversarial samples in the training procedure.
1 code implementation • 1 Oct 2024 • Kenan Tang, Peiyang Song, Yao Qin, Xifeng Yan
As a type of figurative language, an East Asian idiom condenses rich cultural background into only a few characters.
no code implementations • 4 Jul 2024 • Youngseok Yoon, Dainong Hu, Iain Weissburg, Yao Qin, Haewon Jeong
Our theoretical analysis aligns with empirical observations of the generated images in the Chain of Diffusion.
no code implementations • 4 Jul 2024 • Andong Hua, Mehak Preet Dhaliwal, Ryan Burke, Laya Pullela, Yao Qin
We present NutriBench, the first publicly available natural language meal description nutrition benchmark.
no code implementations • 24 Jun 2024 • Yash Kumar Lal, Preethi Lahoti, Aradhana Sinha, Yao Qin, Ananth Balashankar
We formalize the task of automated adversarial discovery for safety classifiers: finding new attacks along previously unseen harm dimensions that expose new weaknesses in the classifier.
no code implementations • 9 May 2024 • Meng Song, Xuezhi Wang, Tanay Biradar, Yao Qin, Manmohan Chandraker
Transformer-based methods have exhibited significant generalization ability when prompted with target-domain demonstrations or example solutions during inference.
1 code implementation • 15 Dec 2023 • Litian Liu, Yao Qin
By regularizing the distances to decision boundaries based on feature deviation from the mean, we develop a hyperparameter-free, auxiliary model-free OOD detector.
Computational Efficiency · Out-of-Distribution (OOD) Detection
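A minimal sketch of the kind of boundary-distance OOD score this entry describes, assuming a linear final layer: the distance from a feature to each decision boundary is averaged and divided by the feature's deviation from the training mean. This is an illustrative reading of the abstract, not the paper's exact formulation.

```python
import numpy as np

def boundary_distance_score(z, W, b, mu):
    """Hypothetical boundary-distance OOD score (illustrative sketch).

    z  : (d,) feature vector of a test sample
    W  : (C, d) weights of the final linear layer
    b  : (C,) biases
    mu : (d,) mean of training features
    Returns a scalar; larger values suggest in-distribution.
    """
    logits = W @ z + b
    k = int(np.argmax(logits))  # predicted class
    dists = []
    for c in range(W.shape[0]):
        if c == k:
            continue
        w_diff = W[k] - W[c]
        # distance from z to the decision boundary between classes k and c
        dists.append(abs(w_diff @ z + b[k] - b[c]) / np.linalg.norm(w_diff))
    # regularize by the feature's deviation from the training mean
    return float(np.mean(dists) / np.linalg.norm(z - mu))
```

Because the score only reuses the classifier's own weights and the training-feature mean, it needs no extra hyperparameters or auxiliary model, matching the property the abstract highlights.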
1 code implementation • CVPR 2024 • Andong Hua, Jindong Gu, Zhiyu Xue, Nicholas Carlini, Eric Wong, Yao Qin
Based on this, we propose Robust Linear Initialization (RoLI) for adversarial finetuning, which initializes the linear head with the weights obtained by adversarial linear probing to maximally inherit the robustness from pretraining.
no code implementations • 2 Nov 2023 • Litian Liu, Yao Qin
By analyzing this trend, we discover that features of in-distribution (ID) samples cluster closer to the weight vectors compared to features of OOD samples.
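One simple way to turn the observation above into a detector (an assumed reading, not necessarily the paper's method) is to score a sample by the cosine similarity between its feature and the weight vector of its predicted class:

```python
import numpy as np

def weight_angle_score(z, W):
    """Sketch of an angle-based OOD heuristic: ID features are assumed
    to align more closely with the weight vector of their predicted
    class than OOD features do. Larger score = more ID-like.

    z : (d,) feature vector
    W : (C, d) final-layer weight matrix
    """
    k = int(np.argmax(W @ z))  # predicted class
    return float(W[k] @ z / (np.linalg.norm(W[k]) * np.linalg.norm(z) + 1e-12))
```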
1 code implementation • 2 Nov 2023 • Bhagyashree Puranik, Ahmad Beirami, Yao Qin, Upamanyu Madhow
State-of-the-art techniques for enhancing robustness of deep networks mostly rely on empirical risk minimization with suitable data augmentation.
no code implementations • 25 Oct 2023 • Ananth Balashankar, Xiao Ma, Aradhana Sinha, Ahmad Beirami, Yao Qin, Jilin Chen, Alex Beutel
As large language models (LLMs) are widely adopted, new safety issues and policies emerge, to which existing safety classifiers do not generalize well.
2 code implementations • 24 Jul 2023 • Jindong Gu, Zhen Han, Shuo Chen, Ahmad Beirami, Bailan He, Gengyuan Zhang, Ruotong Liao, Yao Qin, Volker Tresp, Philip Torr
This paper aims to provide a comprehensive survey of cutting-edge research in prompt engineering on three types of vision-language models: multimodal-to-text generation models (e.g., Flamingo), image-text matching models (e.g., …
1 code implementation • 22 May 2023 • Xinlu Zhang, Shiyang Li, Xianjun Yang, Chenxin Tian, Yao Qin, Linda Ruth Petzold
Although offering improved data privacy protection, domain-specific small language models (SLMs) often underperform LLMs, emphasizing the need for methods that reduce this performance gap while alleviating privacy concerns.
no code implementations • 22 May 2023 • Ananth Balashankar, Xuezhi Wang, Yao Qin, Ben Packer, Nithum Thain, Jilin Chen, Ed H. Chi, Alex Beutel
We demonstrate that with a small amount of human-annotated counterfactual data (10%), we can generate a counterfactual augmentation dataset with learned labels, that provides an 18-20% improvement in robustness and a 14-21% reduction in errors on 6 out-of-domain datasets, comparable to that of a fully human-annotated counterfactual dataset for both sentiment classification and question paraphrase tasks.
no code implementations • 17 Apr 2023 • Jindong Gu, Ahmad Beirami, Xuezhi Wang, Alex Beutel, Philip Torr, Yao Qin
With the advent of vision-language models (VLMs) that can perform in-context and prompt-based learning, how can we design prompting approaches that robustly generalize to distribution shift and can be used on novel classes outside the support set of the prompts?
no code implementations • 19 Mar 2023 • Shaila Niazi, Navid Anjum Aadit, Masoud Mohseni, Shuvro Chowdhury, Yao Qin, Kerem Y. Camsari
These results demonstrate the potential of using Ising machines for traditionally hard-to-train deep generative Boltzmann networks, with further possible improvement in nanodevice-based realizations.
no code implementations • 22 Feb 2023 • Yao Qin, Xuezhi Wang, Balaji Lakshminarayanan, Ed H. Chi, Alex Beutel
A wide breadth of research has devised data augmentation approaches that can improve both accuracy and generalization performance for neural networks.
1 code implementation • 10 Feb 2023 • Yuanxin Ye, Mengmeng Wang, Liang Zhou, Guangyang Lei, Jianwei Fan, Yao Qin
First, through the inner fusion property of 3D convolution, we design a new feature fusion way that can simultaneously extract and fuse the feature information from bi-temporal images.
no code implementations • 28 Oct 2022 • Jieyu Zhao, Xuezhi Wang, Yao Qin, Jilin Chen, Kai-Wei Chang
Large pre-trained language models have shown remarkable performance over the past few years.
no code implementations • 20 Nov 2021 • Jindong Gu, Volker Tresp, Yao Qin
However, when ViTs are attacked by an adversary, the attention mechanism can be easily fooled to focus more on the adversarially perturbed patches and cause a mistake.
no code implementations • 15 Oct 2021 • Yao Qin, Chiyuan Zhang, Ting Chen, Balaji Lakshminarayanan, Alex Beutel, Xuezhi Wang
We show that patch-based negative augmentation consistently improves robustness of ViTs across a wide set of ImageNet based robustness benchmarks.
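A minimal sketch of the kind of patch-level negative augmentation this entry refers to, under the assumption that negatives are formed by permuting non-overlapping patches, destroying global structure while preserving local statistics (the training objective that uses these negatives is not shown here):

```python
import numpy as np

def shuffle_patches(img, patch, rng):
    """Randomly permute non-overlapping patches of an image.

    img   : (H, W, C) array with H and W divisible by `patch`
    patch : side length of each square patch
    rng   : numpy Generator for reproducibility
    """
    H, W, C = img.shape
    gh, gw = H // patch, W // patch
    # split into a (gh*gw, patch, patch, C) stack of patches
    patches = (img.reshape(gh, patch, gw, patch, C)
                  .transpose(0, 2, 1, 3, 4)
                  .reshape(gh * gw, patch, patch, C))
    patches = patches[rng.permutation(gh * gw)]
    # stitch the permuted patches back into an image
    return (patches.reshape(gh, gw, patch, patch, C)
                   .transpose(0, 2, 1, 3, 4)
                   .reshape(H, W, C))
```

During training, images produced this way would be treated as negatives, e.g., by penalizing confident predictions on them.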
no code implementations • 29 Sep 2021 • Jindong Gu, Volker Tresp, Yao Qin
Based on extensive qualitative and quantitative experiments, we discover that ViT's stronger robustness to natural corrupted patches and higher vulnerability against adversarial patches are both caused by the attention mechanism.
no code implementations • 16 Feb 2021 • Junzheng Wu, Biao Li, Yao Qin, Weiping Ni, Han Zhang, Yuli Sun
In this paper, a novel CD method based on the graph convolutional network (GCN) and multiscale object-based technique is proposed for both homogeneous and heterogeneous images.
no code implementations • 1 Jan 2021 • Yao Qin, Xuezhi Wang, Balaji Lakshminarayanan, Ed Chi, Alex Beutel
Despite this, most existing work simply reuses the original label from the clean data, and the choice of label accompanying the augmented data is relatively less explored.
no code implementations • EMNLP 2020 • Tianlu Wang, Xuezhi Wang, Yao Qin, Ben Packer, Kang Li, Jilin Chen, Alex Beutel, Ed Chi
Experiments on real-world NLP datasets demonstrate that our method can generate more diverse and fluent adversarial texts, compared to many existing adversarial text generation approaches.
no code implementations • NeurIPS 2021 • Yao Qin, Xuezhi Wang, Alex Beutel, Ed H. Chi
To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS) that integrates the correlations of adversarial robustness and calibration into training by adaptively softening labels for an example based on how easily it can be attacked by an adversary.
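The core mechanism can be sketched as follows, assuming a per-example robustness score in [0, 1] (where 1 means hard to attack) is already available; the way the paper estimates that score, and the value of the maximum smoothing strength, are assumptions here:

```python
import numpy as np

def adaptive_smooth_labels(onehot, robustness, max_eps=0.2):
    """Robustness-adaptive label smoothing (illustrative sketch).

    onehot     : (N, C) one-hot labels
    robustness : (N,) scores in [0, 1]; 1 = hard to attack
    Examples that are easier to attack receive stronger smoothing,
    up to `max_eps` probability mass spread uniformly over classes.
    """
    n, c = onehot.shape
    eps = max_eps * (1.0 - robustness)  # per-example smoothing strength
    eps = eps[:, None]
    return (1.0 - eps) * onehot + eps / c
```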
no code implementations • 18 Feb 2020 • Yao Qin, Nicholas Frosst, Colin Raffel, Garrison Cottrell, Geoffrey Hinton
There has been an ongoing cycle where stronger defenses against adversarial attacks are subsequently broken by a more advanced defense-aware attack.
no code implementations • ICLR 2020 • Yao Qin, Nicholas Frosst, Sara Sabour, Colin Raffel, Garrison Cottrell, Geoffrey Hinton
Then, we diagnose the adversarial examples for CapsNets and find that the success of the reconstructive attack is highly related to the visual similarity between the source and target class.
no code implementations • ICLR 2019 • Ian Goodfellow, Yao Qin, David Berthelot
Current machine learning algorithms can be easily fooled by adversarial examples.
1 code implementation • 22 Mar 2019 • Yao Qin, Nicholas Carlini, Ian Goodfellow, Garrison Cottrell, Colin Raffel
Adversarial examples are inputs to machine learning models designed by an adversary to cause an incorrect output.
Automatic Speech Recognition (ASR) +2
no code implementations • 29 Aug 2018 • Yao Qin, Lorenzo Bruzzone, Biao Li, Yuanxin Ye
To be specific, the proposed CDCL method is an iterative process with three main stages, i.e., two rounds of RW-based pseudo-labeling and cross-domain learning via C-CCA.
no code implementations • 29 Aug 2018 • Yao Qin, Lorenzo Bruzzone, Biao Li
We then model the subspace invariance between the two domains as projection matrices, and the original tensors are projected via Tucker decomposition into lower-dimensional core tensors within the invariant tensor subspace.
3 code implementations • 22 May 2018 • Yao Qin, Konstantinos Kamnitsas, Siddharth Ancha, Jay Nanavati, Garrison Cottrell, Antonio Criminisi, Aditya Nori
We propose the autofocus convolutional layer for semantic segmentation with the objective of enhancing the capabilities of neural networks for multi-scale processing.
Ranked #5 on Brain Tumor Segmentation on BRATS-2015
1 code implementation • 26 May 2017 • Yao Qin, Mengyang Feng, Huchuan Lu, Garrison W. Cottrell
The CCA acts as an efficient pixel-wise aggregation algorithm that can integrate state-of-the-art methods, yielding even better performance.
13 code implementations • 7 Apr 2017 • Yao Qin, Dongjin Song, Haifeng Chen, Wei Cheng, Guofei Jiang, Garrison Cottrell
The nonlinear autoregressive exogenous (NARX) model, which predicts the current value of a time series based upon its previous values as well as the current and past values of multiple driving (exogenous) series, has been studied for decades.
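For concreteness, the general NARX form described above can be instantiated linearly (an illustrative sketch only; the paper itself builds an attention-based recurrent network rather than this linear model):

```python
import numpy as np

def narx_predict(y_hist, x_hist, a, b):
    """One-step prediction with a linear NARX model:

        y_t = sum_i a_i * y_{t-i} + sum_j b_j . x_{t-j+1}

    y_hist : (p,) previous target values, most recent first
    x_hist : (q, d) current and past exogenous vectors, most recent first
    a      : (p,) autoregressive coefficients
    b      : (q, d) exogenous coefficients
    """
    return float(a @ y_hist + np.sum(b * x_hist))
```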
no code implementations • CVPR 2015 • Yao Qin, Huchuan Lu, Yiqun Xu, He Wang
In this paper, we introduce Cellular Automata, a dynamic evolution model, to intuitively detect the salient object.