1 code implementation • 4 Oct 2024 • James Wang, Ran Li, Junfeng Yang, Chengzhi Mao
We also show that examples generated by RAFT can be used to train adversarially robust detectors.
no code implementations • 8 Aug 2024 • Wei Hao, Ran Li, Weiliang Zhao, Junfeng Yang, Chengzhi Mao
Large language models (LLMs) can be abused at scale to create non-factual content and spread disinformation.
no code implementations • 13 Jun 2024 • Qingyuan Liu, Pengyuan Shi, Yun-Yun Tsai, Chengzhi Mao, Junfeng Yang
In this paper, we propose a novel framework for detecting videos synthesized from multiple state-of-the-art (SOTA) generative models, such as Stable Video Diffusion.
1 code implementation • CVPR 2024 • Chenshuang Zhang, Fei Pan, Junmo Kim, In So Kweon, Chengzhi Mao
In this work, we introduce a generative model as a data source for synthesizing hard images that benchmark deep models' robustness.
1 code implementation • 16 Mar 2024 • Haozhe Chen, Carl Vondrick, Chengzhi Mao
How do large language models (LLMs) obtain their answers?
1 code implementation • 23 Jan 2024 • Chengzhi Mao, Carl Vondrick, Hao Wang, Junfeng Yang
We find that large language models (LLMs) are more likely to modify human-written text than AI-generated text when tasked with rewriting.
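As a rough illustration of how this observation can become a detection signal (a minimal sketch with a hypothetical `rewrite_with_llm` helper, not the paper's released code), one can ask an LLM to rewrite a passage and score how much of it changes; text that comes back nearly untouched is more likely to be machine-generated.

```python
# Minimal sketch: rewrite-based AI-text detection.
# `rewrite_with_llm` is a hypothetical placeholder for any LLM call that rewrites
# the input; the threshold below is illustrative, not taken from the paper.
import difflib

def rewrite_with_llm(text: str) -> str:
    """Placeholder: prompt an LLM with 'Rewrite the following text: ...'."""
    raise NotImplementedError

def rewrite_change_score(text: str) -> float:
    """Fraction of characters changed by the LLM rewrite (higher = more edits)."""
    rewritten = rewrite_with_llm(text)
    return 1.0 - difflib.SequenceMatcher(None, text, rewritten).ratio()

def looks_ai_generated(text: str, threshold: float = 0.2) -> bool:
    # Few edits suggest the text was already machine-written.
    return rewrite_change_score(text) < threshold
```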
no code implementations • 29 Oct 2023 • Noah Thomas McDermott, Junfeng Yang, Chengzhi Mao
Instead, we propose to make the language models robust at test time.
1 code implementation • 16 Oct 2023 • Haozhe Chen, Junfeng Yang, Carl Vondrick, Chengzhi Mao
Large-scale pre-trained vision foundation models, such as CLIP, have become de facto backbones for various vision tasks.
no code implementations • 12 Oct 2023 • Md Mahbubur Rahman, Ira Ceka, Chengzhi Mao, Saikat Chakraborty, Baishakhi Ray, Wei Le
Our results show that CausalVul consistently improved model accuracy, robustness, and OOD performance for all the state-of-the-art models and datasets we experimented with.
no code implementations • 12 May 2023 • Wei Hao, Zixi Wang, Lauren Hong, Lingxiao Li, Nader Karayanni, Chengzhi Mao, Junfeng Yang, Asaf Cidon
ML models are increasingly being pushed to mobile devices for low-latency inference and offline operation.
no code implementations • 22 Mar 2023 • Yun-Yun Tsai, Ju-Chin Chao, Albert Wen, Zhaoyuan Yang, Chengzhi Mao, Tapan Shah, Junfeng Yang
Test-time defenses address these issues, but most existing ones require adapting the model weights; they therefore do not work on frozen models and complicate model memory management.
no code implementations • CVPR 2023 • Ruoshi Liu, Sachit Menon, Chengzhi Mao, Dennis Park, Simon Stent, Carl Vondrick
Experiments and visualizations show that the method is able to generate multiple possible solutions that are consistent with the observation of the shadow.
2 code implementations • 14 Dec 2022 • Chengzhi Mao, Scott Geng, Junfeng Yang, Xin Wang, Carl Vondrick
We apply this training loss to two adaptation methods: model finetuning and visual prompt tuning.
no code implementations • 13 Dec 2022 • Lingyu Zhang, Chengzhi Mao, Junfeng Yang, Carl Vondrick
Even under adaptive attacks where the adversary knows our defense, our algorithm is still effective.
1 code implementation • CVPR 2023 • Chengzhi Mao, Revant Teotia, Amrutha Sundar, Sachit Menon, Junfeng Yang, Xin Wang, Carl Vondrick
We propose a "doubly right" object recognition benchmark, where the metric requires the model to simultaneously produce both the right labels as well as the right rationales.
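A toy sketch of what such a metric looks like (illustrative only; the benchmark's actual rationale matching is more involved): a prediction counts only when both the label and the selected rationale agree with the ground truth.

```python
# Toy "doubly right" accuracy: credit a prediction only if both the label
# and the rationale are correct. The data format here is illustrative.
def doubly_right_accuracy(predictions, ground_truth):
    """Both arguments are lists of (label, rationale) pairs."""
    hits = sum(
        p_label == g_label and p_rationale == g_rationale
        for (p_label, p_rationale), (g_label, g_rationale) in zip(predictions, ground_truth)
    )
    return hits / len(ground_truth)
```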
1 code implementation • 12 Dec 2022 • Chengzhi Mao, Lingyu Zhang, Abhishek Joshi, Junfeng Yang, Hao Wang, Carl Vondrick
In this paper, we introduce a framework that uses the dense intrinsic constraints in natural images to robustify inference.
no code implementations • 7 Oct 2022 • Hao Wang, WanYu Lin, Hao He, Di Wang, Chengzhi Mao, Muhan Zhang
Recent years have seen principles and guidance for the accountable and ethical use of artificial intelligence (AI) spring up around the globe.
no code implementations • ICCV 2023 • Ruoshi Liu, Chengzhi Mao, Purva Tendulkar, Hao Wang, Carl Vondrick
Many machine learning methods operate by inverting a neural network at inference time, which has become a popular technique for solving inverse problems in computer vision, robotics, and graphics.
1 code implementation • CVPR 2022 • Chengzhi Mao, Kevin Xia, James Wang, Hao Wang, Junfeng Yang, Elias Bareinboim, Carl Vondrick
Visual representations underlie object recognition tasks, but they often contain both robust and non-robust features.
1 code implementation • 22 Apr 2022 • Wei Hao, Aahil Awatramani, Jiayang Hu, Chengzhi Mao, Pin-Chun Chen, Eyal Cidon, Asaf Cidon, Junfeng Yang
In this paper, we introduce a new evasive attack, DIVA, that exploits these differences in edge adaptation by adding adversarial noise to input data that maximizes the output difference between the original and adapted models.
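A hedged sketch of the core idea (gradient ascent on the disagreement between the original model and its edge-adapted copy; the objective and hyperparameters are illustrative, not the paper's implementation):

```python
# Sketch: perturb the input to maximize disagreement between the original model
# and its edge-adapted (e.g., quantized or pruned) copy, so the attack looks
# benign when checked against the original model.
import torch

def output_difference_attack(x, original_model, adapted_model,
                             steps=10, step_size=1e-2, eps=8 / 255):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        disagreement = (original_model(x + delta) - adapted_model(x + delta)).pow(2).mean()
        disagreement.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()  # ascend on the output gap
            delta.clamp_(-eps, eps)                 # keep the noise small
            delta.grad.zero_()
    return (x + delta).detach()
```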
1 code implementation • 7 Apr 2022 • Matthew Lawhon, Chengzhi Mao, Junfeng Yang
In this paper, we propose a novel defense that can dynamically adapt the input using the intrinsic structure from multiple self-supervised tasks.
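A minimal sketch of input-level test-time adaptation in this spirit (the loss functions and hyperparameters are placeholders, not the paper's recipe): rather than touching the model weights, optimize a small additive correction to the input so that several self-supervised objectives are satisfied.

```python
# Sketch: adapt the input, not the weights. `ssl_losses` are placeholder
# callables (e.g., rotation prediction, contrastive consistency) that each map
# an image batch to a scalar self-supervised loss.
import torch

def adapt_input(x, ssl_losses, steps=5, lr=1e-2):
    correction = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.SGD([correction], lr=lr)
    for _ in range(steps):
        loss = sum(loss_fn(x + correction) for loss_fn in ssl_losses)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (x + correction).detach()
```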
no code implementations • ICLR 2022 • Mia Chiquier, Chengzhi Mao, Carl Vondrick
Automatic speech recognition systems have created exciting possibilities for applications; however, they also enable opportunities for systematic eavesdropping.
Automatic Speech Recognition (ASR) +1
1 code implementation • ICLR 2022 • Chengzhi Mao, Lu Jiang, Mostafa Dehghani, Carl Vondrick, Rahul Sukthankar, Irfan Essa
Vision Transformer (ViT) is emerging as the state-of-the-art architecture for image recognition.
Ranked #3 on Domain Generalization on Stylized-ImageNet
1 code implementation • ICCV 2021 • Chengzhi Mao, Mia Chiquier, Hao Wang, Junfeng Yang, Carl Vondrick
We find that images contain intrinsic structure that enables the reversal of many adversarial attacks.
1 code implementation • CVPR 2021 • Chengzhi Mao, Augustine Cha, Amogh Gupta, Hao Wang, Junfeng Yang, Carl Vondrick
We introduce a framework for learning robust visual representations that generalize to new viewpoints, backgrounds, and scene contexts.
Ranked #44 on Image Classification on ObjectNet (using extra training data)
1 code implementation • ECCV 2020 • Chengzhi Mao, Amogh Gupta, Vikram Nitin, Baishakhi Ray, Shuran Song, Junfeng Yang, Carl Vondrick
Although deep networks achieve strong accuracy on a range of computer vision benchmarks, they remain vulnerable to adversarial attacks, where imperceptible input perturbations fool the network.
1 code implementation • 22 Apr 2020 • Robby Costales, Chengzhi Mao, Raphael Norwitz, Bryan Kim, Junfeng Yang
We propose a live attack on deep learning systems that patches model parameters in memory to achieve predefined malicious behavior on a certain set of inputs.
no code implementations • 6 Oct 2019 • Guangyu Shen, Chengzhi Mao, Junfeng Yang, Baishakhi Ray
Due to the inherent robustness of segmentation models, traditional norm-bounded attack methods show limited effect on such models.
1 code implementation • NeurIPS 2019 • Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, Baishakhi Ray
Deep networks are well-known to be fragile to adversarial attacks.
no code implementations • 6 Feb 2019 • Hao Wang, Chengzhi Mao, Hao He, Ming-Min Zhao, Tommi S. Jaakkola, Dina Katabi
We consider the problem of inferring the values of an arbitrary set of variables (e.g., risk of diseases) given other observed variables (e.g., symptoms and diagnosed diseases) and high-dimensional signals (e.g., MRI images or EEG).