Search Results for author: Chengzhi Mao

Found 28 papers, 16 papers with code

ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object

1 code implementation • 27 Mar 2024 • Chenshuang Zhang, Fei Pan, Junmo Kim, In So Kweon, Chengzhi Mao

In this work, we introduce generative models as a data source for synthesizing hard images that benchmark deep models' robustness.

Benchmarking
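The recipe described in the abstract is generate-then-filter: synthesize candidate images with a text-to-image diffusion model and keep only the ones a pretrained classifier gets wrong. A minimal sketch of that idea follows; it is not the authors' released pipeline, and the checkpoints and prompt template are assumptions made for illustration.

```python
# Minimal generate-then-filter sketch (not the authors' released code).
# Assumptions: diffusers + torchvision installed, a CUDA GPU, and an
# illustrative prompt template for varying object backgrounds.
import torch
from diffusers import StableDiffusionPipeline
from torchvision import models, transforms

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
classifier = models.resnet50(weights="IMAGENET1K_V2").eval().to("cuda")

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def hard_examples(label_name, label_idx, backgrounds, n_per_bg=4):
    """Keep generated images that the classifier misclassifies."""
    kept = []
    for bg in backgrounds:
        prompt = f"a photo of a {label_name} in a {bg}"  # hypothetical template
        for img in pipe(prompt, num_images_per_prompt=n_per_bg).images:
            x = preprocess(img).unsqueeze(0).to("cuda")
            with torch.no_grad():
                pred = classifier(x).argmax(dim=1).item()
            if pred != label_idx:  # the classifier failed -> a "hard" image
                kept.append((img, bg))
    return kept
```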

Raidar: geneRative AI Detection viA Rewriting

1 code implementation • 23 Jan 2024 • Chengzhi Mao, Carl Vondrick, Hao Wang, Junfeng Yang

We find that large language models (LLMs) are more likely to modify human-written text than AI-generated text when tasked with rewriting.
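That asymmetry suggests a simple detector: ask an LLM to rewrite the input and measure how much it changes. Below is a minimal sketch of the rewriting idea, not the released Raidar code; the prompt wording, model name, and decision threshold are illustrative assumptions.

```python
# Minimal rewriting-based detection sketch (not the released Raidar code).
# Assumes the openai v1 SDK and OPENAI_API_KEY in the environment.
import difflib
from openai import OpenAI

client = OpenAI()

def rewrite(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works for the sketch
        messages=[{"role": "user",
                   "content": f"Rewrite the following text:\n\n{text}"}],
    )
    return resp.choices[0].message.content

def looks_ai_generated(text: str, threshold: float = 0.85) -> bool:
    # A similarity near 1.0 means the LLM left the text largely unchanged,
    # which the paper observes is more typical of AI-generated input.
    rewritten = rewrite(text)
    similarity = difflib.SequenceMatcher(None, text, rewritten).ratio()
    return similarity >= threshold
```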

Interpreting and Controlling Vision Foundation Models via Text Explanations

1 code implementation • 16 Oct 2023 • Haozhe Chen, Junfeng Yang, Carl Vondrick, Chengzhi Mao

Large-scale pre-trained vision foundation models, such as CLIP, have become de facto backbones for various vision tasks.

Model Editing · Visual Reasoning

Towards Causal Deep Learning for Vulnerability Detection

no code implementations • 12 Oct 2023 • Md Mahbubur Rahman, Ira Ceka, Chengzhi Mao, Saikat Chakraborty, Baishakhi Ray, Wei Le

Our results show that CausalVul consistently improved model accuracy, robustness, and OOD performance for all the state-of-the-art models and datasets we experimented with.

Vulnerability Detection

Monitoring and Adapting ML Models on Mobile Devices

no code implementations • 12 May 2023 • Wei Hao, Zixi Wang, Lauren Hong, Lingxiao Li, Nader Karayanni, Chengzhi Mao, Junfeng Yang, Asaf Cidon

ML models are increasingly being pushed to mobile devices for low-latency inference and offline operation.

Test-time Detection and Repair of Adversarial Samples via Masked Autoencoder

no code implementations • 22 Mar 2023 • Yun-Yun Tsai, Ju-Chin Chao, Albert Wen, Zhaoyuan Yang, Chengzhi Mao, Tapan Shah, Junfeng Yang

Test-time defenses solve these issues, but most existing ones require adapting the model weights; they therefore do not work on frozen models and complicate model memory management.

Contrastive Learning · Management

What You Can Reconstruct From a Shadow

no code implementations • CVPR 2023 • Ruoshi Liu, Sachit Menon, Chengzhi Mao, Dennis Park, Simon Stent, Carl Vondrick

Experiments and visualizations show that the method can generate multiple possible solutions that are consistent with the observed shadow.

3D Reconstruction · Object +1

Adversarially Robust Video Perception by Seeing Motion

no code implementations • 13 Dec 2022 • Lingyu Zhang, Chengzhi Mao, Junfeng Yang, Carl Vondrick

Even under adaptive attacks where the adversary knows our defense, our algorithm is still effective.

Adversarial Robustness

Robust Perception through Equivariance

1 code implementation • 12 Dec 2022 • Chengzhi Mao, Lingyu Zhang, Abhishek Joshi, Junfeng Yang, Hao Wang, Carl Vondrick

In this paper, we introduce a framework that uses the dense intrinsic constraints in natural images to robustify inference.

Adversarial Robustness · Instance Segmentation +2

Doubly Right Object Recognition: A Why Prompt for Visual Rationales

1 code implementation • CVPR 2023 • Chengzhi Mao, Revant Teotia, Amrutha Sundar, Sachit Menon, Junfeng Yang, Xin Wang, Carl Vondrick

We propose a "doubly right" object recognition benchmark, where the metric requires the model to simultaneously produce both the right labels as well as the right rationales.

Object Recognition

1st ICLR International Workshop on Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data (PAIR^2Struct)

no code implementations • 7 Oct 2022 • Hao Wang, WanYu Lin, Hao He, Di Wang, Chengzhi Mao, Muhan Zhang

Recent years have seen advances in principles and guidance relating to the accountable and ethical use of artificial intelligence (AI) spring up around the globe.

Shadows Shed Light on 3D Objects

no code implementations • 17 Jun 2022 • Ruoshi Liu, Sachit Menon, Chengzhi Mao, Dennis Park, Simon Stent, Carl Vondrick

Experiments and visualizations show that the method can generate multiple possible solutions that are consistent with the observed shadow.

3D Reconstruction · Object +1

Landscape Learning for Neural Network Inversion

no code implementations • ICCV 2023 • Ruoshi Liu, Chengzhi Mao, Purva Tendulkar, Hao Wang, Carl Vondrick

Many machine learning methods operate by inverting a neural network at inference time, which has become a popular technique for solving inverse problems in computer vision, robotics, and graphics.

Adversarial Defense
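For context, the baseline setting here is plain gradient-based inversion: optimize a latent input until the network's output matches a target. A minimal sketch of that baseline follows (the paper's contribution, learning an easier-to-optimize latent landscape, is not shown); the names and hyperparameters are illustrative.

```python
# Baseline gradient-based inversion sketch (the paper improves on this loop
# by learning a friendlier latent space; that part is not shown here).
import torch
import torch.nn.functional as F

def invert(f, y_target, z_dim, steps=500, lr=0.1):
    """Find z such that f(z) approximates y_target by descending on z."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(f(z), y_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```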

A Tale of Two Models: Constructing Evasive Attacks on Edge Models

1 code implementation • 22 Apr 2022 • Wei Hao, Aahil Awatramani, Jiayang Hu, Chengzhi Mao, Pin-Chun Chen, Eyal Cidon, Asaf Cidon, Junfeng Yang

In this paper, we introduce a new evasive attack, DIVA, that exploits these differences in edge adaptation by adding adversarial noise to input data that maximizes the output difference between the original and adapted model.

Quantization · Vocal Bursts Valence Prediction
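A PGD-style sketch of that objective, assuming white-box access to both the original and the adapted model, might look as follows; this is not the released DIVA code, and the KL objective, step size, and budget are illustrative choices.

```python
# Illustrative PGD-style loop maximizing disagreement between an original
# model and its edge-adapted copy (not the released DIVA code). Both models
# are assumed frozen and in eval mode; eps/alpha/steps are arbitrary here.
import torch
import torch.nn.functional as F

def evasive_noise(x, original, adapted, eps=8/255, alpha=2/255, steps=20):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        out_orig = original(x + delta)
        out_edge = adapted(x + delta)
        # Ascend on the divergence between the two models' predictions.
        loss = F.kl_div(F.log_softmax(out_edge, dim=1),
                        F.softmax(out_orig, dim=1),
                        reduction="batchmean")
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)  # stay within the perturbation budget
        delta.grad = None
    return (x + delta).clamp(0, 1).detach()
```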

Using Multiple Self-Supervised Tasks Improves Model Robustness

1 code implementation • 7 Apr 2022 • Matthew Lawhon, Chengzhi Mao, Junfeng Yang

In this paper, we propose a novel defense that can dynamically adapt the input using the intrinsic structure from multiple self-supervised tasks.
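Concretely, test-time input adaptation of this kind can be pictured as optimizing a small correction on the input so that several self-supervised losses drop, while the model weights stay frozen. The sketch below illustrates that shape; it is not the paper's released code, and the head interface and hyperparameters are assumptions.

```python
# Illustrative test-time input adaptation (not the paper's released code).
# Assumes a frozen backbone and auxiliary self-supervised heads trained
# alongside it; each head maps features to a scalar SSL loss.
import torch

def purify(x, backbone, ssl_heads, eps=8/255, lr=1e-2, steps=10):
    r = torch.zeros_like(x, requires_grad=True)  # correction on the input
    opt = torch.optim.SGD([r], lr=lr)
    for _ in range(steps):
        feats = backbone(torch.clamp(x + r, 0, 1))
        loss = sum(head(feats) for head in ssl_heads)  # joint SSL objective
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            r.clamp_(-eps, eps)  # keep the correction small
    return torch.clamp(x + r, 0, 1).detach()
```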

Real-Time Neural Voice Camouflage

no code implementations • ICLR 2022 • Mia Chiquier, Chengzhi Mao, Carl Vondrick

Automatic speech recognition systems have created exciting possibilities for applications; however, they also enable opportunities for systematic eavesdropping.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +1

Adversarial Attacks are Reversible with Natural Supervision

1 code implementation • ICCV 2021 • Chengzhi Mao, Mia Chiquier, Hao Wang, Junfeng Yang, Carl Vondrick

We find that images contain intrinsic structure that enables the reversal of many adversarial attacks.

Generative Interventions for Causal Learning

1 code implementation • CVPR 2021 • Chengzhi Mao, Augustine Cha, Amogh Gupta, Hao Wang, Junfeng Yang, Carl Vondrick

We introduce a framework for learning robust visual representations that generalize to new viewpoints, backgrounds, and scene contexts.

Ranked #44 on Image Classification on ObjectNet (using extra training data)

Image Classification · Out-of-Distribution Generalization

Multitask Learning Strengthens Adversarial Robustness

1 code implementation • ECCV 2020 • Chengzhi Mao, Amogh Gupta, Vikram Nitin, Baishakhi Ray, Shuran Song, Junfeng Yang, Carl Vondrick

Although deep networks achieve strong accuracy on a range of computer vision benchmarks, they remain vulnerable to adversarial attacks, where imperceptible input perturbations fool the network.

Adversarial Defense · Adversarial Robustness

Live Trojan Attacks on Deep Neural Networks

1 code implementation • 22 Apr 2020 • Robby Costales, Chengzhi Mao, Raphael Norwitz, Bryan Kim, Junfeng Yang

We propose a live attack on deep learning systems that patches model parameters in memory to achieve predefined malicious behavior on a certain set of inputs.

AdvSPADE: Realistic Unrestricted Attacks for Semantic Segmentation

no code implementations • 6 Oct 2019 • Guangyu Shen, Chengzhi Mao, Junfeng Yang, Baishakhi Ray

Due to the inherent robustness of segmentation models, traditional norm-bounded attack methods show limited effect on such models.

Adversarial Attack · Segmentation +1

Bidirectional Inference Networks: A Class of Deep Bayesian Networks for Health Profiling

no code implementations • 6 Feb 2019 • Hao Wang, Chengzhi Mao, Hao He, Ming-Min Zhao, Tommi S. Jaakkola, Dina Katabi

We consider the problem of inferring the values of an arbitrary set of variables (e.g., risk of diseases) given other observed variables (e.g., symptoms and diagnosed diseases) and high-dimensional signals (e.g., MRI images or EEG).

Computational Efficiency · EEG +2
