Search Results for author: Yihan Wang

Found 28 papers, 12 papers with code

On Lp-norm Robustness of Ensemble Decision Stumps and Trees

no code implementations ICML 2020 Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

In this paper, we study the robustness verification and defense with respect to general $\ell_p$ norm perturbation for ensemble trees and stumps.
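
For reference, a generic statement of the verification problem (the notation here is illustrative, not necessarily the paper's): given a classifier $f$, an input $x$, and a budget $\epsilon$, verification asks whether $f(x') = f(x)$ for every $x'$ with $\|x' - x\|_p \le \epsilon$, i.e., whether the minimal adversarial perturbation $r^*(x) = \min\{\|x' - x\|_p : f(x') \ne f(x)\}$ exceeds $\epsilon$.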

Efficient dual-scale generalized Radon-Fourier transform detector family for long time coherent integration

no code implementations 11 Mar 2024 Suqi Li, Yihan Wang, Bailu Wang, Giorgio Battistelli, Luigi Chisci, Guolong Cui

Since RM and DFM are induced by the same motion parameters, existing approaches such as the generalized Radon-Fourier transform (GRFT) or the keystone transform (KT)-matching filter process (MFP) adopt the same search space for the motion parameters in order to eliminate both effects, thus leading to large redundancy in computation.

Computational Efficiency

Defending LLMs against Jailbreaking Attacks via Backtranslation

1 code implementation 26 Feb 2024 Yihan Wang, Zhouxing Shi, Andrew Bai, Cho-Jui Hsieh

The inferred prompt is called the backtranslated prompt, which tends to reveal the actual intent of the original prompt, since it is generated from the LLM's response and is not directly manipulated by the attacker.

Language Modelling
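
Below is a minimal sketch of the defense loop described above, assuming hypothetical helpers generate, infer_prompt, and is_refusal that stand in for the actual model calls; it illustrates the idea rather than the authors' reference implementation.

    def is_refusal(text: str) -> bool:
        # Placeholder refusal check; a real defense would use a more careful test.
        return text.strip().lower().startswith(("i'm sorry", "i cannot", "i can't"))

    def defended_generate(model, user_prompt: str, generate, infer_prompt) -> str:
        """generate(model, prompt) queries the LLM; infer_prompt(model, response)
        asks an LLM to guess the prompt that produced the response (both are
        assumed helper functions, not a specific API)."""
        response = generate(model, user_prompt)
        if is_refusal(response):
            return response                           # model already refused
        back_prompt = infer_prompt(model, response)   # the backtranslated prompt
        if is_refusal(generate(model, back_prompt)):
            # The model refuses the intent-revealing backtranslated prompt,
            # so the original response is withheld as likely unsafe.
            return "I'm sorry, but I can't help with that."
        return response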

Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously

no code implementations 6 Feb 2024 Yihan Wang, Yifan Zhu, Xiao-Shan Gao

Availability attacks can prevent the unauthorized use of private data and commercial datasets by generating imperceptible noise and creating unlearnable examples before release.

Contrastive Learning

Game-Theoretic Unlearnable Example Generator

1 code implementation 31 Jan 2024 Shuang Liu, Yihan Wang, Xiao-Shan Gao

Unlearnable example attacks are data poisoning attacks aiming to degrade the clean test accuracy of deep learning by adding imperceptible perturbations to the training samples, which can be formulated as a bi-level optimization problem.

Data Poisoning
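
For context, the bi-level problem mentioned above typically takes the following generic form (illustrative notation, not the paper's exact formulation): the attacker chooses bounded perturbations $\delta_i$ with $\|\delta_i\|_\infty \le \epsilon$ to maximize the clean test loss $\mathcal{L}_{\mathrm{test}}(\theta^*(\delta))$, subject to the inner problem $\theta^*(\delta) \in \arg\min_\theta \sum_i \mathcal{L}(f_\theta(x_i + \delta_i), y_i)$, i.e., ordinary training on the poisoned data.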

Data-Dependent Stability Analysis of Adversarial Training

no code implementations 6 Jan 2024 Yihan Wang, Shuang Liu, Xiao-Shan Gao

Stability analysis is an essential aspect of studying the generalization ability of deep learning, as it involves deriving generalization bounds for stochastic gradient descent-based training algorithms.

Data Poisoning Generalization Bounds
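
As background (a classical stability result, not the bound derived in the paper): if a randomized algorithm $A$ is uniformly stable with parameter $\beta$, its expected generalization gap satisfies $\mathbb{E}_{S,A}[R(A(S)) - \hat{R}_S(A(S))] \le \beta$, where $R$ is the population risk and $\hat{R}_S$ the empirical risk on the training set $S$; stability analyses of SGD-based training proceed by bounding $\beta$ for the training algorithm at hand.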

Improving the Generation Quality of Watermarked Large Language Models via Word Importance Scoring

no code implementations 16 Nov 2023 Yuhang Li, Yihan Wang, Zhouxing Shi, Cho-Jui Hsieh

In this work, we propose to improve the quality of texts generated by a watermarked language model by Watermarking with Importance Scoring (WIS).

Language Modelling

MVFAN: Multi-View Feature Assisted Network for 4D Radar Object Detection

no code implementations 25 Oct 2023 Qiao Yan, Yihan Wang

Additionally, we propose a pioneering backbone, the Radar Feature Assisted backbone, explicitly crafted to fully exploit the valuable Doppler velocity and reflectivity data provided by the 4D radar sensor.

3D Object Detection Autonomous Driving +2

ThermRad: A Multi-modal Dataset for Robust 3D Object Detection under Challenging Conditions

no code implementations 20 Aug 2023 Qiao Yan, Yihan Wang

To validate the robustness of 4D radars and thermal cameras for 3D object detection in challenging weather conditions, we propose a new multi-modal fusion method called RTDF-RCNN, which leverages the complementary strengths of 4D radars and thermal cameras to boost object detection performance.

Object Detection +1

Restore Translation Using Equivariant Neural Networks

no code implementations 29 Jun 2023 Yihan Wang, Lijia Yu, Xiao-Shan Gao

Invariance to spatial transformations such as translations and rotations is a desirable property and a basic design principle for classification neural networks.

Translation

Red Teaming Language Model Detectors with Language Models

2 code implementations 31 May 2023 Zhouxing Shi, Yihan Wang, Fan Yin, Xiangning Chen, Kai-Wei Chang, Cho-Jui Hsieh

The prevalence and strong capability of large language models (LLMs) present significant safety and ethical risks if exploited by malicious users.

Adversarial Robustness Language Modelling +2

Democratizing Pathological Image Segmentation with Lay Annotators via Molecular-empowered Learning

1 code implementation 31 May 2023 Ruining Deng, Yanwei Li, Peize Li, Jiacheng Wang, Lucas W. Remedios, Saydolimkhon Agzamkhodjaev, Zuhayr Asad, Quan Liu, Can Cui, Yaohong Wang, Yihan Wang, Yucheng Tang, Haichun Yang, Yuankai Huo

The contribution of this paper is threefold: (1) We proposed a molecular-empowered learning scheme for multi-class cell segmentation using partial labels from lay annotators; (2) The proposed method integrated Giga-pixel level molecular-morphology cross-modality registration, molecular-informed annotation, and a molecular-oriented segmentation model, so as to achieve significantly superior performance via 3 lay annotators as compared with 2 experienced pathologists; (3) A deep corrective learning (learning with imperfect labels) method is proposed to further improve the segmentation performance using partially annotated noisy data.

Cell Segmentation Image Segmentation +3

Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation

2 code implementations 13 Oct 2022 Zhouxing Shi, Yihan Wang, Huan Zhang, Zico Kolter, Cho-Jui Hsieh

In this paper, we develop an efficient framework for computing the $\ell_\infty$ local Lipschitz constant of a neural network by tightly upper bounding the norm of Clarke Jacobian via linear bound propagation.

Fairness
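
For reference (generic notation, not the paper's exact statement): for a scalar-valued network $f$ and $\ell_\infty$ input perturbations, the local Lipschitz constant over the ball $B_\infty(x_0, \epsilon)$ is $L_\epsilon(x_0) = \sup_{x \in B_\infty(x_0,\epsilon)} \|\nabla f(x)\|_1$, since $\ell_1$ is the dual norm of $\ell_\infty$; the framework above upper-bounds this supremum by propagating linear bounds through the network's (Clarke) Jacobian.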

Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation

1 code implementation CVPR 2022 Yihan Wang, Muyang Li, Han Cai, Wei-Ming Chen, Song Han

Inspired by this finding, we design LitePose, an efficient single-branch architecture for pose estimation, and introduce two simple approaches to enhance the capacity of LitePose, including Fusion Deconv Head and Large Kernel Convs.

Ranked #5 on Multi-Person Pose Estimation on MS COCO (Validation AP metric)

2D Human Pose Estimation Multi-Person Pose Estimation

Adversarial Parameter Attack on Deep Neural Networks

no code implementations 20 Mar 2022 Lijia Yu, Yihan Wang, Xiao-Shan Gao

In this paper, a new parameter perturbation attack on DNNs, called the adversarial parameter attack, is proposed, in which small perturbations are made to the parameters of the DNN such that the accuracy of the attacked DNN does not decrease much, but its robustness becomes much lower.

On the Convergence of Certified Robust Training with Interval Bound Propagation

no code implementations ICLR 2022 Yihan Wang, Zhouxing Shi, Quanquan Gu, Cho-Jui Hsieh

Interval Bound Propagation (IBP) is so far the basis of state-of-the-art methods for training neural networks with certifiable robustness guarantees when potential adversarial perturbations are present, while the convergence of IBP training remains unknown in the existing literature.
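
The IBP mechanism itself is simple; below is a minimal NumPy sketch for one affine layer followed by a ReLU (an illustration of interval bound propagation in general, not the paper's training code).

    import numpy as np

    def ibp_affine_relu(W, b, lower, upper):
        """Propagate elementwise bounds [lower, upper] through relu(W @ x + b)."""
        center = (upper + lower) / 2.0        # midpoint of the input box
        radius = (upper - lower) / 2.0        # half-width of the input box
        out_center = W @ center + b
        out_radius = np.abs(W) @ radius       # |W| maps the box radius
        out_lower = out_center - out_radius
        out_upper = out_center + out_radius
        # ReLU is monotone, so it can be applied to both endpoints directly.
        return np.maximum(out_lower, 0.0), np.maximum(out_upper, 0.0)

    # Example: an input x0 with l_inf radius eps.
    x0, eps = np.array([0.5, -0.2]), 0.1
    W, b = np.array([[1.0, -2.0], [0.5, 1.5]]), np.array([0.1, -0.3])
    lb, ub = ibp_affine_relu(W, b, x0 - eps, x0 + eps)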

A Branch and Bound Framework for Stronger Adversarial Attacks of ReLU Networks

no code implementations 29 Sep 2021 Huan Zhang, Shiqi Wang, Kaidi Xu, Yihan Wang, Suman Jana, Cho-Jui Hsieh, J Zico Kolter

In this work, we formulate an adversarial attack using a branch-and-bound (BaB) procedure on ReLU neural networks and search adversarial examples in the activation space corresponding to binary variables in a mixed integer programming (MIP) formulation.

Adversarial Attack
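
For background on the binary variables referred to above: an unstable ReLU $z = \max(0, \hat{z})$ with pre-activation bounds $l < 0 < u$ admits the standard mixed-integer encoding $z \ge 0$, $z \ge \hat{z}$, $z \le u\,a$, $z \le \hat{z} - l\,(1 - a)$ with $a \in \{0, 1\}$ indicating whether the neuron is active (a generic encoding, not necessarily the exact one used in the paper); branch and bound then searches over assignments of these binary variables, i.e., over the activation space.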

Interactive Plot Manipulation using Natural Language

no code implementations NAACL 2021 Yihan Wang, Yutong Shao, Ndapa Nakashole

This plotting model, while accurate in most cases, still makes errors; therefore, the system allows a feedback mode, wherein the user is presented with a top-k list of plots, from which they can pick the desired one.

Fast Certified Robust Training with Short Warmup

2 code implementations NeurIPS 2021 Zhouxing Shi, Yihan Wang, Huan Zhang, JinFeng Yi, Cho-Jui Hsieh

Although state-of-the-art (SOTA) methods, including interval bound propagation (IBP) and CROWN-IBP, have per-batch training complexity similar to standard neural network training, they usually use a long warmup schedule with hundreds or thousands of epochs to reach SOTA performance and are thus still costly.

Adversarial Defense

DOP: Off-Policy Multi-Agent Decomposed Policy Gradients

no code implementations ICLR 2021 Yihan Wang, Beining Han, Tonghan Wang, Heng Dong, Chongjie Zhang

In this paper, we investigate causes that hinder the performance of MAPG algorithms and present a multi-agent decomposed policy gradient method (DOP).

Multi-agent Reinforcement Learning Starcraft +1

On $\ell_p$-norm Robustness of Ensemble Stumps and Trees

1 code implementation 20 Aug 2020 Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

In this paper, we study the problem of robustness verification and certified defense with respect to general $\ell_p$ norm perturbations for ensemble decision stumps and trees.

Off-Policy Multi-Agent Decomposed Policy Gradients

1 code implementation 24 Jul 2020 Yihan Wang, Beining Han, Tonghan Wang, Heng Dong, Chongjie Zhang

In this paper, we investigate causes that hinder the performance of MAPG algorithms and present a multi-agent decomposed policy gradient method (DOP).

Multi-agent Reinforcement Learning Starcraft +1

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

5 code implementations NeurIPS 2020 Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh

Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense.

Quantization
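
In generic notation (not the paper's exact statement): for all inputs $x$ in a perturbation set such as $\{x : \|x - x_0\|_p \le \epsilon\}$, LiRPA-style methods compute linear functions that sandwich each output neuron, $\underline{A}x + \underline{b} \le f(x) \le \overline{A}x + \overline{b}$, and then concretize these into scalar bounds by optimizing each linear function over the perturbation set, which for $\ell_p$ balls reduces to a Hölder-inequality computation.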
