Search Results for author: Yiwen Guo

Found 48 papers, 25 papers with code

UniTSyn: A Large-Scale Dataset Capable of Enhancing the Prowess of Large Language Models for Program Testing

no code implementations • 4 Feb 2024 • Yifeng He, Jiabo Huang, Yuyang Rong, Yiwen Guo, Ethan Wang, Hao Chen

The remarkable capability of large language models (LLMs) in generating high-quality code has drawn increasing attention in the software testing community.

FILP-3D: Enhancing 3D Few-shot Class-incremental Learning with Pre-trained Vision-Language Models

1 code implementation • 28 Dec 2023 • Wan Xu, Tianyu Huang, Tianyu Qu, Guanglei Yang, Yiwen Guo, WangMeng Zuo

Few-shot class-incremental learning (FSCIL) aims to mitigate the catastrophic forgetting issue when a model is incrementally trained on limited data.

Dimensionality Reduction • Few-Shot Class-Incremental Learning • +2

Black-Box Tuning of Vision-Language Models with Effective Gradient Approximation

1 code implementation • 26 Dec 2023 • Zixian Guo, Yuxiang Wei, Ming Liu, Zhilong Ji, Jinfeng Bai, Yiwen Guo, WangMeng Zuo

Parameter-efficient fine-tuning (PEFT) methods have provided an effective way for adapting large vision-language models to specific tasks or scenarios.
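
Black-box here means no gradients from the model are available, so tuning has to rely on zeroth-order estimates built from function evaluations only. As a rough illustration of the general idea (not this paper's specific approximation scheme), a two-point random-direction estimator looks like this:

```python
import numpy as np

def two_point_grad(f, x, eps=1e-3, n_dirs=64, rng=None):
    """Zeroth-order gradient estimate of f at x from 2*n_dirs function
    evaluations; no backpropagation through f is needed."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)                     # random unit direction
        g += x.size * (f(x + eps * u) - f(x - eps * u)) / (2 * eps) * u
    return g / n_dirs

# Sanity check on f(x) = ||x||^2, whose true gradient is 2x.
x = np.array([1.0, -2.0, 0.5])
print(two_point_grad(lambda v: float(v @ v), x, n_dirs=2000))  # ~ [2, -4, 1]
```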

Adversarial Examples Are Not Real Features

1 code implementation • NeurIPS 2023 • Ang Li, Yifei Wang, Yiwen Guo, Yisen Wang

A well-known theory by Ilyas et al. (2019) explains adversarial vulnerability from a data perspective by showing that one can extract non-robust features from adversarial examples and these features alone are useful for classification.

Contrastive Learning • Self-Supervised Learning

Learning with Noisy Labels Using Collaborative Sample Selection and Contrastive Semi-Supervised Learning

no code implementations • 24 Oct 2023 • Qing Miao, Xiaohe Wu, Chao Xu, Yanli Ji, WangMeng Zuo, Yiwen Guo, Zhaopeng Meng

By incorporating auxiliary information from CLIP and utilizing prompt fine-tuning, we effectively eliminate noisy samples from the clean set and mitigate confirmation bias during training.

Learning with noisy labels

DualAug: Exploiting Additional Heavy Augmentation with OOD Data Rejection

1 code implementation • 12 Oct 2023 • Zehao Wang, Yiwen Guo, Qizhang Li, Guanglei Yang, WangMeng Zuo

Most existing data augmentation methods tend to strike a compromise in augmenting the data, i.e., increasing the amplitude of augmentation carefully so as not to degrade some data too much and harm model performance.

Data Augmentation • Image Classification • +1
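
Reading the title together with this snippet, the recipe is: keep a heavy augmentation branch for its extra diversity, but reject heavily augmented samples that look out-of-distribution and fall back to basic augmentation for those. A loose sketch of that control flow, with a model-confidence rejection rule assumed purely for illustration (it is not necessarily the paper's exact criterion):

```python
import numpy as np

def dual_augment_batch(batch, basic_aug, heavy_aug, confidence_fn, tau=0.2):
    """Per sample: try the heavy augmentation first; if the confidence
    score on the result falls below tau (treated here as a proxy for
    being out-of-distribution), fall back to the basic augmentation."""
    out = []
    for x in batch:
        x_heavy = heavy_aug(x)
        out.append(x_heavy if confidence_fn(x_heavy) >= tau else basic_aug(x))
    return np.stack(out)

# Toy usage with additive-noise "augmentations" and a stand-in confidence.
rng = np.random.default_rng(0)
batch = [rng.standard_normal(8) for _ in range(4)]
basic = lambda x: x + 0.05 * rng.standard_normal(x.shape)
heavy = lambda x: x + 0.5 * rng.standard_normal(x.shape)
conf = lambda x: float(np.exp(-np.linalg.norm(x) / 10.0))
print(dual_augment_batch(batch, basic, heavy, conf).shape)  # (4, 8)
```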

Code Representation Pre-training with Complements from Program Executions

no code implementations • 4 Sep 2023 • Jiabo Huang, Jianyu Zhao, Yuyang Rong, Yiwen Guo, Yifeng He, Hao Chen

The test cases are obtained with the assistance of a customized fuzzer and are only required during pre-training.

Code Search • Language Modelling

Ref-Diff: Zero-shot Referring Image Segmentation with Generative Models

no code implementations • 31 Aug 2023 • Minheng Ni, Yabo Zhang, Kailai Feng, Xiaoming Li, Yiwen Guo, WangMeng Zuo

In this work, we introduce a novel Referring Diffusional segmentor (Ref-Diff) for this task, which leverages the fine-grained multi-modal information from generative models.

Image Segmentation • Instance Segmentation • +2

Improving Transferability of Adversarial Examples via Bayesian Attacks

no code implementations • 21 Jul 2023 • Qizhang Li, Yiwen Guo, Xiaochen Yang, WangMeng Zuo, Hao Chen

Our ICLR work advocated enhancing the transferability of adversarial examples by incorporating a Bayesian formulation into the model parameters, which effectively emulates an ensemble of infinitely many deep neural networks. In this paper, we introduce a novel extension that incorporates the Bayesian formulation into the model input as well, enabling the joint diversification of both the model input and the model parameters.
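
Concretely, rather than attacking one fixed substitute, every attack iteration can draw fresh Gaussian samples around both the substitute's weights and the current input and average the resulting input-gradients. A toy numpy sketch on a linear substitute with logistic loss (the noise scales are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def attack_gradient(w, x, y, sigma_w=0.1, sigma_x=0.05, n_samples=8):
    """Average the input-gradient of the logistic loss over Gaussian
    samples drawn around BOTH the substitute weights and the input
    (joint diversification)."""
    g = np.zeros_like(x)
    for _ in range(n_samples):
        w_s = w + sigma_w * rng.standard_normal(w.shape)  # parameter sample
        x_s = x + sigma_x * rng.standard_normal(x.shape)  # input sample
        margin = y * np.dot(w_s, x_s)
        g += -y * w_s / (1.0 + np.exp(margin))            # d(loss)/d(input)
    return g / n_samples

# One FGSM/PGD-like step on a toy linear substitute.
w, x, y, eps = rng.standard_normal(16), rng.standard_normal(16), 1.0, 0.1
x_adv = x + eps * np.sign(attack_gradient(w, x, y))
```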

Understanding Programs by Exploiting (Fuzzing) Test Cases

1 code implementation • 23 May 2023 • Jianyu Zhao, Yuyang Rong, Yiwen Guo, Yifeng He, Hao Chen

The effectiveness of the proposed method is verified on two program understanding tasks, code clone detection and code classification, and it outperforms the current state of the art by large margins.

Clone Detection • Code Classification • +2

Improving Adversarial Transferability via Intermediate-level Perturbation Decay

2 code implementations • NeurIPS 2023 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen

In particular, the proposed method, named intermediate-level perturbation decay (ILPD), encourages the intermediate-level perturbation to be in an effective adversarial direction and to possess a great magnitude simultaneously.

CFA: Class-wise Calibrated Fair Adversarial Training

1 code implementation • CVPR 2023 • Zeming Wei, Yifei Wang, Yiwen Guo, Yisen Wang

Adversarial training has been widely acknowledged as the most effective method for improving the robustness of deep neural networks (DNNs) against adversarial examples.

Adversarial Robustness • Fairness

Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples

1 code implementation • 10 Feb 2023 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen

In this paper, by contrast, we opt for the diversity in substitute models and advocate to attack a Bayesian model for achieving desirable transferability.

MHCN: A Hyperbolic Neural Network Model for Multi-view Hierarchical Clustering

no code implementations • ICCV 2023 • Fangfei Lin, Bing Bai, Yiwen Guo, Hao Chen, Yazhou Ren, Zenglin Xu

Multi-view hierarchical clustering (MVHC) plays a pivotal role in comprehending the structures within multi-view data, and it hinges on the skillful interaction between hierarchical feature learning and comprehensive representation learning across multiple views.

Clustering • Multi-view Learning • +1

When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture

1 code implementation • 14 Oct 2022 • Yichuan Mo, Dongxian Wu, Yifei Wang, Yiwen Guo, Yisen Wang

We find that randomly masking gradients from some attention blocks, or masking perturbations on some patches, during adversarial training can remarkably improve the adversarial robustness of ViTs, which may open up a line of work exploring the architectural information inside newly designed models like ViTs.

Adversarial Robustness
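
The perturbation-masking ingredient is simple to sketch in isolation: while generating adversarial examples during training, the perturbation on a random subset of patches is zeroed out. A numpy illustration with placeholder patch size and keep probability:

```python
import numpy as np

def mask_patch_perturbation(delta, patch=16, keep_prob=0.7, rng=None):
    """Zero out the adversarial perturbation on a random subset of
    non-overlapping patches (delta: HxWxC perturbation tensor)."""
    rng = rng or np.random.default_rng(0)
    h, w = delta.shape[:2]
    keep = rng.random((h // patch, w // patch)) < keep_prob   # per-patch mask
    mask = np.kron(keep, np.ones((patch, patch)))[..., None]  # upsample to pixels
    return delta * mask

delta = np.random.default_rng(1).standard_normal((224, 224, 3))
print(mask_patch_perturbation(delta).shape)  # (224, 224, 3)
```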

Squeeze Training for Adversarial Robustness

1 code implementation • 23 May 2022 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen

The vulnerability of deep neural networks (DNNs) to adversarial examples has attracted great attention in the machine learning community.

Adversarial Robustness

An Intermediate-level Attack Framework on The Basis of Linear Regression

1 code implementation • 21 Mar 2022 • Yiwen Guo, Qizhang Li, WangMeng Zuo, Hao Chen

This paper substantially extends our work published at ECCV, in which an intermediate-level attack was proposed to improve the transferability of some baseline adversarial examples.

regression

On Steering Multi-Annotations per Sample for Multi-Task Learning

no code implementations • 6 Mar 2022 • Yuanze Li, Yiwen Guo, Qizhang Li, Hongzhi Zhang, WangMeng Zuo

Despite remarkable progress, the challenge of optimally learning different tasks simultaneously remains largely unexplored.

Instance Segmentation • Multi-Task Learning • +2

A Theoretical View of Linear Backpropagation and Its Convergence

1 code implementation • 21 Dec 2021 • Ziang Li, Yiwen Guo, Haodi Liu, ChangShui Zhang

This paper serves as a complement to, and in some respects an extension of, Guo et al.'s paper, providing theoretical analyses of LinBP in neural-network-involved learning tasks, including adversarial attack and model training.

Adversarial Attack

Learned ISTA with Error-based Thresholding for Adaptive Sparse Coding

no code implementations • 21 Dec 2021 • Ziang Li, Kailun Wu, Yiwen Guo, ChangShui Zhang

Drawing on theoretical insights, we advocate an error-based thresholding (EBT) mechanism for learned ISTA (LISTA), which utilizes a function of the layer-wise reconstruction error to suggest a specific threshold for each observation in the shrinkage function of each layer.
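
Each (L)ISTA layer applies a gradient step followed by soft-thresholding; EBT, as described here, replaces the fixed learned threshold with a function of the layer-wise reconstruction error. A minimal numpy sketch that assumes a simple proportional form for that function (the constant c stands in for a learned parameter):

```python
import numpy as np

def soft(z, theta):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def ista_ebt(A, y, n_layers=20, c=0.05):
    """ISTA-style iterations where each layer's threshold is proportional
    to the current reconstruction error ||A x - y||."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        err = np.linalg.norm(A @ x - y)    # error-based threshold input
        x = soft(x - (A.T @ (A @ x - y)) / L, c * err / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))
x_true = np.zeros(64); x_true[:4] = rng.standard_normal(4)
print(np.round(ista_ebt(A, A @ x_true), 2)[:8])  # sparse recovery
```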

Membership Inference Attack in Face of Data Transformations

no code implementations • 29 Sep 2021 • Jiyu Chen, Yiwen Guo, Hao Chen

We demonstrate the effectiveness of our attacks through extensive evaluations on multiple common data transformations and comparisons with other state-of-the-art attacks.

Inference Attack • Membership Inference Attack

Linear Backpropagation Leads to Faster Convergence

no code implementations • 29 Sep 2021 • Ziang Li, Yiwen Guo, Haodi Liu, ChangShui Zhang

In this paper, we study the very recent method called "linear backpropagation" (LinBP), which modifies the standard backpropagation and can improve transferability in black-box adversarial attacks.

Adversarial Attack
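
LinBP keeps the forward pass intact but backpropagates through (some) ReLUs as if they were identity maps, i.e., their derivative gates are skipped. A two-layer numpy illustration of the resulting input-gradient:

```python
import numpy as np

def input_grad(W1, W2, x, linbp=False):
    """Gradient of the scalar output W2 @ relu(W1 @ x) w.r.t. x.
    Standard backprop gates the gradient by relu'(pre-activation);
    LinBP replaces that gate with the identity."""
    pre = W1 @ x
    gate = np.ones_like(pre) if linbp else (pre > 0).astype(float)
    return W1.T @ (gate * W2)

rng = np.random.default_rng(0)
W1, W2, x = rng.standard_normal((8, 4)), rng.standard_normal(8), rng.standard_normal(4)
print(input_grad(W1, W2, x))              # standard backprop
print(input_grad(W1, W2, x, linbp=True))  # linear backprop (LinBP)
```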

Robust and Fully-Dynamic Coreset for Continuous-and-Bounded Learning (With Outliers) Problems

no code implementations • NeurIPS 2021 • Zixiu Wang, Yiwen Guo, Hu Ding

In this paper, we propose a novel robust coreset method for continuous-and-bounded learning problems (with outliers), a class that includes a broad range of popular optimization objectives in machine learning, e.g., logistic regression and k-means clustering.

BIG-bench Machine Learning

Deepfake Forensics via An Adversarial Game

1 code implementation • 25 Mar 2021 • Zhi Wang, Yiwen Guo, WangMeng Zuo

In this paper, we advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.

Classification • DeepFake Detection • +2

Recent Advances in Large Margin Learning

no code implementations • 25 Mar 2021 • Yiwen Guo, ChangShui Zhang

This paper serves as a survey of recent advances in large margin training and its theoretical foundations, mostly for (nonlinear) deep neural networks (DNNs) that are probably the most prominent machine learning models for large-scale data in the community over the past decade.

Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples

no code implementations • ICLR 2021 • Ziang Yan, Yiwen Guo, Jian Liang, ChangShui Zhang

To craft black-box adversarial examples, adversaries need to query the victim model and take proper advantage of its feedback.

Image Classification

Backpropagating Linearly Improves Transferability of Adversarial Examples

1 code implementation • NeurIPS 2020 • Yiwen Guo, Qizhang Li, Hao Chen

The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community.

Practical No-box Adversarial Attacks against DNNs

2 code implementations • NeurIPS 2020 • Qizhang Li, Yiwen Guo, Hao Chen

We propose three mechanisms for training with a very small dataset (on the order of tens of examples) and find that prototypical reconstruction is the most effective.

Face Verification • Image Classification

Yet Another Intermediate-Level Attack

2 code implementations • ECCV 2020 • Qizhang Li, Yiwen Guo, Hao Chen

The transferability of adversarial examples across deep neural network (DNN) models is the crux of a spectrum of black-box attacks.

On Connections between Regularizations for Improving DNN Robustness

no code implementations • 4 Jul 2020 • Yiwen Guo, Long Chen, Yurong Chen, Chang-Shui Zhang

This paper analyzes regularization terms proposed recently for improving the adversarial robustness of deep neural networks (DNNs), from a theoretical point of view.

Adversarial Robustness • BIG-bench Machine Learning • +1

Towards Certified Robustness of Distance Metric Learning

1 code implementation • 10 Jun 2020 • Xiaochen Yang, Yiwen Guo, Mingzhi Dong, Jing-Hao Xue

Many existing methods consider maximizing or at least constraining a distance margin in the feature space that separates similar and dissimilar pairs of instances to guarantee their generalization ability.

Metric Learning
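
The classical instance of such a margin constraint is the pairwise contrastive loss: similar pairs are pulled together, while dissimilar pairs are pushed apart until their distance exceeds a margin m. A standard-formulation sketch (not this paper's certified variant):

```python
import numpy as np

def contrastive_loss(z1, z2, similar, m=1.0):
    """Pairwise margin loss: d^2 for similar pairs, max(0, m - d)^2 for
    dissimilar pairs, so a dissimilar pair stops contributing once it is
    separated by at least the margin m."""
    d = np.linalg.norm(z1 - z2)
    return d ** 2 if similar else max(0.0, m - d) ** 2

print(contrastive_loss(np.zeros(3), 0.1 * np.ones(3), similar=True))   # small
print(contrastive_loss(np.zeros(3), 0.1 * np.ones(3), similar=False))  # large
```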

Sparse Coding with Gated Learned ISTA

1 code implementation • ICLR 2020 • Kailun Wu, Yiwen Guo, Ziang Li, Chang-Shui Zhang

In this paper, we study the learned iterative shrinkage thresholding algorithm (LISTA) for solving sparse coding problems.
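
One ISTA iteration for min_x ½||Ax − y||² + λ||x||₁ is a gradient step followed by soft-thresholding; LISTA unrolls a fixed number of such iterations and learns the matrices and thresholds. A numpy sketch of the unrolled forward pass, initialized at the classical ISTA values:

```python
import numpy as np

def soft(z, theta):
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def lista_forward(y, We, Wg, thetas):
    """Unrolled LISTA: x <- soft(We @ y + Wg @ x, theta_k) per layer;
    We, Wg and the per-layer thresholds are the learnable parameters."""
    x = np.zeros(Wg.shape[1])
    for theta in thetas:
        x = soft(We @ y + Wg @ x, theta)
    return x

rng = np.random.default_rng(0)
A, lam = rng.standard_normal((32, 64)), 0.1
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant
We, Wg = A.T / L, np.eye(64) - (A.T @ A) / L     # classical ISTA init
x_true = np.zeros(64); x_true[:4] = 1.0
print(np.round(lista_forward(A @ x_true, We, Wg, [lam / L] * 16)[:6], 2))
```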

Adversarial Margin Maximization Networks

1 code implementation • 14 Nov 2019 • Ziang Yan, Yiwen Guo, Chang-Shui Zhang

The tremendous recent success of deep neural networks (DNNs) has sparked a surge of interest in understanding their predictive ability.

Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks

2 code implementations • NeurIPS 2019 • Ziang Yan, Yiwen Guo, Chang-Shui Zhang

Unlike the white-box counterparts that are widely studied and readily accessible, adversarial examples in black-box settings are generally more Herculean on account of the difficulty of estimating gradients.

Adversarial Attack

Differentiable Architecture Search with Ensemble Gumbel-Softmax

no code implementations • 6 May 2019 • Jianlong Chang, Xinbang Zhang, Yiwen Guo, Gaofeng Meng, Shiming Xiang, Chunhong Pan

For network architecture search (NAS), it is crucial but challenging to simultaneously guarantee both effectiveness and efficiency.

Neural Architecture Search

Deep Discriminative Clustering Analysis

no code implementations • 5 May 2019 • Jianlong Chang, Yiwen Guo, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, Chunhong Pan

Traditional clustering methods often perform clustering with low-level, indiscriminative representations and ignore relationships between patterns, yielding only modest gains in the era of deep learning.

Clustering

Sparse DNNs with Improved Adversarial Robustness

no code implementations • NeurIPS 2018 • Yiwen Guo, Chao Zhang, Chang-Shui Zhang, Yurong Chen

Deep neural networks (DNNs) are computationally/memory-intensive and vulnerable to adversarial attacks, making them prohibitive in some real-world applications.

Adversarial Robustness • General Classification

Deep Defense: Training DNNs with Improved Adversarial Robustness

1 code implementation • NeurIPS 2018 • Ziang Yan, Yiwen Guo, Chang-Shui Zhang

Despite the efficacy on a variety of computer vision tasks, deep neural networks (DNNs) are vulnerable to adversarial attacks, limiting their applications in security-critical systems.

Adversarial Robustness

Network Sketching: Exploiting Binary Structure in Deep CNNs

no code implementations • CVPR 2017 • Yiwen Guo, Anbang Yao, Hao Zhao, Yurong Chen

Convolutional neural networks (CNNs) with deep architectures have substantially advanced the state-of-the-art in computer vision tasks.

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights

3 code implementations • 10 Feb 2017 • Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen

The weights in the other group are responsible for compensating for the accuracy loss from the quantization, and thus they are the ones to be re-trained.

Quantization
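
The two groups referred to here work as follows: at each INQ step, a fraction of the weights is quantized (to powers of two in the paper) and frozen, while the remaining full-precision weights are re-trained to compensate, and the step repeats on ever larger fractions. A numpy sketch of one partition-and-quantize step (the 50% magnitude-based split is illustrative):

```python
import numpy as np

def quantize_pow2(w):
    """Round each weight to the nearest signed power of two (0 stays 0)."""
    out = np.zeros_like(w)
    nz = w != 0
    out[nz] = np.sign(w[nz]) * 2.0 ** np.round(np.log2(np.abs(w[nz])))
    return out

def inq_step(w, frac=0.5):
    """Quantize the largest-magnitude fraction of weights and freeze them;
    the rest stay full-precision and keep training."""
    k = int(frac * w.size)
    idx = np.argsort(np.abs(w).ravel())[::-1][:k]   # largest-magnitude group
    trainable = np.ones(w.size, dtype=bool); trainable[idx] = False
    w_q = w.ravel().copy(); w_q[idx] = quantize_pow2(w_q[idx])
    return w_q.reshape(w.shape), trainable.reshape(w.shape)

w = np.random.default_rng(0).standard_normal((4, 4))
w_q, trainable = inq_step(w)
# During re-training, apply gradient updates only where `trainable` is True.
```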

Dynamic Network Surgery for Efficient DNNs

3 code implementations • NeurIPS 2016 • Yiwen Guo, Anbang Yao, Yurong Chen

In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning.
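
The "surgery" pairs pruning with splicing: connections are masked out by magnitude, but the dense weights keep receiving gradient updates, so a connection pruned by mistake can grow back and be re-enabled. A numpy sketch of the hysteresis mask update (thresholds are placeholders):

```python
import numpy as np

def update_mask(w, mask, t_lo=0.05, t_hi=0.1):
    """Hysteresis mask update: prune weights whose magnitude falls below
    t_lo, splice back (re-enable) weights that grow above t_hi; weights
    in between keep their current mask."""
    mask = mask.copy()
    mask[np.abs(w) < t_lo] = 0.0   # pruning
    mask[np.abs(w) > t_hi] = 1.0   # splicing
    return mask

# Training uses w * mask in the forward pass but updates the dense w,
# so pruned connections can recover and be spliced back in.
rng = np.random.default_rng(0)
w = 0.2 * rng.standard_normal((4, 4))
mask = update_mask(w, np.ones_like(w))
print(mask)
```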
