Search Results for author: Jinfeng Yi

Found 18 papers, 4 papers with code

Adversarial Attack across Datasets

no code implementations · 13 Oct 2021 · Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Cho-Jui Hsieh

It has been observed that Deep Neural Networks (DNNs) are vulnerable to transfer attacks in the query-free black-box setting.

Adversarial Attack · Image Classification
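
A query-free transfer attack crafts adversarial examples entirely on a local surrogate model and only afterwards feeds them to the black-box target. The sketch below is a minimal PyTorch illustration of that setup; the `surrogate` and `target` models, the data tensors, and the epsilon budget are placeholders rather than the paper's configuration.

```python
# Minimal sketch of a query-free transfer attack: craft adversarial examples with
# one-step FGSM on a local surrogate, then evaluate them on the black-box target.
import torch
import torch.nn.functional as F

def fgsm_transfer(surrogate, target, x, y, eps=8 / 255):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x_adv), y)
    loss.backward()
    # Step along the sign of the surrogate's input gradient, staying in [0, 1].
    x_adv = torch.clamp(x + eps * x_adv.grad.sign(), 0.0, 1.0).detach()
    # The target is never queried while crafting; it only scores the finished examples.
    with torch.no_grad():
        fooled_rate = (target(x_adv).argmax(dim=1) != y).float().mean().item()
    return x_adv, fooled_rate
```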

Trustworthy AI: From Principles to Practices

no code implementations · 4 Oct 2021 · Bo Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou

In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems.

Fairness

Training Meta-Surrogate Model for Transferable Adversarial Attack

no code implementations · 5 Sep 2021 · Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Cho-Jui Hsieh

In this paper, we tackle this problem from a novel angle -- instead of using the original surrogate models, can we obtain a Meta-Surrogate Model (MSM) such that attacks on this model can be more easily transferred to other models?

Adversarial Attack
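
Conceptually, such a meta-surrogate can be obtained by differentiating through the attack itself: craft an adversarial example on the surrogate, measure how badly it fools a pool of other models, and update the surrogate to improve that transferability. The PyTorch sketch below illustrates the idea only; it replaces the usual sign step with a normalized gradient step so the crafting stays differentiable, and the names (`msm`, `model_pool`, `craft_step`) are illustrative, not the authors' algorithm.

```python
# Conceptual sketch: meta-train a surrogate so attacks crafted on it transfer to a pool
# of held-out models. A normalized gradient step keeps the attack differentiable with
# respect to the surrogate's parameters (a simplification of the actual method).
import torch
import torch.nn.functional as F

def craft_step(model, x, y, eps):
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv, create_graph=True)
    return torch.clamp(x + eps * grad / (grad.norm() + 1e-12), 0.0, 1.0)

def meta_step(msm, optimizer, model_pool, x, y, eps=8 / 255):
    x_adv = craft_step(msm, x, y, eps)
    # Update the meta-surrogate so the crafted examples also raise the pool models' loss.
    transfer_loss = -sum(F.cross_entropy(m(x_adv), y) for m in model_pool)
    optimizer.zero_grad()
    transfer_loss.backward()
    optimizer.step()
    return -transfer_loss.item()
```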

Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy

no code implementations · 25 Jun 2021 · Xinwei Zhang, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, Jinfeng Yi

Recently, there has been a line of work on incorporating the formal privacy notion of differential privacy with FL.

Federated Learning
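
The central mechanism in this line of work is to clip each client's model update to a fixed norm and add Gaussian noise calibrated to that clipping bound during aggregation, which is what yields the client-level differential privacy guarantee. Below is a minimal sketch of such clipped, noised averaging; the flat-vector update representation and the parameter names are simplifying assumptions, not the paper's algorithm.

```python
# Minimal sketch of clipped, noised federated averaging for client-level DP.
import torch

def aggregate_with_clipping(client_updates, clip_norm=1.0, noise_multiplier=1.0):
    clipped = []
    for delta in client_updates:                     # each delta: flattened update vector
        scale = min(1.0, clip_norm / (delta.norm().item() + 1e-12))
        clipped.append(delta * scale)                # bound each client's contribution
    avg = torch.stack(clipped).mean(dim=0)
    # Gaussian noise scaled to the clipping bound (and number of clients) protects
    # the participation of any single client.
    noise = torch.randn_like(avg) * noise_multiplier * clip_norm / len(client_updates)
    return avg + noise
```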

Towards Heterogeneous Clients with Elastic Federated Learning

no code implementations · 17 Jun 2021 · Zichen Ma, Yu Lu, Zihan Lu, Wenye Li, Jinfeng Yi, Shuguang Cui

Training in heterogeneous and potentially massive networks introduces bias into the system, which originates from the non-IID data and the low participation rates observed in practice.

Federated Learning

A Simple yet Universal Strategy for Online Convex Optimization

no code implementations · 8 May 2021 · Lijun Zhang, Guanghui Wang, Jinfeng Yi, Tianbao Yang

In this paper, we propose a simple strategy for universal online convex optimization, which avoids these limitations.

Fast Certified Robust Training with Short Warmup

1 code implementation · 31 Mar 2021 · Zhouxing Shi, Yihan Wang, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh

Although state-of-the-art (SOTA) methods, including interval bound propagation (IBP) and CROWN-IBP, have per-batch training complexity similar to standard neural network training, they usually rely on a long warmup schedule with hundreds or thousands of epochs to reach SOTA performance and are thus still costly.

Adversarial Defense
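
For context, IBP certifies robustness by propagating an interval around each input through the network layer by layer. The sketch below shows only that basic primitive for one linear layer followed by ReLU, with illustrative tensor names; the paper's training schedules build on top of bounds like these.

```python
# Minimal sketch of interval bound propagation (IBP) through a linear + ReLU layer.
import torch

def ibp_linear_relu(W, b, x, eps):
    center = x                                   # midpoint of the input interval
    radius = torch.full_like(x, eps)             # half-width of the input interval
    # An affine layer maps the center with W and the radius with |W|.
    out_center = center @ W.t() + b
    out_radius = radius @ W.abs().t()
    lower = torch.relu(out_center - out_radius)  # ReLU is monotone, so bounds pass through
    upper = torch.relu(out_center + out_radius)
    return lower, upper
```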

On the Adversarial Robustness of Vision Transformers

1 code implementation · 29 Mar 2021 · Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

This work provides the first comprehensive study of the robustness of vision transformers (ViTs) against adversarial perturbations.

Provable Defense Against Delusive Poisoning

no code implementations · 9 Feb 2021 · Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen

Delusive poisoning is a special kind of attack that obstructs learning, in which learning performance can be significantly degraded merely by manipulating (even slightly) the features of correctly labeled training examples.

Robust Text CAPTCHAs Using Adversarial Examples

no code implementations · 7 Jan 2021 · Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

In the second stage, we design and apply a highly transferable adversarial attack for text CAPTCHAs to better obstruct CAPTCHA solvers.

Adversarial Attack · Optical Character Recognition

Learning Contextual Perturbation Budgets for Training Robust Neural Networks

no code implementations · 1 Jan 2021 · Jing Xu, Zhouxing Shi, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Liwei Wang

We also demonstrate that the perturbation budget generator can produce semantically-meaningful budgets, which implies that the generator can capture contextual information and the sensitivity of different features in a given image.

With False Friends Like These, Who Can Notice Mistakes?

no code implementations · 29 Dec 2020 · Lue Tao, Lei Feng, Jinfeng Yi, Songcan Chen

In this paper, we unveil the threat of hypocritical examples -- inputs that are originally misclassified yet perturbed by a false friend to force correct predictions.
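
A hypocritical perturbation is, in effect, the mirror image of a standard adversarial one: instead of ascending the loss to break a correct prediction, the attacker descends the loss on the true label so that a mistake is masked as a success. The PGD-style sketch below illustrates that objective; the step sizes and names are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch of crafting a hypocritical example: start from an input the model
# misclassifies and take signed gradient-descent steps on the true-label loss.
import torch
import torch.nn.functional as F

def hypocritical_perturb(model, x, y_true, eps=8 / 255, steps=10):
    x_adv = x.clone().detach()
    alpha = eps / steps
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_true)
        grad, = torch.autograd.grad(loss, x_adv)
        # Descend the loss (the opposite of a standard attack) within the eps-ball.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv
```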

On the Limitations of Denoising Strategies as Adversarial Defenses

no code implementations · 17 Dec 2020 · Zhonghan Niu, Zhaoxi Chen, Linyi Li, Yubin Yang, Bo Li, Jinfeng Yi

Surprisingly, our experimental results show that even if most of the perturbations in each dimension are eliminated, it is still difficult to obtain satisfactory robustness.

Denoising

Model-Agnostic Counterfactual Reasoning for Eliminating Popularity Bias in Recommender System

1 code implementation · 29 Oct 2020 · Tianxin Wei, Fuli Feng, Jiawei Chen, Ziwei Wu, Jinfeng Yi, Xiangnan He

Existing work addresses this issue with Inverse Propensity Weighting (IPW), which decreases the impact of popular items during training and increases the impact of long-tail items.

Counterfactual Inference · Multi-Task Learning · +1
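
As a concrete illustration of IPW, each training example can be reweighted by the inverse of its item's estimated propensity, here approximated by observed item popularity, so that popular items contribute less to the loss and long-tail items contribute more. The sketch below is one simple instantiation with an assumed implicit-feedback loss; the names and the popularity-based propensity estimate are illustrative.

```python
# Minimal sketch of an inverse-propensity-weighted training loss for implicit feedback.
import torch
import torch.nn.functional as F

def ipw_loss(scores, labels, item_ids, item_counts, power=0.5):
    # Propensity approximated by normalized item popularity raised to `power`.
    propensity = (item_counts[item_ids] / item_counts.sum()) ** power
    weights = 1.0 / propensity.clamp(min=1e-6)
    per_example = F.binary_cross_entropy_with_logits(scores, labels.float(),
                                                     reduction="none")
    return (weights * per_example).mean()
```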

Negative-Unlabeled Tensor Factorization for Location Category Inference from Highly Inaccurate Mobility Data

no code implementations · 21 Feb 2017 · Jinfeng Yi, Qi Lei, Wesley Gifford, Ji Liu, Junchi Yan

To solve the proposed framework efficiently, we propose a parameter-free and scalable optimization algorithm that effectively exploits the sparse and low-rank structure of the tensor.

Scalable Demand-Aware Recommendation

no code implementations · NeurIPS 2017 · Jinfeng Yi, Cho-Jui Hsieh, Kush Varshney, Lijun Zhang, Yao Li

In particular, for durable goods, time utility is a function of the inter-purchase duration within a product category, because consumers are unlikely to purchase two items in the same category in close temporal succession.
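
One illustrative way to encode such a time utility is to let it grow with the time elapsed since the user's last purchase in the same category, so that two purchases in close succession within a category earn little utility. The exponential form and the `category_lifetime_days` parameter below are assumptions made for the sketch, not the paper's actual model.

```python
# Illustrative time-utility term for durable goods, increasing with inter-purchase duration.
import math

def time_utility(days_since_last_purchase, category_lifetime_days):
    return 1.0 - math.exp(-days_since_last_purchase / category_lifetime_days)

# Example: buying another laptop 30 days after the previous one (with a ~2-year
# category lifetime) yields ~0.04 utility, versus ~0.63 after a full lifetime.
print(time_utility(30, 730), time_utility(730, 730))
```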

Semi-Crowdsourced Clustering: Generalizing Crowd Labeling by Robust Distance Metric Learning

no code implementations · NeurIPS 2012 · Jinfeng Yi, Rong Jin, Shaili Jain, Tianbao Yang, Anil K. Jain

One difficulty in learning the pairwise similarity measure is that there is a significant amount of noise and inter-worker variation in the manual annotations obtained via crowdsourcing.

Matrix Completion · Metric Learning
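
The raw signal here is a set of noisy same-cluster / different-cluster votes contributed by many workers, which can first be aggregated into a partially observed pairwise similarity matrix before any robust metric learning or matrix completion is applied. The sketch below shows only that aggregation step with illustrative names; it is not the paper's method.

```python
# Illustrative aggregation of crowdsourced pairwise votes into a partial similarity matrix.
import numpy as np

def aggregate_pairwise_votes(n_items, annotations):
    """annotations: iterable of (i, j, vote) with vote = +1 (same cluster) or -1 (different)."""
    votes = np.zeros((n_items, n_items))
    counts = np.zeros((n_items, n_items))
    for i, j, v in annotations:
        votes[i, j] += v; votes[j, i] += v
        counts[i, j] += 1; counts[j, i] += 1
    # Average the votes where at least one worker annotated the pair; NaN marks unobserved pairs.
    return np.where(counts > 0, votes / np.maximum(counts, 1), np.nan)
```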
