Search Results for author: Jinfeng Yi

Found 25 papers, 10 papers with code

Semi-Crowdsourced Clustering: Generalizing Crowd Labeling by Robust Distance Metric Learning

no code implementations NeurIPS 2012 Jinfeng Yi, Rong Jin, Shaili Jain, Tianbao Yang, Anil K. Jain

One difficulty in learning the pairwise similarity measure is that there is a significant amount of noise and inter-worker variations in the manual annotations obtained via crowdsourcing.

Clustering · Computational Efficiency +2

Scalable Demand-Aware Recommendation

no code implementations NeurIPS 2017 Jinfeng Yi, Cho-Jui Hsieh, Kush Varshney, Lijun Zhang, Yao Li

In particular for durable goods, time utility is a function of the inter-purchase duration within a product category, because consumers are unlikely to purchase two items in the same category in close temporal succession.
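
The inter-purchase-duration idea is concrete enough to illustrate. Below is a minimal sketch, not the paper's actual utility model: it assumes time utility is near zero right after a purchase in the same category and recovers over a hypothetical mean gap (`mean_gap`).

```python
import numpy as np

def time_utility(days_since_last_purchase, mean_gap=90.0):
    """Illustrative time-utility curve for a durable-goods category:
    close to 0 right after a purchase, approaching 1 as the typical
    inter-purchase duration (mean_gap, in days) elapses."""
    return 1.0 - np.exp(-days_since_last_purchase / mean_gap)

# A TV bought 5 days ago suppresses another TV recommendation;
# after a year the category is plausibly "in demand" again.
print(time_utility(5.0))    # ~0.05
print(time_utility(365.0))  # ~0.98
```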

Negative-Unlabeled Tensor Factorization for Location Category Inference from Highly Inaccurate Mobility Data

no code implementations 21 Feb 2017 Jinfeng Yi, Qi Lei, Wesley Gifford, Ji Liu, Junchi Yan

In order to efficiently solve the proposed framework, we propose a parameter-free and scalable optimization algorithm by effectively exploring the sparse and low-rank structure of the tensor.

Model-Agnostic Counterfactual Reasoning for Eliminating Popularity Bias in Recommender System

1 code implementation 29 Oct 2020 Tianxin Wei, Fuli Feng, Jiawei Chen, Ziwei Wu, Jinfeng Yi, Xiangnan He

Existing work addresses this issue with Inverse Propensity Weighting (IPW), which decreases the impact of popular items on the training and increases the impact of long-tail items.

Counterfactual · Counterfactual Inference +3
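
For context on the IPW baseline that this entry contrasts with, here is a minimal sketch of inverse propensity weighting. It assumes item interaction counts serve as the propensity estimate; the `smoothing` term is an illustrative guard against division blow-up, not part of any specific method.

```python
import numpy as np

def ipw_weights(item_popularity, smoothing=1.0):
    """Inverse Propensity Weighting: down-weight interactions with
    popular items and up-weight long-tail items in the training loss."""
    total = item_popularity.sum()
    propensity = item_popularity / total
    weights = 1.0 / (propensity + smoothing / total)
    return weights / weights.mean()  # keep the overall loss scale unchanged

popularity = np.array([1000.0, 100.0, 10.0, 1.0])  # interactions per item
per_example_loss = np.array([0.3, 0.5, 0.7, 0.9])
print((ipw_weights(popularity) * per_example_loss).mean())
```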

On the Limitations of Denoising Strategies as Adversarial Defenses

no code implementations 17 Dec 2020 Zhonghan Niu, Zhaoxi Chen, Linyi Li, Yubin Yang, Bo Li, Jinfeng Yi

Surprisingly, our experimental results show that even if most of the perturbations in each dimension are eliminated, it is still difficult to obtain satisfactory robustness.

Denoising

With False Friends Like These, Who Can Notice Mistakes?

1 code implementation 29 Dec 2020 Lue Tao, Lei Feng, Jinfeng Yi, Songcan Chen

In this paper, we unveil the threat of hypocritical examples -- inputs that are originally misclassified yet perturbed by a false friend to force correct predictions.
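
Mechanically, a hypocritical perturbation is PGD run in reverse: descend the loss on the true label instead of ascending it. A minimal PyTorch sketch, with an illustrative step size and toy model rather than the paper's setup:

```python
import torch
import torch.nn.functional as F

def hypocritical_perturbation(model, x, y_true, eps=8/255, steps=10):
    """A 'false friend' nudges a misclassified input within an eps-ball
    until the model predicts correctly, masking the model's mistake."""
    delta = torch.zeros_like(x, requires_grad=True)
    alpha = eps / 4
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y_true).backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend, not ascend
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x, y = torch.rand(1, 1, 28, 28), torch.tensor([3])
x_hyp = hypocritical_perturbation(model, x, y)
```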

Learning Contextual Perturbation Budgets for Training Robust Neural Networks

no code implementations 1 Jan 2021 Jing Xu, Zhouxing Shi, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Liwei Wang

We also demonstrate that the perturbation budget generator can produce semantically-meaningful budgets, which implies that the generator can capture contextual information and the sensitivity of different features in a given image.

Robust Text CAPTCHAs Using Adversarial Examples

no code implementations 7 Jan 2021 Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

In the second stage, we design and apply a highly transferable adversarial attack for text CAPTCHAs to better obstruct CAPTCHA solvers.

Adversarial Attack · Optical Character Recognition (OCR)

Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training

2 code implementations NeurIPS 2021 Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen

Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples.
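
The defense studied here is adversarial training itself. A minimal PyTorch sketch of one min-max step, with generic hyperparameters rather than the paper's settings:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Standard L-infinity PGD: maximize the loss within an eps-ball."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y).backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Fit the model on worst-case perturbed inputs (the min in min-max)."""
    delta = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
print(adversarial_training_step(model, opt, x, y))
```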

On the Adversarial Robustness of Vision Transformers

1 code implementation 29 Mar 2021 Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

Following the success in advancing natural language processing and understanding, transformers are expected to bring revolutionary changes to computer vision.

Adversarial Robustness

Fast Certified Robust Training with Short Warmup

2 code implementations NeurIPS 2021 Zhouxing Shi, Yihan Wang, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh

Although state-of-the-art (SOTA) methods such as interval bound propagation (IBP) and CROWN-IBP have per-batch training complexity similar to that of standard neural network training, they usually require a long warmup schedule of hundreds or thousands of epochs to reach SOTA performance and are thus still costly.

Adversarial Defense
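
Since the snippet presumes familiarity with IBP, here is a minimal numpy sketch of interval bound propagation through an affine layer and a ReLU; the two-layer toy network is illustrative.

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate [lower, upper] through x -> Wx + b: the interval center
    passes through W exactly, the radius passes through |W|."""
    center, radius = (upper + lower) / 2, (upper - lower) / 2
    c, r = W @ center + b, np.abs(W) @ radius
    return c - r, c + r

def ibp_relu(lower, upper):
    return np.maximum(lower, 0), np.maximum(upper, 0)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
x, eps = rng.normal(size=3), 0.1

l, u = ibp_affine(x - eps, x + eps, W1, b1)
l, u = ibp_relu(l, u)
l, u = ibp_affine(l, u, W2, b2)
print("certified output bounds:", l, u)  # hold for every input in the ball
```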

A Simple yet Universal Strategy for Online Convex Optimization

no code implementations 8 May 2021 Lijun Zhang, Guanghui Wang, Jinfeng Yi, Tianbao Yang

In this paper, we propose a simple strategy for universal online convex optimization, which avoids these limitations.

Towards Heterogeneous Clients with Elastic Federated Learning

no code implementations 17 Jun 2021 Zichen Ma, Yu Lu, Zihan Lu, Wenye Li, Jinfeng Yi, Shuguang Cui

Training in heterogeneous and potentially massive networks introduces bias into the system, which originates from the non-IID data and the low participation rate in practice.

Federated Learning

Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy

no code implementations 25 Jun 2021 Xinwei Zhang, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, Jinfeng Yi

Recently, there has been a line of work on incorporating the formal privacy notion of differential privacy with FL.

Federated Learning
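
A minimal sketch of the clip-then-add-noise aggregation underlying client-level differential privacy in FL. The noise calibration shown is the generic Gaussian-mechanism recipe, not necessarily the exact variant analyzed in the paper.

```python
import numpy as np

def dp_fedavg_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0,
                        seed=0):
    """Clip each client's update to L2 norm clip_norm, average, then add
    Gaussian noise scaled to the clipping threshold."""
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    rng = np.random.default_rng(seed)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

updates = [np.random.default_rng(i).normal(size=5) for i in range(8)]
print(dp_fedavg_aggregate(updates))
```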

Training Meta-Surrogate Model for Transferable Adversarial Attack

2 code implementations 5 Sep 2021 Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Cho-Jui Hsieh

In this paper, we tackle this problem from a novel angle -- instead of using the original surrogate models, can we obtain a Meta-Surrogate Model (MSM) such that attacks on this model can be more easily transferred to other models?

Adversarial Attack
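
The transfer setting itself is easy to sketch: craft an adversarial example on a surrogate, then test it on an unseen target. Below is a plain FGSM version with toy models; the MSM contribution (meta-learning the surrogate so transfer improves) is not shown.

```python
import torch
import torch.nn.functional as F

def fgsm_on_surrogate(surrogate, x, y, eps=8/255):
    """Craft an adversarial example on a surrogate model, hoping it also
    fools unseen target models (transferability)."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(surrogate(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

surrogate = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
target = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))

x_adv = fgsm_on_surrogate(surrogate, x, y)
transfer_rate = (target(x_adv).argmax(1) != y).float().mean()
print("transfer success rate:", transfer_rate.item())
```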

Trustworthy AI: From Principles to Practices

no code implementations 4 Oct 2021 Bo Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou

In this review, we provide AI practitioners with a comprehensive guide for building trustworthy AI systems.

Fairness

Adversarial Attack across Datasets

no code implementations 13 Oct 2021 Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Lihong Cao, Cho-Jui Hsieh

In this paper, we define a Generalized Transferable Attack (GTA) problem in which the attacker does not know this information and is required to attack any randomly encountered images that may come from unknown datasets.

Adversarial Attack · Image Classification

How and When Adversarial Robustness Transfers in Knowledge Distillation?

no code implementations 22 Oct 2021 Rulin Shao, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

Our comprehensive analysis yields several novel insights: (1) with KDIGA, students can preserve or even exceed the adversarial robustness of the teacher model, even when the two models have fundamentally different architectures; (2) KDIGA enables robustness to transfer to pre-trained students, e.g., KD from an adversarially trained ResNet to a pre-trained ViT, without loss of clean accuracy; and (3) our derived local linearity bounds for characterizing adversarial robustness in KD are consistent with the empirical results.

Adversarial Robustness · Knowledge Distillation +1
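
A hedged sketch of what KDIGA plausibly looks like, read off its name and the snippet above: standard soft-label distillation plus a term aligning the student's input gradients with the teacher's. Treat `tau` and `lam` as hypothetical knobs, not the paper's values.

```python
import torch
import torch.nn.functional as F

def kdiga_loss(student, teacher, x, y, tau=4.0, lam=1.0):
    """Soft-label KD plus an input-gradient alignment penalty."""
    x = x.clone().requires_grad_(True)
    s_logits, t_logits = student(x), teacher(x)
    kd = F.kl_div(F.log_softmax(s_logits / tau, dim=1),
                  F.softmax(t_logits.detach() / tau, dim=1),
                  reduction="batchmean") * tau ** 2
    g_s = torch.autograd.grad(F.cross_entropy(s_logits, y), x,
                              create_graph=True)[0]
    g_t = torch.autograd.grad(F.cross_entropy(t_logits, y), x)[0]
    return kd + lam * (g_s - g_t.detach()).pow(2).mean()

student = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
teacher = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
loss = kdiga_loss(student, teacher, torch.rand(4, 1, 28, 28),
                  torch.randint(0, 10, (4,)))
loss.backward()
```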

Federated Two-stage Learning with Sign-based Voting

no code implementations 10 Dec 2021 Zichen Ma, Zihan Lu, Yu Lu, Wenye Li, Jinfeng Yi, Shuguang Cui

In this paper, we design a federated two-stage learning framework that augments prototypical federated learning with a cut layer on devices and uses sign-based stochastic gradient descent with the majority vote method on model updates.

BIG-bench Machine Learning · Federated Learning +2
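
The sign-based voting component is simple to sketch: each client sends one sign bit per coordinate and the server applies the coordinate-wise majority. A minimal numpy version:

```python
import numpy as np

def sign_sgd_majority_vote(client_grads, lr=0.01):
    """signSGD with majority vote: aggregate only gradient signs,
    cutting per-round communication to 1 bit per coordinate."""
    votes = np.sign(np.stack(client_grads))  # (num_clients, dim)
    majority = np.sign(votes.sum(axis=0))    # coordinate-wise majority
    return -lr * majority                    # update to apply to the model

grads = [np.random.default_rng(i).normal(size=6) for i in range(5)]
print(sign_sgd_majority_vote(grads))
```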

On the Convergence and Robustness of Adversarial Training

no code implementations 15 Dec 2021 Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu

In this paper, we propose such a criterion, namely First-Order Stationary Condition for constrained optimization (FOSC), to quantitatively evaluate the convergence quality of adversarial examples found in the inner maximization.
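
For the usual L-infinity threat model, FOSC has a closed form. The numpy sketch below uses the standard linearized inner maximum over the epsilon-ball, which is how the criterion is commonly stated; treat it as a reading of the definition rather than a verbatim transcription of the paper.

```python
import numpy as np

def fosc(x, x0, grad, eps):
    """c(x) = max over x' with ||x' - x0||_inf <= eps of <x' - x, grad>
            = eps * ||grad||_1 + <x0 - x, grad>.
    c(x) >= 0, and c(x) = 0 means the adversarial example x is a
    stationary point of the constrained inner maximization."""
    return eps * np.abs(grad).sum() + (x0 - x) @ grad

x0 = np.zeros(4)
grad = np.array([0.5, -0.2, 0.1, -0.4])
x = x0 + 0.03 * np.sign(grad)       # a PGD-style iterate with eps = 0.03
print(fosc(x, x0, grad, eps=0.03))  # 0.0: x maximizes the linearization
```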

Can Adversarial Training Be Manipulated By Non-Robust Features?

1 code implementation 31 Jan 2022 Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen

Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation.

How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective

1 code implementation ICLR 2022 Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jinfeng Yi, Mingyi Hong, Shiyu Chang, Sijia Liu

To tackle this problem, we next propose to prepend an autoencoder (AE) to a given (black-box) model so that denoised smoothing (DS) can be trained using variance-reduced ZO optimization.

Adversarial Robustness · Image Classification +1
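
The ZO machinery referenced here replaces backpropagation with function queries, as required when the model is a black box. A minimal two-point random-direction gradient estimator; the quadratic `f` is a stand-in for a black-box loss, and `mu` and `num_queries` are illustrative.

```python
import numpy as np

def zo_gradient(f, x, mu=0.01, num_queries=20, seed=0):
    """Two-point zeroth-order estimate of grad f(x). Averaging over many
    random unit directions reduces the estimator's variance."""
    rng = np.random.default_rng(seed)
    d, g = x.size, np.zeros_like(x)
    for _ in range(num_queries):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g * d / num_queries  # E[u u^T] = I/d, hence the factor d

f = lambda x: np.sum(x ** 2)  # stand-in for a black-box loss
x = np.array([1.0, -2.0, 0.5])
print(zo_gradient(f, x))      # approximates the true gradient 2x
```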

Smoothed Online Convex Optimization Based on Discounted-Normal-Predictor

no code implementations 2 May 2022 Lijun Zhang, Wei Jiang, Jinfeng Yi, Tianbao Yang

In this paper, we investigate an online prediction strategy named Discounted-Normal-Predictor (Kapralov and Panigrahy, 2010) for smoothed online convex optimization (SOCO), in which the learner needs to minimize not only the hitting cost but also the switching cost.
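
The SOCO objective is easy to write down: per round, the learner pays a hitting cost plus a switching cost for moving its decision. A small numpy sketch, with a hypothetical `switch_weight` multiplier on the movement term:

```python
import numpy as np

def soco_total_cost(decisions, hitting_costs, switch_weight=1.0):
    """Sum of hitting costs f_t(x_t) plus switching costs ||x_t - x_{t-1}||."""
    hit = sum(f(x) for f, x in zip(hitting_costs, decisions))
    switch = sum(np.linalg.norm(decisions[t] - decisions[t - 1])
                 for t in range(1, len(decisions)))
    return hit + switch_weight * switch

targets = [np.array([0.0]), np.array([1.0]), np.array([1.0])]
fs = [lambda x, c=c: float(np.sum((x - c) ** 2)) for c in targets]
xs = [np.array([0.0]), np.array([0.5]), np.array([1.0])]  # a smoothed path
print(soco_total_cost(xs, fs))  # 0.25 hitting + 1.0 switching = 1.25
```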
