no code implementations • 13 Jun 2024 • Yuhan Quan, Huan Zhao, JinFeng Yi, Yuqiang Chen
In this work, we propose GC-CAD, a self-supervised contrastive graph neural network-based method for mechanical CAD retrieval that directly models parameterized CAD raw files.
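As a rough illustration of the self-supervised contrastive objective the excerpt mentions, the sketch below shows an InfoNCE-style loss between two augmented views of a batch of graph embeddings. The encoder, the augmentations, and the temperature are placeholders, not GC-CAD's actual implementation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss between two views of a batch of
    graph embeddings (shape [batch, dim]). The two views of the same CAD
    model form the positive pair; all other rows in the batch act as
    negatives. Temperature is an illustrative placeholder."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# usage (hypothetical): z1 = encoder(graph_view_1); z2 = encoder(graph_view_2)
# loss = info_nce_loss(z1, z2)
```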
no code implementations • 2 May 2022 • Lijun Zhang, Wei Jiang, JinFeng Yi, Tianbao Yang
In this paper, we investigate an online prediction strategy named Discounted-Normal-Predictor (Kapralov and Panigrahy, 2010) for smoothed online convex optimization (SOCO), in which the learner needs to minimize not only the hitting cost but also the switching cost.
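For context, the SOCO objective combines a per-round hitting cost with a penalty on movement between consecutive decisions. The generic form below uses an arbitrary norm and a weight $\lambda$ as placeholders rather than the paper's exact formulation.

```latex
% Smoothed online convex optimization: at each round the learner pays a
% hitting cost f_t(x_t) plus a switching cost for moving its decision.
\min_{x_1,\dots,x_T}\;\sum_{t=1}^{T} f_t(x_t)\;+\;\lambda\sum_{t=1}^{T}\|x_t-x_{t-1}\|
```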
1 code implementation • ICLR 2022 • Yimeng Zhang, Yuguang Yao, Jinghan Jia, JinFeng Yi, Mingyi Hong, Shiyu Chang, Sijia Liu
To tackle this problem, we next propose to prepend an autoencoder (AE) to a given (black-box) model so that DS can be trained using variance-reduced ZO optimization.
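The variance reduction comes from running zeroth-order (ZO) estimation in the autoencoder's low-dimensional latent space rather than in pixel space, where DS presumably denotes the denoised-smoothing module being trained. Below is a minimal sketch of a two-point randomized ZO gradient estimator under those assumptions; the function handle, smoothing radius, and query count are illustrative, not the paper's implementation.

```python
import torch

def zo_gradient(fn, z, mu=0.01, num_queries=20):
    """Two-point randomized zeroth-order estimate of the gradient of a
    scalar black-box loss fn at point z, averaged over random unit
    directions. Estimating in a low-dimensional latent z (e.g. the AE
    bottleneck) rather than pixel space is what reduces variance."""
    grad = torch.zeros_like(z)
    d = z.numel()
    for _ in range(num_queries):
        u = torch.randn_like(z)
        u = u / u.norm()                               # unit direction
        grad += (fn(z + mu * u) - fn(z - mu * u)) / (2 * mu) * u
    return grad * d / num_queries
```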
1 code implementation • 31 Jan 2022 • Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen
Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation.
no code implementations • 15 Dec 2021 • Yisen Wang, Xingjun Ma, James Bailey, JinFeng Yi, BoWen Zhou, Quanquan Gu
In this paper, we propose such a criterion, namely First-Order Stationary Condition for constrained optimization (FOSC), to quantitatively evaluate the convergence quality of adversarial examples found in the inner maximization.
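On an $\ell_\infty$ ball of radius $\epsilon$, the FOSC value admits a simple closed form; the batched sketch below assumes that constraint (smaller values mean a better-converged inner maximization) and is not taken from the paper's code.

```python
import torch

def fosc(x_adv, x_clean, grad, eps):
    """First-Order Stationary Condition value for adversarial examples
    constrained to an L-infinity ball of radius eps around x_clean.
    grad is the loss gradient w.r.t. x_adv; all tensors are batched.
    Smaller values indicate a better-converged inner maximization."""
    g = grad.flatten(1)
    delta = (x_adv - x_clean).flatten(1)
    return eps * g.abs().sum(dim=1) - (delta * g).sum(dim=1)
```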
no code implementations • 10 Dec 2021 • Zichen Ma, Zihan Lu, Yu Lu, Wenye Li, JinFeng Yi, Shuguang Cui
In this paper, we design a federated two-stage learning framework that augments prototypical federated learning with a cut layer on devices and uses sign-based stochastic gradient descent with the majority vote method on model updates.
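A minimal sketch of the sign-based aggregation step, assuming each device sends a dictionary of per-parameter updates; the learning rate and the exact placement of the cut layer are illustrative assumptions, not the paper's configuration.

```python
import torch

def majority_vote_update(server_model, client_updates, lr=0.01):
    """Sign-based aggregation: each device contributes only the sign of
    its local update, and the server moves each parameter in the
    direction of the element-wise majority vote (ties resolve to 0)."""
    for name, param in server_model.named_parameters():
        signs = torch.stack([u[name].sign() for u in client_updates])
        vote = signs.sum(dim=0).sign()        # element-wise majority
        param.data -= lr * vote               # updates treated as gradients
```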
no code implementations • 22 Oct 2021 • Rulin Shao, JinFeng Yi, Pin-Yu Chen, Cho-Jui Hsieh
Our comprehensive analysis yields several novel insights: (1) with KDIGA, students can preserve or even exceed the adversarial robustness of the teacher model, even when the student and teacher have fundamentally different architectures; (2) KDIGA enables robustness to transfer to pre-trained students, such as KD from an adversarially trained ResNet to a pre-trained ViT, without loss of clean accuracy; and (3) our derived local linearity bounds for characterizing adversarial robustness in KD are consistent with the empirical results.
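A hypothetical sketch of how a knowledge-distillation loss could be combined with an input-gradient alignment term of the kind KDIGA describes; the weights, temperature, and the specific norm are assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def kdiga_style_loss(student_logits, teacher_logits, grad_s, grad_t,
                     labels, alpha=0.5, lam=1.0, T=4.0):
    """Cross-entropy + temperature-scaled KD loss, plus a penalty that
    aligns the student's and teacher's loss gradients w.r.t. the same
    input batch (grad_s, grad_t). All weights are placeholders."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    align = (grad_s - grad_t).flatten(1).norm(dim=1).mean()
    return alpha * ce + (1 - alpha) * kd + lam * align
```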
no code implementations • 13 Oct 2021 • Yunxiao Qin, Yuanhao Xiong, JinFeng Yi, Lihong Cao, Cho-Jui Hsieh
In this paper, we define a Generalized Transferable Attack (GTA) problem, in which the attacker does not know this information and is required to attack any randomly encountered images that may come from unknown datasets.
no code implementations • 4 Oct 2021 • Bo Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, JiQuan Pei, JinFeng Yi, BoWen Zhou
In this review, we provide AI practitioners with a comprehensive guide for building trustworthy AI systems.
2 code implementations • 5 Sep 2021 • Yunxiao Qin, Yuanhao Xiong, JinFeng Yi, Cho-Jui Hsieh
In this paper, we tackle this problem from a novel angle -- instead of using the original surrogate models, can we obtain a Meta-Surrogate Model (MSM) such that attacks on this model can be more easily transferred to other models?
no code implementations • 25 Jun 2021 • Xinwei Zhang, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, JinFeng Yi
Recently, there has been a line of work on incorporating the formal privacy notion of differential privacy into FL.
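As a reference, a common way to combine differential privacy with FL is to clip each client update and add calibrated Gaussian noise before aggregation; the sketch below illustrates that generic mechanism only, with placeholder constants, and does not reproduce this paper's algorithm or its privacy accounting.

```python
import torch

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0):
    """Clip the L2 norm of one client's update (a list of tensors), then
    add Gaussian noise scaled to the clipping bound before sending it to
    the server. clip_norm and noise_multiplier are illustrative."""
    flat = torch.cat([p.flatten() for p in update])
    scale = (clip_norm / (flat.norm() + 1e-12)).clamp(max=1.0)
    return [p * scale + noise_multiplier * clip_norm * torch.randn_like(p)
            for p in update]
```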
no code implementations • 17 Jun 2021 • Zichen Ma, Yu Lu, Zihan Lu, Wenye Li, JinFeng Yi, Shuguang Cui
Training in heterogeneous and potentially massive networks introduces bias into the system, which originates from the non-IID data and the low participation rates seen in practice.
1 code implementation • 7 Jun 2021 • Sanshi Yu, Zhuoxuan Jiang, Dong-Dong Chen, Shanshan Feng, Dongsheng Li, Qi Liu, JinFeng Yi
Hence, the key is to make full use of rich interaction information among streamers, users, and products.
no code implementations • 8 May 2021 • Lijun Zhang, Guanghui Wang, JinFeng Yi, Tianbao Yang
In this paper, we propose a simple strategy for universal online convex optimization, which avoids these limitations.
2 code implementations • NeurIPS 2021 • Zhouxing Shi, Yihan Wang, huan zhang, JinFeng Yi, Cho-Jui Hsieh
Although state-of-the-art (SOTA) methods, including interval bound propagation (IBP) and CROWN-IBP, have per-batch training complexity similar to that of standard neural network training, they usually require a long warmup schedule with hundreds or thousands of epochs to reach SOTA performance and are thus still costly.
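For reference, interval bound propagation pushes an input interval through the network layer by layer. The sketch below shows the standard computation for one affine layer and omits the warmup schedules and CROWN-IBP mixing the excerpt refers to.

```python
import torch

def ibp_affine(lower, upper, weight, bias):
    """Interval bound propagation through one affine layer y = W x + b:
    propagate the interval's center and radius, using |W| for the radius.
    lower/upper have shape [batch, in], weight has shape [out, in]."""
    center = (lower + upper) / 2
    radius = (upper - lower) / 2
    new_center = center @ weight.t() + bias
    new_radius = radius @ weight.abs().t()
    return new_center - new_radius, new_center + new_radius
```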
1 code implementation • 29 Mar 2021 • Rulin Shao, Zhouxing Shi, JinFeng Yi, Pin-Yu Chen, Cho-Jui Hsieh
Following the success in advancing natural language processing and understanding, transformers are expected to bring revolutionary changes to computer vision.
2 code implementations • NeurIPS 2021 • Lue Tao, Lei Feng, JinFeng Yi, Sheng-Jun Huang, Songcan Chen
Delusive attacks aim to substantially degrade the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples.
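One common way to write a delusive (training-time) attack is as a bilevel problem: bounded perturbations of correctly labeled training points are chosen to maximize the test risk of the model trained on the perturbed data. This generic formulation is given for orientation only and is not necessarily the exact objective used in the paper.

```latex
% Bilevel view of a delusive attack: bounded feature perturbations
% \delta_i are chosen so the model trained on the perturbed data
% generalizes poorly at test time.
\max_{\{\delta_i:\ \|\delta_i\|\le\epsilon\}}\;
  \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell\big(f_{\theta^*(\delta)}(x),\,y\big)\big]
\quad\text{s.t.}\quad
\theta^*(\delta)=\arg\min_{\theta}\sum_{i}\ell\big(f_{\theta}(x_i+\delta_i),\,y_i\big)
```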
no code implementations • 7 Jan 2021 • Rulin Shao, Zhouxing Shi, JinFeng Yi, Pin-Yu Chen, Cho-Jui Hsieh
At the second stage, we design and apply a highly transferable adversarial attack for text CAPTCHAs to better obstruct CAPTCHA solvers.
no code implementations • 1 Jan 2021 • Jing Xu, Zhouxing Shi, huan zhang, JinFeng Yi, Cho-Jui Hsieh, LiWei Wang
We also demonstrate that the perturbation budget generator can produce semantically-meaningful budgets, which implies that the generator can capture contextual information and the sensitivity of different features in a given image.
1 code implementation • 29 Dec 2020 • Lue Tao, Lei Feng, JinFeng Yi, Songcan Chen
In this paper, we unveil the threat of hypocritical examples -- inputs that are originally misclassified yet perturbed by a false friend to force correct predictions.
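A hypocritical perturbation can be searched for with the same machinery as an adversarial one, only descending the loss on the true label instead of ascending it. The PGD-style sketch below assumes inputs in $[0,1]$ and an $\ell_\infty$ budget; the step size and budget are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def hypocritical_perturb(model, x, y, eps=8 / 255, steps=10, alpha=2 / 255):
    """PGD-style search for a 'false friend' perturbation: minimize the
    loss on the true label y so an originally misclassified input x is
    forced to be predicted correctly. Assumes inputs lie in [0, 1]."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend, not ascend
            delta.clamp_(-eps, eps)              # stay inside the budget
        delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()
```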
no code implementations • 17 Dec 2020 • Zhonghan Niu, Zhaoxi Chen, Linyi Li, YuBin Yang, Bo Li, JinFeng Yi
Surprisingly, our experimental results show that even if most of the perturbation in each dimension is eliminated, it is still difficult to obtain satisfactory robustness.
1 code implementation • 29 Oct 2020 • Tianxin Wei, Fuli Feng, Jiawei Chen, Ziwei Wu, JinFeng Yi, Xiangnan He
Existing work addresses this issue with Inverse Propensity Weighting (IPW), which decreases the impact of popular items during training and increases the impact of long-tail items.
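A minimal sketch of an IPW-style training loss, assuming the propensity of an interaction is approximated by a power of its item's popularity; the propensity estimator and exponent are common simplifications, not necessarily the ones used in this paper.

```python
import torch

def ipw_loss(per_example_loss, item_popularity, power=0.5, eps=1e-6):
    """Inverse-propensity-weighted loss: each interaction is down-weighted
    by (a power of) its item's popularity, so popular items contribute
    less and long-tail items more. Weights are renormalized to keep the
    overall loss scale stable."""
    weights = 1.0 / (item_popularity.clamp(min=eps) ** power)
    weights = weights / weights.mean()
    return (weights * per_example_loss).mean()
```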
1 code implementation • 21 Jul 2019 • Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, MingJie Sun, JinFeng Yi, Zijiang Yang, Mingyan Liu, Bo Li, Dawn Song
To the best of our knowledge, we are the first to apply adversarial attacks on DRL systems to physical robots.
no code implementations • NeurIPS 2017 • Jinfeng Yi, Cho-Jui Hsieh, Kush Varshney, Lijun Zhang, Yao Li
In particular for durable goods, time utility is a function of inter-purchase duration within a product category, because consumers are unlikely to purchase two items in the same category in close temporal succession.
no code implementations • 21 Feb 2017 • Jinfeng Yi, Qi Lei, Wesley Gifford, Ji Liu, Junchi Yan
In order to efficiently solve the proposed framework, we propose a parameter-free and scalable optimization algorithm that effectively exploits the sparse and low-rank structure of the tensor.
no code implementations • NeurIPS 2012 • Jinfeng Yi, Rong Jin, Shaili Jain, Tianbao Yang, Anil K. Jain
One difficulty in learning the pairwise similarity measure is that there is a significant amount of noise and inter-worker variations in the manual annotations obtained via crowdsourcing.