no code implementations • EMNLP 2021 • Haiwen Hong, Jingfeng Zhang, Yin Zhang, Yao Wan, Yulei Sui
Obviously, an unchanged fix is not a correct fix, because it is identical to the buggy code that needs to be fixed.
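A minimal sketch of this observation as a filtering step, assuming a hypothetical `normalize` helper and a list of candidate patches (not the paper's method):

```python
# Sketch: discard candidate fixes that are identical to the buggy code,
# since an unchanged fix can never be a correct fix.
def normalize(code: str) -> str:
    # Hypothetical whitespace normalizer; real systems may normalize tokens.
    return " ".join(code.split())

def filter_unchanged(buggy: str, candidates: list[str]) -> list[str]:
    return [c for c in candidates if normalize(c) != normalize(buggy)]

assert filter_unchanged("x = a+b", ["x =  a+b", "x = a - b"]) == ["x = a - b"]
```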
1 code implementation • 14 Jul 2024 • Zhongsheng Wang, Jiamou Liu, Qiming Bao, Hongfei Rong, Jingfeng Zhang
This paper introduces ChatLogic, an innovative framework specifically targeted at LLM reasoning tasks that can enhance the performance of LLMs in multi-step deductive reasoning tasks by integrating logic programming.
no code implementations • 15 Jun 2024 • Yukai Xu, Jingfeng Zhang, Yujie Gu
The model-level challenge, meanwhile, arises from the heterogeneity of local models, which must be trained collaboratively while keeping them confidential to address intellectual property concerns.
1 code implementation • 2 Jun 2024 • Jiacheng Zhang, Feng Liu, Dawei Zhou, Jingfeng Zhang, Tongliang Liu
However, in this paper, we discover that not all pixels contribute equally to the accuracy on AEs (i.e., robustness) and accuracy on natural images (i.e., accuracy).
no code implementations • 30 May 2024 • Jia Li, Lijie Hu, Zhixian He, Jingfeng Zhang, Tianhang Zheng, Di Wang
With the advancement of image-to-image diffusion models guided by text, significant progress has been made in image editing.
no code implementations • 16 May 2024 • Kunda Yan, Sen Cui, Abudukelimu Wuerkaixi, Jingfeng Zhang, Bo Han, Gang Niu, Masashi Sugiyama, ChangShui Zhang
Our framework aims to approximate an optimal cooperation network for each client by optimizing a weighted sum of model similarity and feature complementarity.
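One way to make this concrete, as a hedged sketch with assumed precomputed `similarity` and `complementarity` matrices (hypothetical names, not the authors' code):

```python
import numpy as np

def cooperation_topk(similarity, complementarity, alpha=0.5, k=3):
    # Score each candidate collaborator by a weighted sum of model
    # similarity and feature complementarity, then keep the top-k
    # partners per client as its cooperation network.
    score = alpha * similarity + (1.0 - alpha) * complementarity
    np.fill_diagonal(score, -np.inf)  # a client does not pick itself
    return np.argsort(-score, axis=1)[:, :k]

S = np.random.rand(5, 5)  # pairwise model similarity (assumed given)
C = np.random.rand(5, 5)  # pairwise feature complementarity (assumed given)
print(cooperation_topk(S, C))
```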
1 code implementation • 18 Apr 2024 • Zhen Han, Chaojie Mao, Zeyinzi Jiang, Yulin Pan, Jingfeng Zhang
We integrate the encoded textual instruction and image exemplar as a unified condition for the diffusion model, enabling editing of the original image following multimodal instructions.
no code implementations • 28 Mar 2024 • Yulin Pan, Chaojie Mao, Zeyinzi Jiang, Zhen Han, Jingfeng Zhang
The process involves (i) Locate: concatenating the noise with the masked scene image to achieve precise regional editing, (ii) Assign: employing a decoupled cross-attention mechanism to accommodate multi-modal guidance, and (iii) Refine: using a novel RefineNet to supplement subject details.
no code implementations • 13 Mar 2024 • Qing Lin, Jingfeng Zhang, Yew Soon Ong, Mengmi Zhang
For the first time, we present a novel challenge of emotion-evoked image generation, aiming to synthesize images that evoke target emotions while retaining the semantics and structures of the original scenes.
1 code implementation • 19 Feb 2024 • Zihao Luo, Xilie Xu, Feng Liu, Yun Sing Koh, Di Wang, Jingfeng Zhang
To mitigate this issue, we further propose a Stable Membership-Privacy-preserving LoRA (SMP-LoRA) that adapts the LDM by minimizing the ratio of the adaptation loss to the MI gain.
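As a loose illustration of the stated objective (the smoothing `1 + mi_gain` denominator below is an assumption to avoid division by zero, not the paper's exact formula):

```python
import torch

def smp_lora_objective(adaptation_loss: torch.Tensor,
                       mi_gain: torch.Tensor) -> torch.Tensor:
    # Minimizing this ratio rewards both fitting the data (small numerator)
    # and reducing the membership-inference gain (denominator term).
    return adaptation_loss / (1.0 + mi_gain)
```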
3 code implementations • CVPR 2024 • Zeyinzi Jiang, Chaojie Mao, Yulin Pan, Zhen Han, Jingfeng Zhang
Image diffusion models have been utilized in various tasks, such as text-to-image generation and controllable image synthesis.
no code implementations • 29 Nov 2023 • Jia Li, Lijie Hu, Jingfeng Zhang, Tianhang Zheng, Hua Zhang, Di Wang
In this paper, we address the limitations of existing text-to-image diffusion models in generating demographically fair results when given human-related descriptions.
no code implementations • 3 Oct 2023 • Xilie Xu, Jingfeng Zhang, Mohan Kankanhalli
To mitigate this issue, we propose a low-rank (LoRa) branch that disentangles RFT into two distinct components: optimizing natural objectives via the LoRa branch and adversarial objectives via the FE.
1 code implementation • 28 May 2023 • Jingfeng Zhang, Bo Song, Haohan Wang, Bo Han, Tongliang Liu, Lei Liu, Masashi Sugiyama
To address the challenge posed by BadLabel, we further propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.
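A hedged sketch of the label-perturbation step (hypothetical and simplified; the paper's actual procedure may differ):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def adversarially_perturb_labels(logits: torch.Tensor) -> torch.Tensor:
    # For each sample, pick the class label on which the current model
    # incurs the largest cross-entropy loss, i.e., the worst-case label.
    per_class_loss = -F.log_softmax(logits, dim=1)  # (batch, num_classes)
    return per_class_loss.argmax(dim=1)

print(adversarially_perturb_labels(torch.randn(4, 10)))
```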
1 code implementation • NeurIPS 2023 • Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli
To improve transferability, the existing work introduced the standard invariant regularization (SIR) to impose the style-independence property on SCL, which can mitigate the impact of nuisance style factors in the standard representation.
1 code implementation • NeurIPS 2023 • Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli
Adversarial contrastive learning (ACL) does not require expensive data annotations but outputs a robust representation that withstands adversarial attacks and also generalizes to a wide range of downstream tasks.
1 code implementation • 6 Feb 2023 • Salah Ghamizi, Jingfeng Zhang, Maxime Cordy, Mike Papadakis, Masashi Sugiyama, Yves Le Traon
While leveraging additional training data is a well-established way to improve adversarial robustness, it incurs the unavoidable cost of data collection and the heavy computation of training models.
1 code implementation • 1 Nov 2022 • Jianan Zhou, Jianing Zhu, Jingfeng Zhang, Tongliang Liu, Gang Niu, Bo Han, Masashi Sugiyama
Adversarial training (AT) with imperfect supervision is significant but receives limited attention.
no code implementations • 26 Aug 2022 • Lichen Jia, Bowen Tang, Chenggang Wu, Zhe Wang, Zihan Jiang, Yuanming Lai, Yan Kang, Ning Liu, Jingfeng Zhang
The binary code similarity detection (BCSD) method measures the similarity of two binary executables.
no code implementations • 8 Jun 2022 • Hengyuan Ma, Li Zhang, Xiatian Zhu, Jingfeng Zhang, Jianfeng Feng
To ensure stability of convergence in sampling and generation quality, however, this sequential sampling process has to take a small step size and many sampling iterations (e.g., 2000).
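The sequential process in question is a Langevin-style update loop; a minimal sketch, assuming a learned score network `score_fn` (a stand-in name):

```python
import torch

def langevin_sample(score_fn, x, step_size=1e-5, n_steps=2000):
    # Each iteration takes a small gradient step along the score plus
    # injected Gaussian noise; thousands of such steps are typical.
    for _ in range(n_steps):
        noise = torch.randn_like(x)
        x = x + 0.5 * step_size * score_fn(x) + (step_size ** 0.5) * noise
    return x
```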
no code implementations • 22 Apr 2022 • Yunqing Hu, Xuan Jin, Yin Zhang, Haiwen Hong, Jingfeng Zhang, Feihu Yan, Yuan He, Hui Xue
Finally, we propose a weakly supervised object localization-based approach to extract multi-scale local features, forming a multi-view pipeline.
no code implementations • 22 Feb 2022 • Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama
To explore whether adversarial training could defend against backdoor attacks or not, we conduct extensive experiments across different threat models and perturbation budgets, and find that the threat model in adversarial training matters.
1 code implementation • 7 Feb 2022 • Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli
Furthermore, we theoretically find that the adversary can also degrade the lower bound of a TST's test power, which enables us to iteratively minimize the test criterion in order to search for adversarial pairs.
no code implementations • 12 Jan 2022 • Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan
Secondly, to robustify DIDs, we propose an adversarial training strategy, hybrid adversarial training (HAT), that jointly trains DIDs with adversarial and non-adversarial noisy data to ensure that the reconstruction quality is high and the denoisers around non-adversarial data are locally smooth.
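A hedged sketch of such a hybrid objective (an assumed form with a hypothetical `attack` routine; not the paper's exact loss):

```python
import torch

def hybrid_at_loss(denoiser, clean, noisy, attack, lam=0.5):
    # Mix reconstruction losses on non-adversarial and adversarial noisy
    # inputs so the denoiser stays accurate and locally smooth.
    adv_noisy = attack(denoiser, noisy, clean)  # worst-case noisy input
    rec_nat = torch.mean((denoiser(noisy) - clean) ** 2)
    rec_adv = torch.mean((denoiser(adv_noisy) - clean) ** 2)
    return (1.0 - lam) * rec_nat + lam * rec_adv
```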
no code implementations • 29 Sep 2021 • Sen Cui, Jingfeng Zhang, Jian Liang, Masashi Sugiyama, ChangShui Zhang
However, an ensemble still wastes the limited capacity of multiple models.
no code implementations • 29 Sep 2021 • Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Shu-Tao Xia, Gang Niu, Masashi Sugiyama
Based on thorough experiments, we find that such a trade-off ignores the interactions between the perturbation budget of adversarial training and the magnitude of the backdoor trigger.
no code implementations • 21 Jul 2021 • Haiwen Hong, Xuan Jin, Yin Zhang, Yunqing Hu, Jingfeng Zhang, Yuan He, Hui Xue
In multimodal tasks, we find that the importance of text and image modal information differs across input cases. Motivated by this, we propose a high-performance and highly general Dual-Router Dynamic Framework (DRDF), consisting of a Dual-Router, an MWF-Layer, experts, and an expert fusion unit.
no code implementations • 17 Jul 2021 • Yunqing Hu, Xuan Jin, Yin Zhang, Haiwen Hong, Jingfeng Zhang, Yuan He, Hui Xue
We propose the recurrent attention multi-scale transformer (RAMS-Trans), which uses the transformer's self-attention to recursively learn discriminative region attention in a multi-scale manner.
2 code implementations • ICLR 2022 • Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang
However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers also perform well on every adversarial example queried by students.
1 code implementation • 31 May 2021 • Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Gang Niu, Lizhen Cui, Masashi Sugiyama
First, we thoroughly investigate noisy label (NL) injection into AT's inner maximization and outer minimization, respectively, and obtain observations on when NL injection benefits AT.
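A minimal sketch of one injection site, randomly flipping a label before the inner maximization (a hypothetical helper, simplified relative to the paper):

```python
import random

def inject_noisy_label(y: int, num_classes: int, p: float = 0.2) -> int:
    # With probability p, replace the label by a uniformly random class
    # before generating the adversarial example for this sample.
    if random.random() < p:
        return random.randrange(num_classes)
    return y
```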
no code implementations • 15 Feb 2021 • Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama
To enhance adversarial robustness, adversarial training learns deep neural networks on the adversarial variants generated from their natural data.
2 code implementations • 10 Feb 2021 • Hanshu Yan, Jingfeng Zhang, Gang Niu, Jiashi Feng, Vincent Y. F. Tan, Masashi Sugiyama
By comparing non-robust (normally trained) and robustified (adversarially trained) models, we observe that adversarial training (AT) robustifies CNNs by aligning the channel-wise activations of adversarial data with those of their natural counterparts.
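One simple way to quantify this alignment, as a sketch (an assumed measurement, not the authors' exact analysis):

```python
import torch

@torch.no_grad()
def channel_alignment(feats_nat: torch.Tensor, feats_adv: torch.Tensor):
    # feats_*: (batch, channels, H, W) activations from the same CNN layer.
    m_nat = feats_nat.abs().mean(dim=(0, 2, 3))  # per-channel magnitude
    m_adv = feats_adv.abs().mean(dim=(0, 2, 3))
    # A correlation of 1.0 means channel-wise activations are fully aligned.
    return torch.corrcoef(torch.stack([m_nat, m_adv]))[0, 1]
```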
no code implementations • 6 Feb 2021 • Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama
A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps to successfully attack a point (i.e., find an adversarial example in its proximity) is an effective measure of the robustness of this point.
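A sketch of that measure for a single example (batch of size 1), assuming an L-infinity PGD attack (the standard settings below are assumptions):

```python
import torch
import torch.nn.functional as F

def pgd_steps_to_attack(model, x, y, eps=8/255, alpha=2/255, max_steps=10):
    # Count PGD steps until the model is fooled; fewer steps indicate a
    # less robust (closer-to-the-boundary) point.
    x_adv = x.clone().detach()
    for step in range(1, max_steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
        if (model(x_adv).argmax(dim=1) != y).item():  # attack succeeded
            return step
    return max_steps + 1  # not fooled within the step budget
```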
1 code implementation • 3 Feb 2021 • Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama
In adversarial training (AT), the main focus has been on the objective and the optimizer, while the model has been less studied, so the models being used are still the classic ones from standard training (ST).
2 code implementations • 22 Oct 2020 • Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama
However, it has been shown that the MMD test is unaware of adversarial attacks: it fails to detect the discrepancy between natural and adversarial data.
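For reference, a minimal (biased) Gaussian-kernel MMD statistic, the quantity such a test thresholds; the fixed bandwidth here is a simplification:

```python
import torch

def mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Biased squared-MMD estimate between samples x and y (rows = examples).
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```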
2 code implementations • ICLR 2021 • Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli
This belief was challenged by recent studies showing that we can maintain robustness while improving accuracy.
no code implementations • 15 Jun 2020 • Chen Chen, Jingfeng Zhang, Anthony K. H. Tung, Mohan Kankanhalli, Gang Chen
We argue that the key to Byzantine detection is monitoring the gradients of the clients' model parameters.
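As one simple instantiation of that idea (an assumed detector, not the paper's method), clients whose gradients sit far from the coordinate-wise median can be flagged:

```python
import numpy as np

def flag_suspects(grads: np.ndarray, thresh: float = 3.0) -> np.ndarray:
    # grads: (n_clients, dim) gradients reported in one round.
    med = np.median(grads, axis=0)              # coordinate-wise median
    dist = np.linalg.norm(grads - med, axis=1)  # distance to the median
    mad = np.median(np.abs(dist - np.median(dist))) + 1e-12
    return dist > thresh * mad                  # mask of suspected clients
```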
no code implementations • 22 Apr 2020 • Jingfeng Zhang, Cheng Li, Antonio Robles-Kelly, Mohan Kankanhalli
When federated learning is adopted among competing agents with siloed datasets, the agents are self-interested and participate only if they are fairly rewarded.
1 code implementation • ICML 2020 • Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli
Adversarial training based on the minimax formulation is necessary for obtaining adversarial robustness of trained models.
no code implementations • 20 Nov 2019 • Jingfeng Zhang, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama
Deep neural networks (DNNs) are incredibly brittle due to adversarial examples.
no code implementations • 28 Feb 2019 • Jingfeng Zhang, Bo Han, Laura Wynter, Kian Hsiang Low, Mohan Kankanhalli
Our analytical studies reveal that the step factor h in the Euler method is able to control the robustness of ResNet in both its training and generalization.
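To make the Euler view concrete, a minimal sketch of a residual block read as one Euler step x + h * f(x), with h exposed as the knob in question (illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

class EulerResBlock(nn.Module):
    # One residual block viewed as an Euler step: x_{t+1} = x_t + h * f(x_t).
    def __init__(self, dim: int, h: float = 0.1):
        super().__init__()
        self.h = h
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                               nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.h * self.f(x)
```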
no code implementations • 27 Sep 2018 • Jingfeng Zhang, Laura Wynter
Recent work has studied the reasons for the remarkable performance of deep neural networks in image classification.