1 code implementation • 20 Apr 2024 • Haotian Xue, Yongxin Chen
We also find that PDMs can serve as an off-the-shelf purifier that effectively removes the adversarial patterns generated on LDMs to protect images, which means that most current protection methods, to some extent, cannot protect our images from malicious attacks.
1 code implementation • 2 Oct 2023 • Haotian Xue, Chumeng Liang, Xiaoyu Wu, Yongxin Chen
In this work, we present novel findings on attacking latent diffusion models (LDM) and propose new plug-and-play strategies for more effective protection.
1 code implementation • NeurIPS 2023 • Haotian Xue, Alexandre Araujo, Bin Hu, Yongxin Chen
Neural networks are known to be susceptible to adversarial samples: small variations of natural examples crafted to deliberately mislead the models.
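To make the definition above concrete, here is a minimal sketch of crafting such a small adversarial variation with a fast-gradient-sign step, on a hypothetical toy logistic-regression "model" (this toy setup is an assumption for illustration, not the paper's actual method or threat model):

```python
import numpy as np

# Toy "model": fixed logistic-regression weights and one natural example.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # model weights (kept fixed)
x = rng.normal(size=8)   # a "natural example"
y = 1.0                  # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x_):
    # Logistic (cross-entropy) loss of the model on input x_.
    p = sigmoid(w @ x_)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss with respect to the INPUT x (not the weights):
# dL/dx = (sigmoid(w @ x) - y) * w for this model.
grad_x = (sigmoid(w @ x) - y) * w

# Fast-gradient-sign step: a small perturbation (L-inf norm = eps)
# in the direction that increases the loss.
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

# The variation is small, yet the loss on the crafted sample is higher.
assert np.max(np.abs(x_adv - x)) <= eps + 1e-12
assert loss(x_adv) > loss(x)
```

For this linear toy model the signed step provably increases the loss; for deep networks the same first-order idea underlies many standard attacks.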
1 code implementation • 21 Oct 2022 • Shengyuan Hou, Jushi Kai, Haotian Xue, Bingyu Zhu, Bo Yuan, Longtao Huang, Xinbing Wang, Zhouhan Lin
Recent works have revealed that Transformers implicitly learn syntactic information in their lower layers from data, although this is highly dependent on the quality and scale of the training data.
1 code implementation • 11 Oct 2022 • Zirong Chen, Haotian Xue
Due to unfamiliarity with particular words (or proper nouns) for ingredients, non-native English speakers can be extremely confused by the ordering process at restaurants like Subway.
no code implementations • 28 Oct 2021 • Haotian Xue, Kaixiong Zhou, Tianlong Chen, Kai Guo, Xia Hu, Yi Chang, Xin Wang
In this paper, we investigate GNNs through the lens of weight and feature loss landscapes, i.e., the loss changes with respect to model weights and node features, respectively.
no code implementations • 29 Sep 2021 • Xu Cheng, Xin Wang, Haotian Xue, Zhengyang Liang, Xin Jin, Quanshi Zhang
This paper proposes a hypothesis to analyze the underlying reason for the cognitive difficulty of an image from two perspectives, i.e., a cognitive image usually makes a DNN strongly activated by cognitive concepts, and discarding massive non-cognitive concepts may also help the DNN focus on cognitive concepts.
no code implementations • 31 Jul 2021 • Xu Cheng, Xin Wang, Haotian Xue, Zhengyang Liang, Quanshi Zhang
This paper proposes a hypothesis for the aesthetic appreciation that aesthetic images make a neural network strengthen salient concepts and discard inessential concepts.
no code implementations • 20 Nov 2019 • Hao Zhang, Jiayi Chen, Haotian Xue, Quanshi Zhang
This paper proposes a set of criteria to evaluate the objectiveness of explanation methods for neural networks, which is crucial for the development of explainable AI but also presents significant challenges.