no code implementations • 18 Jan 2022 • Wu Zhang
In this paper, we introduce a divide-and-conquer algorithm to improve sentence alignment speed.
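A divide-and-conquer sentence aligner can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it picks the highest-similarity sentence pair as an anchor, then recursively aligns the sentences on each side of the anchor, so each subproblem is much smaller than full dynamic programming over all pairs. The Jaccard word-overlap similarity and the toy sentences are assumptions for the sketch.

```python
# Hedged sketch of divide-and-conquer sentence alignment (illustrative,
# not the paper's method): anchor on the best-matching pair, then recurse
# on the left and right halves independently.
def similarity(a, b):
    """Jaccard word overlap between two sentences (toy similarity)."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(1, len(sa | sb))

def align(src, tgt):
    """Return monotone (source, target) sentence pairs."""
    if not src or not tgt:
        return []
    # Choose the highest-similarity pair as the split anchor.
    i, j = max(((i, j) for i in range(len(src)) for j in range(len(tgt))),
               key=lambda ij: similarity(src[ij[0]], tgt[ij[1]]))
    return (align(src[:i], tgt[:j]) + [(src[i], tgt[j])]
            + align(src[i + 1:], tgt[j + 1:]))

src = ["the cat sat", "dogs bark loud", "sun is hot"]
tgt = ["the cat sat down", "dogs bark very loud", "the sun is hot"]
pairs = align(src, tgt)
print(pairs)
```

Each anchor splits the remaining work in two, which is where the speedup over aligning every pair at once comes from.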
1 code implementation • 1 Jan 2022 • Yexin Duan, Junhua Zou, Xingyu Zhou, Wu Zhang, Jin Zhang, Zhisong Pan
Deep neural networks are vulnerable to adversarial examples, which can fool deep models by adding subtle perturbations.
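The idea of fooling a model with a subtle perturbation can be shown on a toy logistic classifier. This is a hedged sketch, not the paper's attack: the weights, input, and step size are all illustrative, and the perturbation is one signed-gradient step on the input (FGSM-style).

```python
import math

# Illustrative one-step adversarial perturbation on a toy logistic model
# y = sigmoid(w.x + b); all constants below are assumptions for the demo.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def perturb(x, w, b, y_true, eps):
    """Move x by eps * sign(gradient of the loss w.r.t. x)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # For cross-entropy loss, d(loss)/dx_i = (p - y_true) * w_i.
    grad = [(p - y_true) * wi for wi in w]
    return [xi + eps * ((g > 0) - (g < 0)) for xi, g in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x = [0.5, 0.2]                      # clean input, classified as class 1
x_adv = perturb(x, w, b, y_true=1, eps=0.5)
p_before = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_after = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
print(p_before, p_after)           # the prediction flips across 0.5
```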
no code implementations • 1 Sep 2021 • Yexin Duan, Jialin Chen, Xingyu Zhou, Junhua Zou, Zhengyun He, Jin Zhang, Wu Zhang, Zhisong Pan
An adversary can fool deep neural network object detectors by generating adversarial noise.
no code implementations • 16 Nov 2020 • Guannan Hu, Wu Zhang, Hu Ding, Wenhao Zhu
Catastrophic forgetting is a common destructive phenomenon in continual learning, where gradient-based neural networks trained on sequential tasks lose earlier knowledge; it differs markedly from forgetting in humans, who can learn and accumulate knowledge throughout their lives.
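Catastrophic forgetting can be demonstrated on the smallest possible "network". This toy demo is not from the paper: a single linear weight is trained by gradient descent on task A, then on task B, and the task-A loss collapses because the same weight is overwritten.

```python
# Toy illustration of catastrophic forgetting (assumed setup, not the
# paper's): one weight, two sequential regression tasks.
def train(w, data, lr=0.1, steps=200):
    """Plain gradient descent on squared error."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1.0, 2.0)]    # task A wants w = 2
task_b = [(1.0, -1.0)]   # task B wants w = -1

w = train(0.0, task_a)
loss_a_before = loss(w, task_a)   # near zero after learning task A
w = train(w, task_b)              # sequential training on task B
loss_a_after = loss(w, task_a)    # task A performance collapses
print(loss_a_before, loss_a_after)
```

The single parameter is pulled entirely toward the new task, which is the one-dimensional version of what happens to shared weights in deep networks.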
2 code implementations • 8 Jul 2020 • Junhua Zou, Yexin Duan, Boyu Li, Wu Zhang, Yu Pan, Zhisong Pan
The fast gradient sign attack series comprises popular methods for generating adversarial examples.
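The iterative member of this attack family can be sketched as follows. This is a hedged illustration, not the authors' method: it repeats small signed-gradient steps on a toy logistic model and projects the result back into an L-infinity ball of radius eps around the clean input (I-FGSM style); the model weights, eps, alpha, and step count are all assumptions.

```python
import math

# Hedged sketch of an iterative fast-gradient-sign attack on a toy
# logistic model; all constants are illustrative.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(g):
    return (g > 0) - (g < 0)

def i_fgsm(x, w, b, y_true, eps=0.5, alpha=0.1, steps=10):
    x_adv = list(x)
    for _ in range(steps):
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
        grad = [(p - y_true) * wi for wi in w]   # loss gradient w.r.t. input
        x_adv = [xa + alpha * sign(g) for xa, g in zip(x_adv, grad)]
        # Project back into the eps-ball around the clean input.
        x_adv = [min(max(xa, xi - eps), xi + eps) for xa, xi in zip(x_adv, x)]
    return x_adv

w, b = [2.0, -1.0], 0.0
x = [0.5, 0.2]
x_adv = i_fgsm(x, w, b, y_true=1)
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
print(p_clean, p_adv)
```

Taking several small steps instead of one large one is what distinguishes the iterative variants in the series from the original single-step attack.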