no code implementations • 25 Mar 2024 • Weimin Lyu, Xiao Lin, Songzhu Zheng, Lu Pang, Haibin Ling, Susmit Jha, Chao Chen
Textual backdoor attacks pose significant security threats.
no code implementations • 23 Oct 2023 • Weimin Lyu, Songzhu Zheng, Lu Pang, Haibin Ling, Chao Chen
Recent studies have revealed that backdoor attacks can threaten the safety of natural language processing (NLP) models.
no code implementations • 25 Sep 2023 • Yikai Zhang, Songzhu Zheng, Mina Dalirrooyfard, Pengxiang Wu, Anderson Schneider, Anant Raj, Yuriy Nevmyvaka, Chao Chen
Learning and decision-making in domains with a naturally high noise-to-signal ratio, such as finance or healthcare, are often challenging, and the stakes are very high.
1 code implementation • 21 Jul 2023 • Jiachen Yao, Yikai Zhang, Songzhu Zheng, Mayank Goswami, Prateek Prasanna, Chao Chen
However, segmentation label noise usually exhibits strong spatial correlation and a prominent bias in its distribution.
no code implementations • 9 Aug 2022 • Weimin Lyu, Xinyu Dong, Rachel Wong, Songzhu Zheng, Kayley Abell-Hart, Fusheng Wang, Chao Chen
Deep-learning-based clinical decision support using structured electronic health records (EHR) has been an active research area for predicting risks of mortality and diseases.
no code implementations • 9 Aug 2022 • Weimin Lyu, Songzhu Zheng, Tengfei Ma, Haibin Ling, Chao Chen
Trojan attacks pose a severe threat to AI systems.
1 code implementation • NAACL 2022 • Weimin Lyu, Songzhu Zheng, Tengfei Ma, Chao Chen
Trojan attacks raise serious security concerns.
no code implementations • 29 Sep 2021 • Yikai Zhang, Songzhu Zheng, Pengxiang Wu, Yuriy Nevmyvaka, Chao Chen
Learning and decision-making in domains with naturally high noise-to-signal ratios, such as finance or public health, can be challenging yet extremely important.
no code implementations • NeurIPS 2021 • Songzhu Zheng, Yikai Zhang, Hubert Wagner, Mayank Goswami, Chao Chen
Deep neural networks are known to have security issues.
1 code implementation • ICLR 2021 • Yikai Zhang, Songzhu Zheng, Pengxiang Wu, Mayank Goswami, Chao Chen
Label noise is frequently observed in real-world large-scale datasets.
Ranked #12 on Learning with noisy labels on ANIMAL
1 code implementation • NeurIPS 2020 • Pengxiang Wu, Songzhu Zheng, Mayank Goswami, Dimitris Metaxas, Chao Chen
Noisy labels can impair the performance of deep neural networks.
3 code implementations • ICML 2020 • Songzhu Zheng, Pengxiang Wu, Aman Goswami, Mayank Goswami, Dimitris Metaxas, Chao Chen
To be robust against label noise, many successful methods rely on noisy classifiers (i.e., models trained on the noisy training data) to determine whether a label is trustworthy.
Ranked #40 on Image Classification on Clothing1M
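The idea above — letting the noisy classifier itself judge whether a given label is trustworthy — can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: the fixed confidence threshold and the argmax-based correction here are illustrative placeholders, whereas the ICML 2020 paper derives a theoretically grounded, error-bounded correction criterion.

```python
def flag_untrusted_labels(probs, labels, threshold=0.5):
    """Flag labels to which the (noisy) classifier assigns low probability,
    and correct flagged labels to the classifier's top prediction.

    probs:  list of per-sample class-probability lists (softmax outputs)
    labels: list of given (possibly noisy) integer class labels
    threshold: illustrative cutoff -- NOT the criterion used in the paper
    """
    untrusted, corrected = [], []
    for p, y in zip(probs, labels):
        bad = p[y] < threshold  # classifier puts little mass on the given label
        untrusted.append(bad)
        # replace an untrusted label with the classifier's argmax class
        corrected.append(max(range(len(p)), key=p.__getitem__) if bad else y)
    return untrusted, corrected

# toy example: 3 samples, 2 classes; the model disagrees with labels 2 and 3
probs = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]
labels = [0, 0, 1]
untrusted, corrected = flag_untrusted_labels(probs, labels)
# untrusted -> [False, True, True]; corrected -> [0, 1, 0]
```

The sketch shows why a noisy classifier is still useful: even when trained on corrupted data, its confidence on a sample's given label is informative about whether that label is correct.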
no code implementations • 25 Sep 2019 • Songzhu Zheng, Pengxiang Wu, Aman Goswami, Mayank Goswami, Dimitris Metaxas, Chao Chen
To collect large-scale annotated data, it is inevitable to introduce label noise, i.e., incorrect class labels.