1 code implementation • 14 Sep 2023 • Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
Many efforts have been made to develop intelligent agents, but they mainly focus on advancements in algorithms or training strategies to enhance specific capabilities or performance on particular tasks.
1 code implementation • 11 Jul 2023 • Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
Therefore, we explore PPO-max, an advanced version of the PPO algorithm, to efficiently improve the training stability of the policy model.
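The snippet only names PPO-max in passing; for context, below is a minimal sketch of the standard PPO clipped surrogate objective that PPO-max builds on. The tensor names, clip range, and dummy data are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the PPO clipped surrogate objective (illustrative only).
import torch

def ppo_clip_loss(log_probs: torch.Tensor,
                  old_log_probs: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    # Probability ratio between the current policy and the rollout policy.
    ratio = torch.exp(log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Maximize the clipped surrogate, i.e. minimize its negation.
    return -torch.min(unclipped, clipped).mean()

# Example usage with dummy per-token values (hypothetical data).
if __name__ == "__main__":
    lp = torch.randn(8, requires_grad=True)
    old_lp = lp.detach() + 0.05 * torch.randn(8)
    adv = torch.randn(8)
    loss = ppo_clip_loss(lp, old_lp, adv)
    loss.backward()
    print(loss.item())
```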
1 code implementation • 27 Jun 2023 • Songyang Gao, Shihan Dou, Qi Zhang, Xuanjing Huang, Jin Ma, Ying Shan
Detecting adversarial samples that are carefully crafted to fool the model is a critical step toward socially secure applications.
1 code implementation • 27 Jun 2023 • Songyang Gao, Shihan Dou, Yan Liu, Xiao Wang, Qi Zhang, Zhongyu Wei, Jin Ma, Ying Shan
Adversarial training is one of the best-performing methods in improving the robustness of deep language models.
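As a reference point for what adversarial training of language models commonly looks like, here is a hedged, FGM-style sketch that perturbs the word-embedding matrix; the function name, arguments, and perturbation scheme are generic assumptions, not the specific method proposed in this paper.

```python
# Generic embedding-level adversarial training step (FGM-style), sketch only.
import torch

def fgm_adversarial_step(model, embedding: torch.nn.Parameter,
                         loss_fn, batch, labels, epsilon: float = 1.0):
    # 1) Clean pass: obtain gradients w.r.t. the word-embedding matrix.
    loss = loss_fn(model(batch), labels)
    loss.backward()
    grad = embedding.grad.detach()
    norm = grad.norm()
    if norm > 0:
        delta = epsilon * grad / norm
        embedding.data.add_(delta)                # 2) perturb embeddings (the "attack")
        adv_loss = loss_fn(model(batch), labels)
        adv_loss.backward()                       # 3) accumulate robust gradients
        embedding.data.sub_(delta)                # 4) restore original embeddings
    return loss.detach()
```

The optimizer step would then be taken outside this function, using the combined clean and adversarial gradients.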
no code implementations • 4 May 2023 • Songyang Gao, Shihan Dou, Junjie Shan, Qi Zhang, Xuanjing Huang
Dataset bias, i.e., the over-reliance on dataset-specific literal heuristics, is receiving increasing attention for its detrimental effect on the generalization ability of NLU models.
1 code implementation • 14 Oct 2022 • Songyang Gao, Shihan Dou, Qi Zhang, Xuanjing Huang
Dataset bias has attracted increasing attention recently for its detrimental effect on the generalization ability of fine-tuned models.
1 code implementation • International Conference on Software Engineering 2022 • Yueming Wu, Deqing Zou, Shihan Dou, Wei Yang, Duo Xu, Hai Jin
Furthermore, we conduct a case study on more than 25 million lines of code, and the result indicates that VulCNN has the ability to detect vulnerabilities at large scale.
1 code implementation • ACL 2022 • Xiao Wang, Shihan Dou, Limao Xiong, Yicheng Zou, Qi Zhang, Tao Gui, Liang Qiao, Zhanzhan Cheng, Xuanjing Huang
NER models have achieved promising performance on standard NER benchmarks.
Ranked #8 on Named Entity Recognition (NER) on WNUT 2017
2 code implementations • COLING 2022 • Shihan Dou, Rui Zheng, Ting Wu, Songyang Gao, Junjie Shan, Qi Zhang, Yueming Wu, Xuanjing Huang
Most of the existing debiasing methods often identify and weaken these samples with biased features (i.e., superficial surface features that cause such spurious correlations).
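To make that description concrete, below is a hedged sketch of the kind of reweighting such existing debiasing methods use: a bias-only model's confidence on the gold label down-weights examples that the biased features already explain. The function name and weighting rule are illustrative assumptions, not this paper's approach.

```python
# Example reweighting with a bias-only model (illustrative sketch).
import torch
import torch.nn.functional as F

def debiased_loss(main_logits: torch.Tensor,
                  bias_logits: torch.Tensor,
                  labels: torch.Tensor) -> torch.Tensor:
    # Confidence of the bias-only model on the correct class.
    bias_probs = F.softmax(bias_logits, dim=-1)
    p_bias = bias_probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    # Down-weight examples the bias model already gets right.
    weights = 1.0 - p_bias
    per_example = F.cross_entropy(main_logits, labels, reduction="none")
    return (weights * per_example).mean()
```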