no code implementations • Findings (ACL) 2022 • Yuren Mao, Zekai Wang, Weiwei Liu, Xuemin Lin, Pengtao Xie
Task weighting, which assigns weights to the constituent tasks during training, significantly affects the performance of Multi-task Learning (MTL); consequently, it has recently attracted intense interest.
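The snippet above only names the idea; as a minimal sketch (not the paper's specific weighting scheme), a task-weighted MTL objective is simply a weighted sum of per-task losses, with the weights chosen or learned during training. The fixed weights below are illustrative placeholders.

```python
import torch

def weighted_mtl_loss(task_losses, task_weights):
    """Combine per-task losses into one training objective.

    task_losses:  list of scalar tensors, one loss per task
    task_weights: list of floats assigning a weight to each task
    """
    return sum(w * l for w, l in zip(task_weights, task_losses))

# Example: two tasks with hand-picked weights (the paper derives its own
# weighting strategy; these values are only for illustration).
loss_a = torch.tensor(0.7)
loss_b = torch.tensor(1.3)
total = weighted_mtl_loss([loss_a, loss_b], [0.6, 0.4])
```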
1 code implementation • 7 Jan 2024 • Yuheng Cheng, Ceyao Zhang, Zhengwen Zhang, Xiangrui Meng, Sirui Hong, Wenhao Li, ZiHao Wang, Zekai Wang, Feng Yin, Junhua Zhao, Xiuqiang He
Intelligent agents stand out as a potential path toward artificial general intelligence (AGI).
2 code implementations • 9 Feb 2023 • Zekai Wang, Tianyu Pang, Chao Du, Min Lin, Weiwei Liu, Shuicheng Yan
Under the $\ell_\infty$-norm threat model with $\epsilon=8/255$, our models achieve $70.69\%$ and $42.67\%$ robust accuracy on CIFAR-10 and CIFAR-100, respectively, i.e., improving upon previous state-of-the-art models by $+4.58\%$ and $+8.03\%$.
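For context on how such numbers are typically measured (the paper's own evaluation protocol is not shown here), the sketch below estimates robust accuracy under an $\ell_\infty$ PGD attack with $\epsilon=8/255$; `model` and `loader` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard l_inf PGD: perturb x within an eps-ball to maximize the loss."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader, eps=8/255):
    """Fraction of test examples still classified correctly after the attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total
```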
no code implementations • 19 Oct 2022 • Zekai Wang, Stavros Stavrakis, Bing Yao
In this paper, we propose a two-level hierarchical deep learning framework with a Generative Adversarial Network (GAN) for automatic diagnosis of ECG signals.
no code implementations • Nature 2022 • Ruiqi Guo, Fanping Sui, Wei Yue, Zekai Wang, Sedat Pala, Kunying Li, Renxiao Xu, Liwei Lin
With reasonable training, our deep learning neural network becomes a high-speed, high-accuracy calculator: it can identify the flexural mode frequency and the quality factor 4.6 × 10 times and 2.6 × 10 times faster, respectively, than conventional numerical simulation packages, with good accuracies of 98.8 ± 1.6% and 96.8 ± 3.1%, respectively.
no code implementations • 15 Nov 2021 • Yuyang Sun, Zhiyong Zhang, Changzhen Qiu, Liang Wang, Zekai Wang
With the rapid development of generative models, AI-based face manipulation technology, known as DeepFakes, has become increasingly realistic.
no code implementations • ACL 2021 • Yuren Mao, Zekai Wang, Weiwei Liu, Xuemin Lin, Wenbin Hu
Task variance regularization, which can be used to improve the generalization of Multi-task Learning (MTL) models, remains unexplored in multi-task text classification.
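As a minimal sketch of the general idea (not the paper's exact regularizer), task variance regularization can be read as penalizing the spread of per-task losses alongside their mean, so that no single task dominates training; `lam` is a hypothetical trade-off coefficient.

```python
import torch

def variance_regularized_loss(task_losses, lam=0.1):
    """Mean of per-task losses plus a penalty on their variance.

    High variance means some tasks dominate training; penalizing it
    encourages balanced progress across tasks. `lam` is illustrative,
    not a value taken from the paper.
    """
    losses = torch.stack(task_losses)
    return losses.mean() + lam * losses.var(unbiased=False)

# Example with three text-classification task losses.
total = variance_regularized_loss(
    [torch.tensor(0.9), torch.tensor(1.1), torch.tensor(0.4)], lam=0.1
)
```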