no code implementations • ICML 2020 • Yuren Mao, Weiwei Liu, Xuemin Lin
Adversarial Multi-task Representation Learning (AMTRL) methods are able to boost the performance of Multi-task Representation Learning (MTRL) models.
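For intuition only, the sketch below shows one common way adversarial multi-task representation learning is set up (a shared encoder trained against a task discriminator through a gradient-reversal layer); the architecture and layer sizes are illustrative assumptions, not necessarily the formulation used in this paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        return -grad

class AMTRLSketch(nn.Module):
    """Shared encoder + per-task heads + adversarial task discriminator.

    The discriminator tries to predict which task an example came from;
    the reversed gradient pushes the shared encoder toward more
    task-invariant (hence more transferable) representations.
    """
    def __init__(self, in_dim=128, hid=64, num_tasks=2, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(hid, num_classes) for _ in range(num_tasks))
        self.discriminator = nn.Linear(hid, num_tasks)

    def forward(self, x, task_id):
        z = self.encoder(x)
        task_logits = self.heads[task_id](z)
        adv_logits = self.discriminator(GradReverse.apply(z))
        return task_logits, adv_logits
```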
no code implementations • Findings (ACL) 2022 • Yuren Mao, Zekai Wang, Weiwei Liu, Xuemin Lin, Pengtao Xie
Task weighting, which assigns weights to the constituent tasks during training, significantly affects the performance of Multi-task Learning (MTL); consequently, it has recently attracted intense interest.
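For concreteness, task weighting amounts to combining the per-task losses through a weighted sum; the minimal sketch below uses fixed, hand-picked weights, whereas how to choose or adapt such weights is exactly what task-weighting methods (including this work) address.

```python
import torch

def weighted_mtl_loss(task_losses, task_weights):
    """Combine per-task losses into one training objective.

    task_losses:  list of scalar loss tensors, one per task
    task_weights: list of floats; how these are set is what
                  task-weighting methods differ on
    """
    return sum(w * l for w, l in zip(task_weights, task_losses))

# Example: two tasks with hand-picked weights (a fixed baseline, not a learned scheme).
losses = [torch.tensor(0.8), torch.tensor(1.4)]
print(weighted_mtl_loss(losses, [0.7, 0.3]))  # tensor(0.9800)
```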
no code implementations • 21 Mar 2024 • Yuren Mao, XueMei Dong, Wenyi Xu, Yunjun Gao, Bin Wei, Ying Zhang
Simply concatenating all the retrieved documents feeds large amounts of unnecessary tokens to LLMs, which degrades the efficiency of black-box RAG.
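The inefficiency stems from the naive prompt construction sketched below: every retrieved document is appended verbatim, so the prompt length, and hence the cost of each black-box LLM call, grows linearly with the number of retrieved documents. The function names and the whitespace token count are illustrative stand-ins.

```python
def build_naive_rag_prompt(question, retrieved_docs):
    """Naive black-box RAG: concatenate every retrieved document verbatim."""
    context = "\n\n".join(retrieved_docs)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def rough_token_count(text):
    """Crude whitespace proxy for the tokens billed by a black-box LLM API."""
    return len(text.split())

docs = ["passage one ..."] * 20          # many retrieved passages
prompt = build_naive_rag_prompt("What is black-box RAG?", docs)
print(rough_token_count(prompt))          # grows linearly with len(docs)
```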
no code implementations • 19 Jan 2024 • Chao Zhang, Yuren Mao, Yijiang Fan, Yu Mi, Yunjun Gao, Lu Chen, Dongfang Lou, Jinshu Lin
Text-to-SQL, which provides a zero-code interface for operating relational databases, has gained much attention in financial analysis because financial professionals may not be well skilled in SQL programming.
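As a rough illustration of that zero-code interface, a Text-to-SQL system turns a natural-language question plus the table schema into an executable SQL query; the prompt template and the `call_llm` helper below are hypothetical placeholders, not this paper's pipeline.

```python
def build_text_to_sql_prompt(schema_ddl, question):
    """Assemble a simple zero-shot Text-to-SQL prompt (illustrative template)."""
    return (
        "Given the database schema below, write one SQL query that answers "
        "the question.\n\n"
        f"{schema_ddl}\n\n"
        f"Question: {question}\nSQL:"
    )

schema = "CREATE TABLE trades(ticker TEXT, price REAL, traded_at DATE);"
question = "What was the average price of AAPL trades in March 2023?"
prompt = build_text_to_sql_prompt(schema, question)
# `call_llm` stands in for any black-box LLM endpoint (hypothetical helper).
# sql = call_llm(prompt)
print(prompt)
```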
1 code implementation • 2 Aug 2023 • Xiaocan Zeng, Pengfei Wang, Yuren Mao, Lu Chen, Xiaoze Liu, Yunjun Gao
Traditional unsupervised EM assumes that all entities come from two tables; however, in practical applications it is more common to match entities across multiple tables, that is, multi-table entity matching (multi-table EM).
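A minimal way to picture the multi-table setting: candidate pairs are drawn from every pair of tables rather than from a single table pair, as in the toy sketch below; the token-overlap similarity is a stand-in, not the matching model proposed here.

```python
from itertools import combinations

def jaccard(a, b):
    """Toy token-overlap similarity between two record strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def multi_table_matches(tables, threshold=0.5):
    """Match entities across every pair of tables, not just two of them."""
    matches = []
    for (name_i, recs_i), (name_j, recs_j) in combinations(tables.items(), 2):
        for ri in recs_i:
            for rj in recs_j:
                if jaccard(ri, rj) >= threshold:
                    matches.append((name_i, ri, name_j, rj))
    return matches

tables = {
    "A": ["apple iphone 13 128gb"],
    "B": ["iphone 13 128gb apple"],
    "C": ["galaxy s22 samsung"],
}
print(multi_table_matches(tables))
```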
1 code implementation • 14 Jul 2023 • XueMei Dong, Chao Zhang, Yuhang Ge, Yuren Mao, Yunjun Gao, Lu Chen, Jinshu Lin, Dongfang Lou
This paper proposes a ChatGPT-based zero-shot Text-to-SQL method, dubbed C3, which achieves 82.3% execution accuracy on the holdout test set of Spider and becomes the state-of-the-art zero-shot Text-to-SQL method on the Spider Challenge.
Ranked #4 on Text-to-SQL on Spider
1 code implementation • 28 Apr 2023 • Xinjun Zhu, Yuntao Du, Yuren Mao, Lu Chen, Yujia Hu, Yunjun Gao
Knowledge graphs (KGs), which contain rich side information, have become an essential component for boosting recommendation performance and improving its explainability.
no code implementations • 3 Apr 2023 • Minjun Zhao, Yichen Yin, Yuren Mao, Qing Liu, Lu Chen, Yunjun Gao
Recently, a few methods have been put forward to handle the SGA dilemma.
1 code implementation • 21 Sep 2022 • Haobo Wang, Mingxuan Xia, Yixuan Li, Yuren Mao, Lei Feng, Gang Chen, Junbo Zhao
Partial-label learning (PLL) is a peculiar weakly-supervised learning task where the training samples are generally associated with a set of candidate labels instead of a single ground-truth label.
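Concretely, each PLL training example carries a set of candidate labels; the sketch below shows a simple training signal that averages the cross-entropy over the candidates, which is a common PLL baseline rather than the method proposed in this paper.

```python
import torch
import torch.nn.functional as F

def average_candidate_loss(logits, candidate_mask):
    """Baseline PLL loss: average negative log-likelihood over candidate labels.

    logits:         (batch, num_classes) model outputs
    candidate_mask: (batch, num_classes) 1 where a class is a candidate label
    """
    log_probs = F.log_softmax(logits, dim=-1)
    per_example = -(log_probs * candidate_mask).sum(-1) / candidate_mask.sum(-1)
    return per_example.mean()

logits = torch.randn(4, 5)
mask = torch.tensor([[1, 1, 0, 0, 0],   # sample 0: candidate labels {0, 1}
                     [0, 1, 1, 1, 0],
                     [1, 0, 0, 0, 1],
                     [0, 0, 0, 1, 0]], dtype=torch.float)
print(average_candidate_loss(logits, mask))
```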
no code implementations • ACL 2021 • Yuren Mao, Zekai Wang, Weiwei Liu, Xuemin Lin, Wenbin Hu
Task variance regularization, which can be used to improve the generalization of Multi-task Learning (MTL) models, remains unexplored in multi-task text classification.
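As a rough illustration, task variance regularization adds a penalty on how unevenly the individual tasks are being fit; the sketch below combines the mean of the per-task losses with a weighted variance term, with the weight `lam` as an illustrative hyperparameter rather than this paper's exact objective.

```python
import torch

def variance_regularized_mtl_loss(task_losses, lam=0.1):
    """Mean of per-task losses plus a penalty on their variance.

    Penalizing the spread of task losses discourages the model from
    fitting some tasks well while neglecting others.
    """
    losses = torch.stack(task_losses)
    return losses.mean() + lam * losses.var(unbiased=False)

losses = [torch.tensor(0.9), torch.tensor(0.5), torch.tensor(1.3)]
print(variance_regularized_mtl_loss(losses))
```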
no code implementations • ACL 2020 • Yuren Mao, Shuang Yun, Weiwei Liu, Bo Du
Multi-task Learning methods have achieved great progress in text classification.