Search Results for author: Yuren Mao

Found 11 papers, 4 papers with code

MetaWeighting: Learning to Weight Tasks in Multi-Task Learning

no code implementations • Findings (ACL) 2022 • Yuren Mao, Zekai Wang, Weiwei Liu, Xuemin Lin, Pengtao Xie

Task weighting, which assigns a weight to each task during training, significantly affects the performance of Multi-task Learning (MTL); consequently, it has recently attracted intense interest.

Multi-Task Learning • text-classification +1
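The core object in task weighting is a weighted sum of per-task losses, with the weights themselves adapted during training (as MetaWeighting proposes to learn them). A minimal sketch of that objective, with all function names illustrative rather than taken from the paper:

```python
# Hypothetical sketch of task weighting in multi-task learning (MTL):
# the training objective is a weighted combination of per-task losses.
# Names and the normalization convention are illustrative assumptions.

def normalize_weights(raw_weights):
    """Keep weights positive and summing to 1 (a common convention)."""
    total = sum(raw_weights)
    return [w / total for w in raw_weights]

def weighted_mtl_loss(task_losses, weights):
    """Combine per-task losses into one training objective."""
    assert len(task_losses) == len(weights)
    return sum(w * l for w, l in zip(weights, task_losses))

weights = normalize_weights([1.0, 3.0])        # -> [0.25, 0.75]
loss = weighted_mtl_loss([2.0, 4.0], weights)  # 0.25*2 + 0.75*4 = 3.5
```

How the weights are chosen is exactly what separates MTL methods; a fixed uniform weighting is the baseline that learned-weighting approaches improve on.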

Adaptive Adversarial Multi-task Representation Learning

no code implementations • ICML 2020 • Yuren Mao, Weiwei Liu, Xuemin Lin

Adversarial Multi-task Representation Learning (AMTRL) methods are able to boost the performance of Multi-task Representation Learning (MTRL) models.

Representation Learning

FIT-RAG: Black-Box RAG with Factual Information and Token Reduction

no code implementations • 21 Mar 2024 • Yuren Mao, XueMei Dong, Wenyi Xu, Yunjun Gao, Bin Wei, Ying Zhang

Simply concatenating all the retrieved documents introduces large numbers of unnecessary tokens for LLMs, which degrades the efficiency of black-box RAG.

Open-Domain Question Answering • Retrieval +2
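The token-reduction idea in the snippet above can be sketched as a budgeted filter over retrieved documents: keep only documents judged useful, instead of concatenating everything into the prompt. The scoring, tokenizer stand-in, and budget below are illustrative assumptions, not FIT-RAG's actual components:

```python
# Hypothetical sketch of token reduction in black-box RAG: rather than
# concatenating every retrieved document, greedily keep the highest-
# scored ones while staying under a token budget.

def count_tokens(text):
    # crude whitespace tokenizer standing in for a real LLM tokenizer
    return len(text.split())

def select_documents(docs, scores, token_budget):
    """Greedily keep the highest-scored docs that fit the budget."""
    ranked = sorted(zip(scores, docs), reverse=True)
    kept, used = [], 0
    for score, doc in ranked:
        cost = count_tokens(doc)
        if used + cost <= token_budget:
            kept.append(doc)
            used += cost
    return kept

docs = ["the capital of France is Paris",
        "unrelated filler text here",
        "Paris hosts the Louvre museum"]
scores = [0.9, 0.1, 0.7]
prompt_docs = select_documents(docs, scores, token_budget=12)
# The two relevant documents fit the budget; the filler is dropped.
```

The point of operating in a black-box setting is that only the prompt is controllable, so all savings must come from what gets retrieved and selected, not from changing the LLM itself.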

FinSQL: Model-Agnostic LLMs-based Text-to-SQL Framework for Financial Analysis

no code implementations • 19 Jan 2024 • Chao Zhang, Yuren Mao, Yijiang Fan, Yu Mi, Yunjun Gao, Lu Chen, Dongfang Lou, Jinshu Lin

Text-to-SQL, which provides a zero-code interface for operating relational databases, has gained much attention in financial analysis, because financial professionals may not be well versed in SQL programming.

Language Modelling • Large Language Model +1

MultiEM: Efficient and Effective Unsupervised Multi-Table Entity Matching

1 code implementation • 2 Aug 2023 • Xiaocan Zeng, Pengfei Wang, Yuren Mao, Lu Chen, Xiaoze Liu, Yunjun Gao

Traditional unsupervised EM assumes that all entities come from two tables; however, it is more common to match entities from multiple tables in practical applications, that is, multi-table entity matching (multi-table EM).

Management
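The shift from two-table to multi-table entity matching described above can be illustrated as grouping equivalent records across all table pairs rather than aligning exactly two tables. The similarity function below is a toy stand-in, not MultiEM's actual pipeline:

```python
# Hypothetical sketch of multi-table entity matching (multi-table EM):
# with more than two tables, candidate matches come from every pair of
# tables. The similarity test here is an illustrative toy.

from itertools import combinations

def similar(a, b):
    # toy similarity: case-insensitive exact name match
    return a["name"].lower() == b["name"].lower()

def match_across_tables(tables):
    """Return cross-table pairs of records judged to be the same entity."""
    matches = []
    for (i, t1), (j, t2) in combinations(enumerate(tables), 2):
        for r1 in t1:
            for r2 in t2:
                if similar(r1, r2):
                    matches.append((i, r1["id"], j, r2["id"]))
    return matches

tables = [
    [{"id": "a1", "name": "Apple Inc."}],
    [{"id": "b1", "name": "apple inc."}],
    [{"id": "c1", "name": "Microsoft"}],
]
pairs = match_across_tables(tables)
# → [(0, 'a1', 1, 'b1')]
```

Note the quadratic blow-up in table pairs: this brute-force comparison is exactly what efficient unsupervised methods aim to avoid at scale.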

C3: Zero-shot Text-to-SQL with ChatGPT

1 code implementation • 14 Jul 2023 • XueMei Dong, Chao Zhang, Yuhang Ge, Yuren Mao, Yunjun Gao, Lu Chen, Jinshu Lin, Dongfang Lou

This paper proposes a ChatGPT-based zero-shot Text-to-SQL method, dubbed C3, which achieves 82.3% execution accuracy on the holdout test set of Spider and becomes the state-of-the-art zero-shot Text-to-SQL method on the Spider Challenge.

Text-To-SQL
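The execution-accuracy metric reported above treats a predicted SQL query as correct when running it returns the same result set as the gold query. A minimal sketch of that check, using sqlite3 as a stand-in for a real evaluation harness:

```python
# Hypothetical sketch of the execution-accuracy check used by Text-to-SQL
# benchmarks such as Spider: compare result sets, not query strings.

import sqlite3

def execution_match(db, predicted_sql, gold_sql):
    """True iff both queries return the same rows (order-insensitive)."""
    pred = sorted(db.execute(predicted_sql).fetchall())
    gold = sorted(db.execute(gold_sql).fetchall())
    return pred == gold

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE emp (name TEXT, dept TEXT)")
db.executemany("INSERT INTO emp VALUES (?, ?)",
               [("ann", "hr"), ("bob", "it")])

ok = execution_match(db,
                     "SELECT name FROM emp WHERE dept = 'it'",
                     "SELECT name FROM emp WHERE dept = 'it' ORDER BY name")
# → True: syntactically different queries, identical result set
```

Comparing results rather than SQL text is what makes the metric robust to superficial differences (aliasing, ordering clauses) between the prediction and the gold query.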

Knowledge-refined Denoising Network for Robust Recommendation

1 code implementation • 28 Apr 2023 • Xinjun Zhu, Yuntao Du, Yuren Mao, Lu Chen, Yujia Hu, Yunjun Gao

Knowledge graphs (KGs), which contain rich side information, have become an essential means of boosting recommendation performance and improving its explainability.

Denoising • Knowledge-Aware Recommendation +1

SoLar: Sinkhorn Label Refinery for Imbalanced Partial-Label Learning

1 code implementation • 21 Sep 2022 • Haobo Wang, Mingxuan Xia, Yixuan Li, Yuren Mao, Lei Feng, Gang Chen, Junbo Zhao

Partial-label learning (PLL) is a peculiar weakly-supervised learning task where each training sample is associated with a set of candidate labels instead of a single ground truth.

Partial Label Learning • Weakly-supervised Learning
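The PLL setup described above can be made concrete with a small sketch: each sample carries a candidate-label set containing the hidden ground truth, and a common starting point assigns a soft label distribution restricted to those candidates (which label-refinery methods like SoLar then iteratively sharpen). Names here are illustrative:

```python
# Hypothetical sketch of the partial-label learning (PLL) setup: the
# supervision for a sample is a candidate set, one member of which is
# the unknown true label. A uniform distribution over candidates is a
# standard initialization that refinement methods then improve.

def uniform_candidate_distribution(candidates, num_classes):
    """Spread probability mass uniformly over the candidate labels."""
    p = [0.0] * num_classes
    for c in candidates:
        p[c] = 1.0 / len(candidates)
    return p

# A sample whose true label is unknown but lies in {0, 2}:
dist = uniform_candidate_distribution({0, 2}, num_classes=4)
# → [0.5, 0.0, 0.5, 0.0]
```

Class imbalance makes this initialization misleading, since rare classes get the same prior mass as common ones inside a candidate set; that is the failure mode the imbalanced-PLL setting targets.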

BanditMTL: Bandit-based Multi-task Learning for Text Classification

no code implementations • ACL 2021 • Yuren Mao, Zekai Wang, Weiwei Liu, Xuemin Lin, Wenbin Hu

Task variance regularization, which can be used to improve the generalization of Multi-task Learning (MTL) models, remains unexplored in multi-task text classification.

Multi-Task Learning • text-classification +1
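Task variance regularization, the quantity this paper targets, penalizes the spread of per-task losses so that no task lags far behind the others. A minimal sketch of such an objective, with the penalty weight as an illustrative hyperparameter (not BanditMTL's actual formulation):

```python
# Hypothetical sketch of task variance regularization in MTL: add a
# penalty on the variance of per-task losses to the average task loss.
# lambda_var is an illustrative hyperparameter.

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def regularized_mtl_loss(task_losses, lambda_var=0.1):
    """Average task loss plus a penalty on cross-task loss variance."""
    return mean(task_losses) + lambda_var * variance(task_losses)

# Same average loss, but the unbalanced setting is penalized more:
balanced = regularized_mtl_loss([2.0, 2.0])    # 2.0 + 0.1*0.0 = 2.0
unbalanced = regularized_mtl_loss([1.0, 3.0])  # 2.0 + 0.1*1.0 = 2.1
```

Minimizing this objective pushes the optimizer toward parameter settings where tasks progress at similar rates, which is the intuition behind using it to improve MTL generalization.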
