no code implementations • 28 Mar 2024 • Rihui Jin, Yu Li, Guilin Qi, Nan Hu, Yuan-Fang Li, Jiaoyan Chen, Jianan Wang, Yongrui Chen, Dehai Min
Table understanding (TU) has achieved promising advancements, but it faces the challenges of scarce manually labeled tables and complex table structures. To address these challenges, we propose HGT, a framework with a heterogeneous graph (HG)-enhanced large language model (LLM) for few-shot TU tasks. It aligns table semantics with the LLM's parametric knowledge through soft prompts and instruction tuning, and handles complex tables via a multi-task pre-training scheme involving three novel multi-granularity self-supervised HG pre-training objectives. We empirically demonstrate the effectiveness of HGT, showing that it outperforms the SOTA for few-shot complex TU on several benchmarks.
1 code implementation • 28 Mar 2024 • Yu Li, Shenyu Zhang, Rui Wu, Xiutian Huang, Yongrui Chen, Wenhao Xu, Guilin Qi, Dehai Min
Experimental results show that our framework outperforms existing open-ended text evaluation methods and achieves the highest correlation with human evaluation, confirming its effectiveness in addressing the uncertainty and instability of evaluating LLM-generated text.
no code implementations • 18 Mar 2024 • Shenyu Zhang, Yu Li, Rui Wu, Xiutian Huang, Yongrui Chen, Wenhao Xu, Guilin Qi
Automatic methods for evaluating machine-generated texts hold significant importance due to the expanding applications of generative systems.
no code implementations • 20 Feb 2024 • Dehai Min, Nan Hu, Rihui Jin, Nuo Lin, Jiaoyan Chen, Yongrui Chen, Yu Li, Guilin Qi, Yun Li, Nijun Li, Qianren Wang
Table-to-Text Generation is a promising solution by facilitating the transformation of hybrid data into a uniformly text-formatted corpus.
no code implementations • 18 Feb 2024 • Jiaqi Li, Miaozeng Du, Chuanyi Zhang, Yongrui Chen, Nan Hu, Guilin Qi, Haiyun Jiang, Siyuan Cheng, Bozhong Tian
Multimodal knowledge editing represents a critical advancement in enhancing the capabilities of Multimodal Large Language Models (MLLMs).
1 code implementation • 12 Oct 2023 • Jiaqi Li, Guilin Qi, Chuanyi Zhang, Yongrui Chen, Yiming Tan, Chenlong Xia, Ye Tian
First, we retrieve the relevant embedding from the knowledge graph using group relations in the metadata, and then integrate it with the other modalities.
no code implementations • 11 Sep 2023 • Yongrui Chen, Haiyun Jiang, Xinting Huang, Shuming Shi, Guilin Qi
High-quality instruction-tuning data is critical to improving LLM capabilities.
2 code implementations • 14 Mar 2023 • Yiming Tan, Dehai Min, Yu Li, Wenbo Li, Nan Hu, Yongrui Chen, Guilin Qi
ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge.
Ranked #1 on Knowledge Base Question Answering on WebQuestionsSP (Accuracy metric)
1 code implementation • 21 Nov 2022 • Yongrui Chen, Xinnan Guo, Tongtong Wu, Guilin Qi, Yang Li, Yang Dong
The first solution, Vanilla, performs self-training: it augments the supervised training data with pseudo-labeled instances predicted for the current task, and replaces full-volume retraining with episodic memory replay to balance training efficiency against performance on previous tasks.
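The self-training-with-replay loop described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `ToyModel` class, the sampling policy, and the memory size are all assumptions made to keep the sketch runnable.

```python
import random

class ToyModel:
    """Placeholder classifier, used only to make the sketch executable."""
    def predict(self, x):
        return 0  # dummy pseudo-label
    def fit(self, data):
        self.n_seen = len(data)  # record training-set size for illustration

def self_train_with_replay(model, labeled, unlabeled, memory, mem_per_task=2):
    # Pseudo-label the unlabeled instances of the current task.
    pseudo = [(x, model.predict(x)) for x in unlabeled]
    # Train on supervised data + pseudo-labels + replayed memory,
    # instead of full-volume retraining on all previous tasks.
    model.fit(labeled + pseudo + memory)
    # Keep a small episodic sample of the current task for future replay.
    memory.extend(random.sample(labeled, min(mem_per_task, len(labeled))))
    return model
```

In a real continual-learning setting the episodic memory is bounded and shared across tasks; the trade-off is between replay-buffer size (cost) and retention of earlier tasks (performance).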
no code implementations • 11 Oct 2022 • Tinghao Zhang, Zhijun Li, Yongrui Chen, Kwok-Yan Lam, Jun Zhao
A reinforcement learning (RL)-based DNN compression approach is used to generate, from the heavyweight model, a lightweight model suitable for the edge.
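The abstract does not specify the RL formulation, so the following is a deliberately simplified sketch of the general idea: an agent searches over per-layer pruning ratios, guided by a reward that trades model size against an accuracy proxy. The layer sizes, action set, reward function, and epsilon-greedy search are all illustrative assumptions.

```python
import random

layer_sizes = [64, 128, 256]   # hypothetical layer widths
actions = [0.25, 0.5, 0.75]    # candidate pruning ratios per layer

def reward(ratios):
    """Toy reward: penalize remaining size, favor keeping more weights."""
    size = sum(s * (1 - r) for s, r in zip(layer_sizes, ratios))
    acc_proxy = sum(1 - r for r in ratios) / len(ratios)
    return acc_proxy - 0.001 * size

def epsilon_greedy_search(episodes=200, eps=0.2, seed=0):
    """Epsilon-greedy search over per-layer pruning configurations."""
    rng = random.Random(seed)
    current = [rng.choice(actions) for _ in layer_sizes]
    best, best_r = None, float("-inf")
    for _ in range(episodes):
        cand = list(current)
        # Explore: perturb one layer's pruning ratio.
        if rng.random() < eps or best is None:
            i = rng.randrange(len(cand))
            cand[i] = rng.choice(actions)
        r = reward(cand)
        if r > best_r:
            best, best_r = cand, r
            current = cand  # exploit the best configuration found so far
    return best, best_r
```

A real system would evaluate the reward by actually compressing and validating the model on held-out data rather than using a closed-form proxy.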
2 code implementations • 1 Nov 2021 • Yongrui Chen, Huiying Li, Guilin Qi, Tianxing Wu, Tenggou Wang
The high-level decoding generates an AQG as a constraint to prune the search space and reduce the locally ambiguous query graph.
1 code implementation • 12 Sep 2021 • Yongrui Chen, Xinnan Guo, Chaojie Wang, Jian Qiu, Guilin Qi, Meng Wang, Huiying Li
Our approach remains competitive even when compared with larger pre-trained models and tabular-specific pre-trained models.
1 code implementation • 8 Sep 2021 • Yongrui Chen, Huiying Li, Yuncheng Hua, Guilin Qi
However, this candidate generation strategy ignores the structure of queries, resulting in a considerable number of noisy queries.
no code implementations • 29 Aug 2021 • Zhiqiang Cao, Zhijun Li, Pan Heng, Yongrui Chen, Daqi Xie, Jie Liu
To address this challenge, we propose a small-big model framework that deploys a big model in the cloud and a small model on the edge devices.
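The small-big framework above suggests a cascade pattern: the edge's small model answers when it is confident, and otherwise offloads to the cloud's big model. The sketch below illustrates that pattern only; the confidence threshold and model interfaces are assumptions, not the paper's design.

```python
def cascade_inference(x, small_model, big_model, threshold=0.8):
    """Hypothetical edge-cloud cascade: answer on the edge when the small
    model is confident, otherwise offload the input to the cloud."""
    label, conf = small_model(x)
    if conf >= threshold:
        return label, "edge"
    label, _ = big_model(x)  # in a real system this is a network call
    return label, "cloud"
```

The threshold controls the offloading rate, and hence the trade-off between edge latency and cloud-level accuracy.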