no code implementations • Findings (EMNLP) 2021 • Yu Feng, Jing Zhang, Gaole He, Wayne Xin Zhao, Lemao Liu, Quan Liu, Cuiping Li, Hong Chen
Knowledge Base Question Answering (KBQA) is the task of answering natural language questions posed over knowledge bases (KBs).
no code implementations • Findings (EMNLP) 2021 • Jingwen Xu, Jing Zhang, Xirui Ke, Yuxiao Dong, Hong Chen, Cuiping Li, Yongbin Liu
Its general process is to first encode the implicit relation of an entity pair and then match the relation of a query entity pair with the relations of the reference entity pairs.
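The encode-then-match process described above can be sketched as follows. This is a toy illustration only: the embedding-difference heuristic for representing an entity pair's implicit relation, and all function names, are assumptions for exposition, not the paper's actual model.

```python
from math import sqrt

def pair_relation(h, t):
    # One common heuristic: represent the implicit relation of an entity
    # pair as the element-wise difference of tail and head embeddings.
    return [ti - hi for hi, ti in zip(h, t)]

def cosine(u, v):
    # Cosine similarity between two relation vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match(query_pair, reference_pairs):
    # Score the query pair's relation against each reference pair's
    # relation and return the index of the best-matching reference.
    q = pair_relation(*query_pair)
    scores = [cosine(q, pair_relation(h, t)) for h, t in reference_pairs]
    return max(range(len(scores)), key=scores.__getitem__)
```

In practice the relation encoder is learned rather than a fixed difference, but the retrieval step has this overall shape.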
no code implementations • 10 Mar 2025 • Xinxin Zhao, Haoyang Li, Jing Zhang, Xinmei Huang, Tieying Zhang, Jianjun Chen, Rui Shi, Cuiping Li, Hong Chen
Index recommendation is essential for improving query performance in database management systems (DBMSs) through creating an optimal set of indexes under specific constraints.
1 code implementation • 4 Mar 2025 • Haoyang Li, Shang Wu, Xiaokang Zhang, Xinmei Huang, Jing Zhang, Fuxin Jiang, Shuai Wang, Tieying Zhang, Jianjun Chen, Rui Shi, Hong Chen, Cuiping Li
Text-to-SQL, the task of translating natural language questions into SQL queries, plays a crucial role in enabling non-experts to interact with databases.
no code implementations • 15 Jan 2025 • Yuxuan Hu, Jing Zhang, Xiaodong Chen, Zhe Zhao, Cuiping Li, Hong Chen
Existing low-rank adaptation (LoRA) methods face challenges on sparse large language models (LLMs) due to the inability to maintain sparsity.
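The sparsity problem can be made concrete with a small sketch: merging a dense LoRA delta B·A into a pruned weight matrix would fill in the zeroed entries, so one simple remedy is to mask the delta with the weight's existing sparsity pattern before merging. This masking rule and the function name are illustrative assumptions, not the paper's method.

```python
def merge_lora_sparse(w, a, b, mask):
    # w: sparse weight matrix (rows x cols), a: LoRA A (rank x cols),
    # b: LoRA B (rows x rank), mask[i][j] = 1 where w may be nonzero.
    rows, cols, rank = len(w), len(w[0]), len(a)
    # Dense low-rank update delta = B @ A.
    delta = [[sum(b[i][r] * a[r][j] for r in range(rank)) for j in range(cols)]
             for i in range(rows)]
    # Merge, zeroing any entry outside the sparsity pattern so the
    # merged model keeps exactly the same zero structure as w.
    return [[w[i][j] + delta[i][j] if mask[i][j] else 0.0
             for j in range(cols)] for i in range(rows)]
```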
1 code implementation • 18 Dec 2024 • Xiwen Geng, Suyun Zhao, Yixin Yu, Borui Peng, Pan Du, Hong Chen, Cuiping Li, Mengdie Wang
In this paper, we propose a personalized clustering method that explicitly performs targeted representation learning by interacting with users via a modicum of task information (e.g., $\textit{must-link}$ or $\textit{cannot-link}$ pairs) to guide the clustering direction.
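The role of must-link and cannot-link pairs can be illustrated with the standard constraint check used in constrained clustering (as in COP-KMeans-style methods); this is a generic sketch, not the paper's representation-learning algorithm.

```python
def violates(point, cluster, assignments, must_link, cannot_link):
    # Return True if putting `point` into `cluster` would break a
    # pairwise constraint, given current partial `assignments`
    # (a dict mapping point index -> cluster id).
    for a, b in must_link:
        other = b if a == point else a if b == point else None
        # Must-link partner already placed in a different cluster.
        if other is not None and assignments.get(other) is not None \
                and assignments[other] != cluster:
            return True
    for a, b in cannot_link:
        other = b if a == point else a if b == point else None
        # Cannot-link partner already placed in this very cluster.
        if other is not None and assignments.get(other) == cluster:
            return True
    return False
```

A constrained assignment step would call this check before accepting the nearest centroid, falling back to the next-nearest feasible cluster.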
1 code implementation • 16 Nov 2024 • Yuxuan Hu, Ke Wang, Xiaokang Zhang, Fanjin Zhang, Cuiping Li, Hong Chen, Jing Zhang
Speculative decoding (SD) has been demonstrated as an effective technique for lossless LLM inference acceleration.
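The lossless draft-then-verify loop behind speculative decoding can be sketched as below. For clarity the target verifies tokens one by one (a real implementation scores all k draft positions in a single forward pass), and `target_next` / `draft_next` stand in for actual model calls; the greedy setting is assumed.

```python
def speculative_decode(target_next, draft_next, prompt, k=4, steps=8):
    """Greedy speculative decoding sketch. A cheap draft model proposes
    k tokens; the target model verifies them. Accepted tokens are kept;
    the first mismatch is replaced by the target's own prediction, so the
    output is identical to target-only greedy decoding (lossless)."""
    out = list(prompt)
    end = len(prompt) + steps
    while len(out) < end:
        # 1. Draft model proposes k tokens autoregressively.
        ctx, draft = list(out), []
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2. Target model verifies each proposed token in turn.
        for t in draft:
            expected = target_next(out)
            if t == expected:
                out.append(t)          # accepted: draft matches target
            else:
                out.append(expected)   # rejected: use the target's token
                break
            if len(out) == end:
                break
    return out[len(prompt):]
```

The speed-up comes from the draft model being much cheaper: when it agrees with the target, several tokens are committed per (batched) target evaluation.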
no code implementations • 15 Nov 2024 • Xiaodong Chen, Yuxuan Hu, Xiaokang Zhang, Yanling Wang, Cuiping Li, Hong Chen, Jing Zhang
Pruning has become a widely adopted technique for reducing the hardware requirements of large language models (LLMs).
no code implementations • 2 Oct 2024 • Shasha Guo, Lizi Liao, Jing Zhang, Cuiping Li, Hong Chen
Conversational Question Generation (CQG) enhances the interactivity of conversational question-answering systems in fields such as education, customer service, and entertainment.
no code implementations • 5 Aug 2024 • Yiyan Li, Haoyang Li, Zhao Pu, Jing Zhang, Xinyi Zhang, Tao Ji, Luming Sun, Cuiping Li, Hong Chen
Knob tuning plays a crucial role in optimizing databases by adjusting knobs to enhance database performance.
no code implementations • 20 Jun 2024 • Lingxi Zhang, Jing Zhang, Yanling Wang, Cuiping Li, Hong Chen
In order to improve the generalization capabilities of KBQA models, extensive research has embraced a retrieve-then-reason framework to retrieve relevant evidence for logical expression generation.
no code implementations • 13 Jun 2024 • Jiayang Meng, Tao Huang, Hong Chen, Cuiping Li
Gradient leakage has been identified as a potential source of privacy breaches in modern image processing systems, where the adversary can completely reconstruct the training images from leaked gradients.
1 code implementation • 17 Apr 2024 • Xinmei Huang, Haoyang Li, Jing Zhang, Xinxin Zhao, Zhiming Yao, Yiyan Li, Tieying Zhang, Jianjun Chen, Hong Chen, Cuiping Li
This tuner can directly recommend promising configurations for any new workload, eliminating the need for the extensive workload replays required by previous approaches.
1 code implementation • 2 Apr 2024 • Shasha Guo, Lizi Liao, Jing Zhang, Yanling Wang, Cuiping Li, Hong Chen
Knowledge base question generation (KBQG) aims to generate natural language questions from a set of triplet facts extracted from a knowledge base (KB).
1 code implementation • 28 Mar 2024 • Xiaodong Chen, Yuxuan Hu, Jing Zhang, Yanling Wang, Cuiping Li, Hong Chen
This paper introduces LLM-Streamline, a pioneering work on layer pruning for large language models (LLMs).
1 code implementation • 18 Mar 2024 • Yanling Wang, Jing Zhang, Lingxi Zhang, Lixin Liu, Yuxiao Dong, Cuiping Li, Hong Chen, Hongzhi Yin
Open-world semi-supervised learning (Open-world SSL) for node classification, which classifies unlabeled nodes into seen classes or multiple novel classes, is a practical but under-explored problem in the graph community.
no code implementations • 28 Feb 2024 • Shasha Guo, Lizi Liao, Cuiping Li, Tat-Seng Chua
In this survey, we present a detailed examination of the advancements in Neural Question Generation (NQG), a field leveraging neural network techniques to generate relevant questions from diverse inputs like knowledge bases, texts, and images.
1 code implementation • 26 Feb 2024 • Haoyang Li, Jing Zhang, Hanbing Liu, Ju Fan, Xiaokang Zhang, Jun Zhu, Renjie Wei, Hongyan Pan, Cuiping Li, Hong Chen
To address the limitations, we introduce CodeS, a series of pre-trained language models with parameters ranging from 1B to 15B, specifically designed for the text-to-SQL task.
no code implementations • 11 Dec 2023 • Kai Zhong, Luming Sun, Tao Ji, Cuiping Li, Hong Chen
They either learn to construct plans from scratch in a bottom-up manner or steer the plan generation behavior of a traditional optimizer using hints.
no code implementations • 23 Sep 2023 • Shasha Guo, Jing Zhang, Xirui Ke, Cuiping Li, Hong Chen
The above insights make diversifying question generation an intriguing task, where the first challenge is to design evaluation metrics for diversity.
1 code implementation • 31 Aug 2023 • Yuxuan Hu, Jing Zhang, Zhe Zhao, Chen Zhao, Xiaodong Chen, Cuiping Li, Hong Chen
Structured pruning is a widely used technique for reducing the size of pre-trained language models (PLMs), but current methods often overlook the potential of compressing the hidden dimension (d) in PLMs, a dimension critical to model size and efficiency.
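The key constraint when compressing the hidden dimension d is that it is shared across all layers, so channels must be removed consistently everywhere. A minimal sketch of importance-based channel selection, assuming matrices whose columns index d (the scoring rule and names are illustrative, not the paper's method):

```python
def prune_hidden_dim(weights, importance, keep):
    # weights: dict of layer name -> matrix (list of rows), where every
    # matrix's columns index the shared hidden dimension d.
    # importance: one score per hidden channel; keep: target width.
    kept = sorted(range(len(importance)), key=lambda i: -importance[i])[:keep]
    kept.sort()  # preserve the original channel ordering
    # Drop the same low-importance channels from every matrix at once,
    # so all layers remain dimensionally consistent after pruning.
    pruned = {name: [[row[i] for i in kept] for row in mat]
              for name, mat in weights.items()}
    return pruned, kept
```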
1 code implementation • ICCV 2023 • Pan Du, Suyun Zhao, Zisen Sheng, Cuiping Li, Hong Chen
Specifically, WAD captures adaptive weights and high-quality pseudo labels for target instances by exploring point mutual information (PMI) in representation space, so as to maximize the role of unlabeled data and filter out unknown categories.
no code implementations • 26 Jun 2023 • Lingxi Zhang, Jing Zhang, Yanling Wang, Shulin Cao, Xinmei Huang, Cuiping Li, Hong Chen, Juanzi Li
The generalization problem on KBQA has drawn considerable attention.
1 code implementation • 12 Feb 2023 • Haoyang Li, Jing Zhang, Cuiping Li, Hong Chen
Due to the structural property of the SQL queries, the seq2seq model takes the responsibility of parsing both the schema items (i.e., tables and columns) and the skeleton (i.e., SQL keywords).
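The skeleton/schema-item split can be made concrete: mask out schema items and literals while keeping SQL keywords and operators. This is a rough tokenizer-based sketch in the spirit of that decomposition; the keyword list and normalization are assumptions, not the paper's exact preprocessing.

```python
import re

SQL_KEYWORDS = {
    "select", "from", "where", "group", "by", "order", "having", "limit",
    "and", "or", "not", "in", "like", "between", "join", "on", "as",
    "count", "sum", "avg", "min", "max", "distinct", "asc", "desc",
}

def skeleton(sql):
    # Tokenize into identifiers, operator runs, and numbers.
    parts = re.findall(r"[A-Za-z_][A-Za-z_0-9.]*|[(),*<>=!]+|\d+", sql)
    out = []
    for tok in parts:
        if tok.lower() in SQL_KEYWORDS:
            out.append(tok.lower())          # keyword: part of the skeleton
        elif not tok[0].isalpha():
            out.append("_" if tok[0].isdigit() else tok)  # mask literals
        else:
            out.append("_")                  # mask table/column names
    return " ".join(out)
```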
Ranked #1 on Semantic Parsing on Spider
no code implementations • CVPR 2023 • Jinlong Kang, Liyuan Shang, Suyun Zhao, Hong Chen, Cuiping Li, Zeyu Gan
In many real scenarios, data are often divided into a handful of artificial super categories based on expert knowledge rather than on the representations of the images themselves.
no code implementations • 1 Nov 2022 • Mengdie Wang, Liyuan Shang, Suyun Zhao, Yiming Wang, Hong Chen, Cuiping Li, XiZhao Wang
Accordingly, the query results, guided by oracles with distinctive demands, may drive the OCC's clustering results toward a desired orientation.
1 code implementation • ACL 2022 • Jing Zhang, Xiaokang Zhang, Jifan Yu, Jian Tang, Jie Tang, Cuiping Li, Hong Chen
Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning.
1 code implementation • 7 Jan 2022 • Suyun Zhao, Zhigang Dai, XiZhao Wang, Peng Ni, Hengheng Luo, Hong Chen, Cuiping Li
First, a rule induction method based on consistence degree, called Consistence-based Value Reduction (CVR), is proposed and used as the basis for acceleration.
1 code implementation • 12 Dec 2021 • Yu Feng, Jing Zhang, Xiaokang Zhang, Lemao Liu, Cuiping Li, Hong Chen
Embedding-based methods are popular for Knowledge Base Question Answering (KBQA), but few current models have numerical reasoning skills and thus struggle to answer ordinal constrained questions.
no code implementations • 4 Dec 2021 • Bowen Hao, Hongzhi Yin, Jing Zhang, Cuiping Li, Hong Chen
In terms of the pretext task, in addition to considering the intra-correlations of users and items via the embedding reconstruction task, we add an embedding contrastive learning task to capture the inter-correlations of users and items.
no code implementations • 4 Dec 2021 • Bowen Hao, Hongzhi Yin, Cuiping Li, Hong Chen
As each occasional group has extremely sparse interactions with items, traditional group recommendation methods cannot learn high-quality group representations.
1 code implementation • ICCV 2021 • Pan Du, Suyun Zhao, Hui Chen, Shuwen Chai, Hong Chen, Cuiping Li
However, its performance deteriorates under class distribution mismatch, wherein the unlabeled data contain many samples out of the class distribution of labeled data.
1 code implementation • 28 Dec 2020 • Bowen Hao, Jing Zhang, Cuiping Li, Hong Chen, Hongzhi Yin
On the one hand, the framework enables training multiple supervised ranking models upon the pseudo labels produced by multiple unsupervised ranking models.
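The first half of that framework — turning the outputs of several unsupervised ranking models into pseudo labels for supervised training — can be sketched as below. The aggregation rule (average rank, top/bottom cutoffs) and names are illustrative assumptions, not the paper's exact scheme.

```python
def pseudo_labels(items, scorers, top=2, bottom=2):
    # scorers: unsupervised ranking models, each a callable item -> score.
    # Items they jointly rank highest become positive pseudo labels (1);
    # the lowest become negatives (0), ready for supervised training.
    def avg_rank(x):
        # Average rank position across all scorers (lower = better).
        ranks = []
        for score in scorers:
            order = sorted(items, key=score, reverse=True)
            ranks.append(order.index(x))
        return sum(ranks) / len(ranks)

    ordered = sorted(items, key=avg_rank)
    pos, neg = ordered[:top], ordered[-bottom:]
    return [(x, 1) for x in pos] + [(x, 0) for x in neg]
```

A supervised ranker trained on these pairs can then, as the snippet notes, feed back into the ensemble.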
2 code implementations • 14 Dec 2020 • Bo Chen, Jing Zhang, Xiaokang Zhang, Xiaobin Tang, Lingfan Cai, Hong Chen, Cuiping Li, Peng Zhang, Jie Tang
In this paper, we propose CODE, which first pre-trains an expert linking model via contrastive learning on AMiner so that it captures the representation and matching patterns of experts without supervised signals; the model is then fine-tuned between AMiner and external sources in an adversarial manner to enhance its transferability.
1 code implementation • 13 Dec 2020 • Bowen Hao, Jing Zhang, Hongzhi Yin, Cuiping Li, Hong Chen
The cold-start problem is a fundamental challenge for recommendation tasks.