no code implementations • Findings (EMNLP) 2021 • Jingwen Xu, Jing Zhang, Xirui Ke, Yuxiao Dong, Hong Chen, Cuiping Li, Yongbin Liu
Its general process is to first encode the implicit relation of an entity pair and then match the relation of a query entity pair with the relations of the reference entity pairs.
no code implementations • Findings (EMNLP) 2021 • Yu Feng, Jing Zhang, Gaole He, Wayne Xin Zhao, Lemao Liu, Quan Liu, Cuiping Li, Hong Chen
Knowledge Base Question Answering (KBQA) aims to answer natural language questions posed over knowledge bases (KBs).
1 code implementation • 17 Apr 2024 • Xinmei Huang, Haoyang Li, Jing Zhang, Xinxin Zhao, Zhiming Yao, Yiyan Li, Zhuohao Yu, Tieying Zhang, Hong Chen, Cuiping Li
Database knob tuning is a critical challenge in the database community, aiming to optimize knob values to enhance database performance for specific workloads.
1 code implementation • 2 Apr 2024 • Shasha Guo, Lizi Liao, Jing Zhang, Yanling Wang, Cuiping Li, Hong Chen
Knowledge base question generation (KBQG) aims to generate natural language questions from a set of triplet facts extracted from a KB.
no code implementations • 28 Mar 2024 • Xiaodong Chen, Yuxuan Hu, Jing Zhang, Yanling Wang, Cuiping Li, Hong Chen
This paper introduces LLM-Streamline, a novel layer pruning approach for large language models.
1 code implementation • 18 Mar 2024 • Yanling Wang, Jing Zhang, Lingxi Zhang, Lixin Liu, Yuxiao Dong, Cuiping Li, Hong Chen, Hongzhi Yin
Open-world semi-supervised learning (Open-world SSL) for node classification, which classifies unlabeled nodes into seen classes or multiple novel classes, is a practical but under-explored problem in the graph community.
no code implementations • 28 Feb 2024 • Shasha Guo, Lizi Liao, Cuiping Li, Tat-Seng Chua
In this survey, we present a detailed examination of the advancements in Neural Question Generation (NQG), a field leveraging neural network techniques to generate relevant questions from diverse inputs like knowledge bases, texts, and images.
1 code implementation • 26 Feb 2024 • Haoyang Li, Jing Zhang, Hanbing Liu, Ju Fan, Xiaokang Zhang, Jun Zhu, Renjie Wei, Hongyan Pan, Cuiping Li, Hong Chen
To address the limitations, we introduce CodeS, a series of pre-trained language models with parameters ranging from 1B to 15B, specifically designed for the text-to-SQL task.
no code implementations • 11 Dec 2023 • Kai Zhong, Luming Sun, Tao Ji, Cuiping Li, Hong Chen
They either learn to construct plans from scratch in a bottom-up manner or guide the plan generation behavior of a traditional optimizer using hints.
no code implementations • 23 Sep 2023 • Shasha Guo, Jing Zhang, Xirui Ke, Cuiping Li, Hong Chen
The above insights make diversifying question generation an intriguing task, where the first challenge is evaluation metrics for diversity.
no code implementations • 31 Aug 2023 • Yuxuan Hu, Jing Zhang, Zhe Zhao, Chen Zhao, Xiaodong Chen, Cuiping Li, Hong Chen
Structured pruning is a widely used technique for reducing the size of pre-trained language models (PLMs), but current methods often overlook the potential of compressing the hidden dimension (d) in PLMs, a dimension critical to model size and efficiency.
1 code implementation • ICCV 2023 • Pan Du, Suyun Zhao, Zisen Sheng, Cuiping Li, Hong Chen
Specifically, WAD captures adaptive weights and high-quality pseudo labels to target instances by exploring point mutual information (PMI) in representation space to maximize the role of unlabeled data and filter unknown categories.
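The entry above mentions scoring instances by point mutual information (PMI) between representations. As a rough illustration (not WAD's actual implementation), PMI between an instance and a class can be estimated from softmax similarities to class prototypes; `pmi_weights`, the temperature, and the uniform-instance-prior assumption are all hypothetical choices here:

```python
import numpy as np

def pmi_weights(unlabeled, prototypes, temperature=0.1):
    """Toy PMI-style weighting between instances and class prototypes.

    PMI(x, c) = log p(x, c) / (p(x) p(c)); the joint is estimated from
    softmax-normalized cosine similarities in representation space,
    assuming a uniform prior over instances (illustrative only).
    """
    # L2-normalize so the dot product is cosine similarity
    u = unlabeled / np.linalg.norm(unlabeled, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = u @ p.T / temperature
    # p(c | x) via a numerically stable softmax over classes
    joint = np.exp(sims - sims.max(axis=1, keepdims=True))
    joint /= joint.sum(axis=1, keepdims=True)
    joint /= len(unlabeled)                 # p(x, c) with uniform p(x)
    p_x = joint.sum(axis=1, keepdims=True)  # marginal over classes
    p_c = joint.sum(axis=0, keepdims=True)  # marginal over instances
    return np.log(joint / (p_x * p_c) + 1e-12)

X = np.random.RandomState(0).randn(8, 4)   # 8 unlabeled instances
C = np.random.RandomState(1).randn(3, 4)   # 3 class prototypes
w = pmi_weights(X, C)
print(w.shape)  # (8, 3): one PMI score per instance-class pair
```

High PMI scores would mark instances strongly associated with a seen class, while uniformly low scores could flag candidates from unknown categories.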
no code implementations • 26 Jun 2023 • Lingxi Zhang, Jing Zhang, Yanling Wang, Shulin Cao, Xinmei Huang, Cuiping Li, Hong Chen, Juanzi Li
The generalization problem on KBQA has drawn considerable attention.
1 code implementation • 12 Feb 2023 • Haoyang Li, Jing Zhang, Cuiping Li, Hong Chen
Due to the structural property of SQL queries, the seq2seq model is responsible for parsing both the schema items (i.e., tables and columns) and the skeleton (i.e., SQL keywords).
Ranked #1 on Semantic Parsing on Spider
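To make the schema-item/skeleton distinction above concrete: the skeleton of a SQL query is what remains after schema items and literals are abstracted away. A minimal sketch (the `skeleton` helper and keyword list are illustrative, not the entry's actual code):

```python
import re

# A small, non-exhaustive set of SQL keywords kept in the skeleton
SQL_KEYWORDS = {
    "select", "from", "where", "group", "by", "order", "having",
    "and", "or", "not", "in", "like", "limit", "join", "on", "as",
    "distinct", "count", "avg", "min", "max", "sum", "desc", "asc",
}

def skeleton(sql: str) -> str:
    """Replace schema items and literals with '_' to expose the skeleton."""
    tokens = re.findall(r"\w+|\S", sql)
    out, prev_blank = [], False
    for tok in tokens:
        if tok.lower() in SQL_KEYWORDS or not re.match(r"\w", tok):
            out.append(tok)          # keep keywords and punctuation/operators
            prev_blank = False
        elif not prev_blank:         # collapse runs of schema tokens/values
            out.append("_")
            prev_blank = True
    return " ".join(out)

print(skeleton("SELECT name FROM singer WHERE age > 20"))
# SELECT _ FROM _ WHERE _ > _
```

Separating the two sub-tasks this way lets a parser first decide the query's structure and then fill in the schema-dependent slots.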
no code implementations • CVPR 2023 • Jinlong Kang, Liyuan Shang, Suyun Zhao, Hong Chen, Cuiping Li, Zeyu Gan
In many real scenarios, data are often divided into a handful of artificial super categories according to expert knowledge rather than the representations of images.
no code implementations • 1 Nov 2022 • Mengdie Wang, Liyuan Shang, Suyun Zhao, Yiming Wang, Hong Chen, Cuiping Li, XiZhao Wang
Accordingly, the query results, guided by oracles with distinctive demands, may drive the OCC's clustering results toward a desired orientation.
1 code implementation • ACL 2022 • Jing Zhang, Xiaokang Zhang, Jifan Yu, Jian Tang, Jie Tang, Cuiping Li, Hong Chen
Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning.
1 code implementation • 7 Jan 2022 • Suyun Zhao, Zhigang Dai, XiZhao Wang, Peng Ni, Hengheng Luo, Hong Chen, Cuiping Li
First, a rule induction method based on consistence degree, called Consistence-based Value Reduction (CVR), is proposed and used as the basis for acceleration.
1 code implementation • 12 Dec 2021 • Yu Feng, Jing Zhang, Xiaokang Zhang, Lemao Liu, Cuiping Li, Hong Chen
Embedding-based methods are popular for Knowledge Base Question Answering (KBQA), but few current models have numerical reasoning skills and thus struggle to answer ordinal constrained questions.
no code implementations • 4 Dec 2021 • Bowen Hao, Hongzhi Yin, Cuiping Li, Hong Chen
As each occasional group has extremely sparse interactions with items, traditional group recommendation methods cannot learn high-quality group representations.
no code implementations • 4 Dec 2021 • Bowen Hao, Hongzhi Yin, Jing Zhang, Cuiping Li, Hong Chen
In terms of the pretext task, in addition to considering the intra-correlations of users and items via the embedding reconstruction task, we add an embedding contrastive learning task to capture the inter-correlations of users and items.
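A contrastive pretext task of the kind mentioned above is commonly instantiated as an InfoNCE-style loss, where each anchor embedding is pulled toward its matching positive and pushed away from the other positives in the batch. A minimal generic sketch, not the entry's actual objective (the function name and temperature are assumptions):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.2):
    """Minimal InfoNCE contrastive loss.

    Row i of `positives` is the positive for row i of `anchors`;
    all other rows in the batch serve as in-batch negatives.
    """
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                  # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # diagonal = positive pairs

rng = np.random.RandomState(0)
u = rng.randn(16, 8)                    # e.g. user embeddings
v = u + 0.01 * rng.randn(16, 8)         # slightly perturbed positives
loss = info_nce(u, v)
print(float(loss))
```

With near-identical anchor/positive pairs the loss is small but strictly positive; it grows as positives drift away from their anchors relative to the negatives.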
no code implementations • ICCV 2021 • Pan Du, Suyun Zhao, Hui Chen, Shuwen Chai, Hong Chen, Cuiping Li
However, its performance deteriorates under class distribution mismatch, wherein the unlabeled data contain many samples out of the class distribution of labeled data.
1 code implementation • 28 Dec 2020 • Bowen Hao, Jing Zhang, Cuiping Li, Hong Chen, Hongzhi Yin
On the one hand, the framework enables training multiple supervised ranking models upon the pseudo labels produced by multiple unsupervised ranking models.
2 code implementations • 14 Dec 2020 • Bo Chen, Jing Zhang, Xiaokang Zhang, Xiaobin Tang, Lingfan Cai, Hong Chen, Cuiping Li, Peng Zhang, Jie Tang
In this paper, we propose CODE, which first pre-trains an expert linking model by contrastive learning on AMiner so that it can capture the representation and matching patterns of experts without supervised signals, and then fine-tunes the model between AMiner and external sources in an adversarial manner to enhance its transferability.
1 code implementation • 13 Dec 2020 • Bowen Hao, Jing Zhang, Hongzhi Yin, Cuiping Li, Hong Chen
The cold-start problem is a fundamental challenge for recommendation tasks.