no code implementations • Findings (NAACL) 2022 • Qiqi Wang, Kaiqi Zhao, Robert Amor, Benjamin Liu, Ruofan Wang
We propose a Document-to-Graph Classifier (D2GCLF), which extracts facts as relations between key participants in the law case and represents a legal document with four relation graphs.
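As a rough illustration of the document-to-graph idea, the sketch below builds a relation graph from (participant, relation, participant) triples; the triples and the `build_relation_graph` helper are hypothetical stand-ins for D2GCLF's actual fact-extraction pipeline.

```python
# Minimal sketch: represent a document as a graph whose nodes are key
# participants and whose edges carry extracted relations. The triples are
# hypothetical; D2GCLF derives them from the case text itself.
import networkx as nx

def build_relation_graph(triples):
    """triples: iterable of (participant, relation, participant)."""
    g = nx.MultiDiGraph()
    for head, relation, tail in triples:
        g.add_edge(head, tail, relation=relation)
    return g

triples = [
    ("plaintiff", "sued", "defendant"),
    ("defendant", "employed", "witness"),
]
graph = build_relation_graph(triples)
print(graph.edges(data=True))
```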
1 code implementation • 14 Mar 2023 • Kaiqi Zhao, Yitao Chen, Ming Zhao
Knowledge Transfer (KT) achieves competitive performance and is widely used for image classification tasks in model compression and transfer learning.
1 code implementation • 14 Mar 2023 • Kaiqi Zhao, Animesh Jain, Ming Zhao
Then, it proposes adaptive pruning policies for automatically meeting the pruning objectives of accuracy-critical, memory-constrained, and latency-sensitive tasks.
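A minimal sketch of one such objective, assuming a simple parameter-count budget as the memory constraint; the loop below uses PyTorch's stock L1 magnitude pruning and stands in for the paper's adaptive policies, which also target accuracy and latency.

```python
# Hedged sketch: increase sparsity with L1 magnitude pruning until a
# parameter budget is met. This simple loop is a stand-in for the paper's
# adaptive pruning policies, not a reproduction of them.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_to_budget(model, max_params, step=0.1):
    pruned = 0.0
    while pruned < 0.9:                      # safety cap on total iterations
        pruned += step
        for module in model.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                # each call prunes `step` of the *remaining* unpruned weights
                prune.l1_unstructured(module, name="weight", amount=step)
        nonzero = sum(int(torch.count_nonzero(m.weight))
                      for m in model.modules()
                      if isinstance(m, (nn.Linear, nn.Conv2d)))
        if nonzero <= max_params:
            break
    return model

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
prune_to_budget(model, max_params=1500)
```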
1 code implementation • 3 Mar 2023 • Song Yang, Jiamou Liu, Kaiqi Zhao
Instead, we propose a user-agnostic global trajectory flow map and a novel Graph Enhanced Transformer model (GETNext) to better exploit the extensive collaborative signals for more accurate next POI prediction, while alleviating the cold start problem.
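A minimal sketch of the user-agnostic flow map: aggregate POI-to-POI transition counts over all users' check-in sequences into one directed, weighted structure (the check-in data and function name here are illustrative).

```python
# Minimal sketch of a user-agnostic trajectory flow map: edge weights count
# POI-to-POI transitions aggregated across ALL users' check-in sequences.
# GETNext feeds a graph like this into a graph-encoder; the data is made up.
from collections import defaultdict

def build_flow_map(checkin_sequences):
    """checkin_sequences: list of per-user POI-id sequences, in time order."""
    flow = defaultdict(int)
    for seq in checkin_sequences:
        for src, dst in zip(seq, seq[1:]):
            flow[(src, dst)] += 1
    return flow

sequences = [["cafe", "gym", "office"], ["cafe", "office"], ["gym", "office"]]
print(build_flow_map(sequences))
```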
no code implementations • 28 Feb 2023 • Yufan Sheng, Xin Cao, Yixiang Fang, Kaiqi Zhao, Jianzhong Qi, Gao Cong, Wenjie Zhang
In this paper, we propose WISK, a learned index for spatial keyword queries, which self-adapts for optimizing querying costs given a query workload.
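A toy sketch of the general learned-index idea that WISK builds on: fit a model from key to position over sorted data and correct within the model's worst-case error. WISK itself learns workload-aware spatial-keyword partitions, which this one-dimensional example does not attempt to reproduce.

```python
# Toy learned index: a linear model predicts the position of a key in a
# sorted array; a small bounded search corrects the prediction. Conceptual
# illustration only -- WISK optimizes partitions for a query workload.
import bisect
import numpy as np

keys = np.sort(np.random.default_rng(0).uniform(0, 1000, 10_000))
positions = np.arange(len(keys))
slope, intercept = np.polyfit(keys, positions, 1)          # the "model"
err = int(np.max(np.abs(slope * keys + intercept - positions))) + 1

def lookup(key):
    guess = int(slope * key + intercept)
    lo, hi = max(0, guess - err), min(len(keys), guess + err + 1)
    i = lo + bisect.bisect_left(keys[lo:hi].tolist(), key)
    return i if i < len(keys) and keys[i] == key else None

print(lookup(keys[1234]))  # -> 1234
```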
1 code implementation • 12 Feb 2023 • Yifei Wang, Yupan Wang, Zeyu Zhang, Song Yang, Kaiqi Zhao, Jiamou Liu
To this end, we propose USER, an unsupervised robust version of graph neural networks that is based on structural entropy.
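For context, the simplest member of the structural-entropy family has a closed form: the one-dimensional structural entropy of a graph with node degrees d_i and m edges is H¹(G) = -Σ_i (d_i/2m) log₂(d_i/2m). USER optimizes higher-dimensional structural entropy over an encoding tree; the sketch below computes only this degree-based base case.

```python
# One-dimensional structural entropy: H1(G) = -sum_i (d_i/2m) log2(d_i/2m).
# This is the base case of the structural-entropy family USER builds on,
# not USER's own (encoding-tree-based) objective.
import math
import networkx as nx

def structural_entropy_1d(g: nx.Graph) -> float:
    two_m = 2 * g.number_of_edges()
    return -sum((d / two_m) * math.log2(d / two_m)
                for _, d in g.degree() if d > 0)

print(structural_entropy_1d(nx.karate_club_graph()))
```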
no code implementations • 22 Jan 2022 • Kaiqi Zhao, Animesh Jain, Ming Zhao
To solve this problem, we propose two activation-based pruning methods, Iterative Activation-based Pruning (IAP) and Adaptive Iterative Activation-based Pruning (AIAP).
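A hedged sketch of the activation-based scoring these methods share: record per-channel mean activation magnitudes with a forward hook and mask the weakest channels. The iterative schedule of IAP and the adaptive schedule of AIAP are omitted, and the calibration data here is random.

```python
# Activation-based channel scoring: rank conv channels by mean absolute
# activation on sample inputs, then zero out the weakest ones. IAP/AIAP
# additionally iterate this prune-and-finetune cycle (adaptively for AIAP).
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, 3, padding=1)
scores = torch.zeros(8)

def hook(_module, _inp, out):
    scores.add_(out.abs().mean(dim=(0, 2, 3)))  # per-channel mean activation

handle = conv.register_forward_hook(hook)
for _ in range(4):                       # stand-in for real calibration data
    conv(torch.randn(2, 3, 16, 16))
handle.remove()

keep = scores.argsort(descending=True)[:6]       # drop the 2 weakest channels
mask = torch.zeros(8)
mask[keep] = 1.0
with torch.no_grad():
    conv.weight.mul_(mask.view(-1, 1, 1, 1))
    conv.bias.mul_(mask)
```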
no code implementations • 22 Jan 2022 • Kaiqi Zhao, Yitao Chen, Ming Zhao
The results show that 1) our model compression method can remove up to 99.36% of the parameters of WRN-28-10 while preserving a Top-1 accuracy of over 90% on CIFAR-10; 2) our knowledge transfer method enables the compressed models to achieve more than 90% accuracy on CIFAR-10 and retain good accuracy on old categories; 3) it allows the compressed models to converge in real time (three to six minutes) on the edge for incremental learning tasks; and 4) it enables the model to classify unseen categories of data (78.92% Top-1 accuracy) that it was never trained on.
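A sketch of the standard knowledge-distillation loss commonly used for this kind of knowledge transfer: KL divergence between temperature-softened teacher and student logits plus hard-label cross-entropy. Whether the paper uses exactly this loss is an assumption.

```python
# Standard distillation loss (Hinton-style): soften both logit sets with a
# temperature T, match the student to the teacher via KL divergence, and mix
# in the ordinary cross-entropy on ground-truth labels.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # rescale gradients for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = kd_loss(torch.randn(8, 10), torch.randn(8, 10),
               torch.randint(0, 10, (8,)))
```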
1 code implementation • 21 Jan 2022 • Kaiqi Zhao, Animesh Jain, Ming Zhao
Pruning is a promising approach to compress complex deep learning models in order to deploy them on resource-constrained edge devices.
1 code implementation • International Conference on Data Mining (ICDM) 2021 • Song Yang, Jiamou Liu, Kaiqi Zhao
We argue that such correlations are universal and play a pivotal role in traffic flow.
no code implementations • 2 May 2021 • Luke Chang, Katharina Dost, Kaiqi Zhao, Ambra Demontis, Fabio Roli, Gill Dobbie, Jörg Wicker
This problem is mitigated by a technique called the Applicability Domain (AD), which rejects compounds that are unsuitable for the model.
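A minimal sketch of one common AD flavor, assuming a distance-based criterion: a query compound is rejected when its mean distance to its k nearest training compounds exceeds a threshold calibrated on the training set. The paper studies AD as a defense rather than prescribing this particular variant.

```python
# Distance-based Applicability Domain check: calibrate a threshold from each
# training compound's mean distance to its 5 nearest (non-self) neighbors,
# then reject queries whose neighbor distance exceeds it. Descriptors are
# random stand-ins for real molecular features.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 16))            # stand-in descriptor matrix

nn_index = NearestNeighbors(n_neighbors=6).fit(X_train)
train_dist, _ = nn_index.kneighbors(X_train)    # column 0 is self (dist 0)
threshold = np.percentile(train_dist[:, 1:].mean(axis=1), 95)

def in_domain(x):
    dist, _ = nn_index.kneighbors(x.reshape(1, -1), n_neighbors=5)
    return dist[0].mean() <= threshold          # reject if far from training

print(in_domain(X_train[0]), in_domain(rng.normal(10.0, 1.0, size=16)))
```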
no code implementations • 18 May 2020 • Yu Gong, Ziwen Jiang, Yufei Feng, Binbin Hu, Kaiqi Zhao, Qingwen Liu, Wenwu Ou
Recommender systems (RS) have become a crucial module in most web-scale applications.
no code implementations • 2 Mar 2018 • Yu Gong, Kaiqi Zhao, Kenny Q. Zhu
Verbs play an important role in the understanding of natural language text.