Search Results for author: Kaiqi Zhao

Found 19 papers, 8 papers with code

D2GCLF: Document-to-Graph Classifier for Legal Document Classification

no code implementations • Findings (NAACL) 2022 • Qiqi Wang, Kaiqi Zhao, Robert Amor, Benjamin Liu, Ruofan Wang

We propose a Document-to-Graph Classifier (D2GCLF), which extracts facts as relations between key participants in the law case and represents a legal document with four relation graphs.

Classification • Document Classification • +2
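The core data structure here is a relation graph over case participants. A minimal sketch of building one such graph from already-extracted triples; the triples, relation names, and use of networkx are illustrative assumptions, not D2GCLF's implementation:

```python
# Sketch: turn extracted (participant, relation, participant) triples
# from a case document into a relation graph. Triples are illustrative.
import networkx as nx

triples = [
    ("plaintiff", "sues", "defendant"),
    ("defendant", "employs", "witness"),
    ("witness", "testifies_for", "plaintiff"),
]

graph = nx.MultiDiGraph()           # allows parallel relations between a pair
for head, relation, tail in triples:
    graph.add_edge(head, tail, relation=relation)

# A graph classifier would consume features from this structure;
# here we just inspect the edges.
print(list(graph.edges(data=True)))
```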

Self-supervised Learning for Geospatial AI: A Survey

no code implementations • 22 Aug 2024 • Yile Chen, Weiming Huang, Kaiqi Zhao, Yue Jiang, Gao Cong

The proliferation of geospatial data in urban and territorial environments has significantly facilitated the development of geospatial artificial intelligence (GeoAI) across various urban applications.

Self-Supervised Learning • Survey

Self-Supervised Quantization-Aware Knowledge Distillation

1 code implementation • 17 Mar 2024 • Kaiqi Zhao, Ming Zhao

Quantization-aware training (QAT) and Knowledge Distillation (KD) are combined to achieve competitive performance in creating low-bit deep learning models.

Knowledge Distillation • Quantization
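The combination rests on a distillation loss in which the full-precision teacher guides the low-bit student. A minimal PyTorch sketch of such a loss; the temperature and scaling are assumptions, not the paper's exact formulation:

```python
# Sketch of the distillation side of QAT + KD: KL divergence between the
# softened teacher and student outputs. Temperature T is an assumption.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)   # standard rescaling so gradients match hard-label training
```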

CSG: Curriculum Representation Learning for Signed Graph

no code implementations • 17 Oct 2023 • Zeyu Zhang, Jiamou Liu, Kaiqi Zhao, Yifei Wang, Pengqian Han, Xianda Zheng, Qiqi Wang, Zijian Zhang

Signed graphs are valuable for modeling complex relationships with positive and negative connections, and Signed Graph Neural Networks (SGNNs) have become crucial tools for their analysis.

Link Sign Prediction • Representation Learning
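Curriculum learning feeds easy samples before hard ones. A toy sketch of such a schedule over signed edges; the difficulty score is a placeholder, not CSG's measure:

```python
# Toy curriculum schedule: train on the easiest signed edges first, then
# gradually widen the pool to include harder ones.
def curriculum_batches(edges, difficulty, n_stages=3):
    ranked = sorted(edges, key=difficulty)          # easy -> hard
    for stage in range(1, n_stages + 1):
        cutoff = int(len(ranked) * stage / n_stages)
        yield ranked[:cutoff]                       # growing training pool

edges = [("a", "b", +1), ("b", "c", -1), ("a", "c", -1)]
# Placeholder difficulty: treat negative edges as harder (illustrative only).
for pool in curriculum_batches(edges, difficulty=lambda e: 0 if e[2] > 0 else 1):
    print(pool)
```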

SGA: A Graph Augmentation Method for Signed Graph Neural Networks

no code implementations • 15 Oct 2023 • Zeyu Zhang, Shuyan Wan, Sijie Wang, Xianda Zheng, Xinrui Zhang, Kaiqi Zhao, Jiamou Liu, Dong Hao

Signed Graph Neural Networks (SGNNs) are vital for analyzing complex patterns in real-world signed graphs containing positive and negative links.

Data Augmentation • Graph Representation Learning • +1
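A signed-graph augmentation perturbs edges and their signs. A toy sketch of one such perturbation; the random drops and sign flips stand in for SGA's learned augmentation and are purely illustrative:

```python
# Toy signed-graph augmentation: randomly drop a fraction of edges and
# flip the sign of a few others.
import random

def augment(edges, drop_p=0.1, flip_p=0.05, seed=0):
    rng = random.Random(seed)
    out = []
    for u, v, sign in edges:
        if rng.random() < drop_p:
            continue                      # drop this edge entirely
        if rng.random() < flip_p:
            sign = -sign                  # flip positive <-> negative
        out.append((u, v, sign))
    return out
```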

Poster: Self-Supervised Quantization-Aware Knowledge Distillation

no code implementations • 22 Sep 2023 • Kaiqi Zhao, Ming Zhao

Quantization-aware training (QAT) starts with a pre-trained full-precision model and performs quantization during retraining.

Knowledge Distillation • Quantization
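The quantization-during-retraining step is commonly implemented with fake quantization and a straight-through estimator. A minimal sketch under that assumption; the bit-width and uniform quantizer are illustrative, not the paper's exact scheme:

```python
# Sketch of fake quantization for QAT: the forward pass sees quantized
# weights, while gradients pass straight through the rounding.
import torch

def fake_quantize(w, bits=4):
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp_min(1e-8) / qmax
    q = torch.round(w / scale).clamp(-qmax - 1, qmax)
    # Straight-through estimator: forward uses q * scale,
    # backward treats the whole operation as identity.
    return w + (q * scale - w).detach()
```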

Automatic Attention Pruning: Improving and Automating Model Pruning using Attentions

1 code implementation • 14 Mar 2023 • Kaiqi Zhao, Animesh Jain, Ming Zhao

The method proposes adaptive pruning policies for automatically meeting the pruning objectives of accuracy-critical, memory-constrained, and latency-sensitive tasks.
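One way to read "adaptive policies" is a loop that keeps pruning until a user-set objective is satisfied. A minimal sketch under that reading; all hooks and the example threshold are placeholders, not AAP's implementation:

```python
# Sketch of an objective-driven pruning loop: repeatedly remove the
# lowest-scoring filters until the chosen objective is met.
def prune_until(model, objective_met, prune_step, evaluate):
    while not objective_met(evaluate(model)):
        model = prune_step(model)   # e.g. drop lowest-attention filters
    return model

# Example objective for a memory-constrained task (threshold illustrative):
# objective_met = lambda stats: stats["params"] <= 1_000_000
```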

A Contrastive Knowledge Transfer Framework for Model Compression and Transfer Learning

1 code implementation • 14 Mar 2023 • Kaiqi Zhao, Yitao Chen, Ming Zhao

Knowledge Transfer (KT) achieves competitive performance and is widely used for image classification tasks in model compression and transfer learning.

Image Classification • Model Compression • +1
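A contrastive transfer objective can be sketched as an InfoNCE-style loss that pulls each student embedding toward its teacher counterpart and pushes it away from other samples in the batch; the temperature and batch-negatives design are assumptions, not necessarily CKTF's:

```python
# Sketch of a contrastive knowledge transfer loss between teacher and
# student embeddings. Positives sit on the diagonal of the similarity matrix.
import torch
import torch.nn.functional as F

def contrastive_transfer_loss(student_emb, teacher_emb, tau=0.1):
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1)
    logits = s @ t.T / tau                    # batch x batch similarities
    targets = torch.arange(s.size(0))         # matching pairs are positives
    return F.cross_entropy(logits, targets)
```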

GETNext: Trajectory Flow Map Enhanced Transformer for Next POI Recommendation

1 code implementation • 3 Mar 2023 • Song Yang, Jiamou Liu, Kaiqi Zhao

Instead, we propose a user-agnostic global trajectory flow map and a novel Graph Enhanced Transformer model (GETNext) to better exploit the extensive collaborative signals for a more accurate next POI prediction, and alleviate the cold start problem in the meantime.
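A user-agnostic trajectory flow map can be sketched as a directed graph whose edge weights count POI-to-POI transitions pooled over all users; the check-in data below is illustrative:

```python
# Sketch of a global trajectory flow map: aggregate consecutive POI
# transitions from every user's check-in sequence into one weighted graph.
from collections import Counter

checkins = {                       # user -> ordered POI sequence (illustrative)
    "u1": ["cafe", "park", "museum"],
    "u2": ["cafe", "museum"],
}

flow = Counter()
for seq in checkins.values():
    for src, dst in zip(seq, seq[1:]):
        flow[(src, dst)] += 1      # edge weight = pooled transition count

print(flow)
```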

WISK: A Workload-aware Learned Index for Spatial Keyword Queries

no code implementations • 28 Feb 2023 • Yufan Sheng, Xin Cao, Yixiang Fang, Kaiqi Zhao, Jianzhong Qi, Gao Cong, Wenjie Zhang

In this paper, we propose WISK, a learned index for spatial keyword queries, which self-adapts for optimizing querying costs given a query workload.
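The workload-driven objective can be illustrated with a toy one-dimensional cost model where a query's cost is the number of partitions it intersects; WISK's actual learned index and cost model are far richer:

```python
# Toy workload-aware cost model: a layout is a sorted list of split
# points on [0, 1]; a range query pays one unit per partition it touches.
import bisect

def query_cost(boundaries, q_lo, q_hi):
    lo = bisect.bisect_right(boundaries, q_lo)   # first boundary > q_lo
    hi = bisect.bisect_left(boundaries, q_hi)    # first boundary >= q_hi
    return hi - lo + 1                           # partitions overlapped

workload = [(0.1, 0.3), (0.25, 0.6), (0.7, 0.9)]
layout = [0.5]                                   # single split at 0.5
print(sum(query_cost(layout, lo, hi) for lo, hi in workload))  # total = 4
```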

USER: Unsupervised Structural Entropy-based Robust Graph Neural Network

1 code implementation • 12 Feb 2023 • Yifei Wang, Yupan Wang, Zeyu Zhang, Song Yang, Kaiqi Zhao, Jiamou Liu

To this end, we propose USER, an unsupervised robust version of graph neural networks that is based on structural entropy.

Graph Neural Network • Link Prediction • +1
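The structural entropy that USER builds on has a simple one-dimensional form, H(G) = -sum_i (d_i / 2m) * log2(d_i / 2m) over node degrees d_i. A minimal sketch of that quantity (USER itself optimizes a richer objective):

```python
# One-dimensional structural entropy of a graph, computed from its
# degree distribution.
import math
import networkx as nx

def structural_entropy_1d(graph):
    two_m = 2 * graph.number_of_edges()
    return -sum(
        (d / two_m) * math.log2(d / two_m)
        for _, d in graph.degree()
        if d > 0
    )

print(structural_entropy_1d(nx.path_graph(4)))
```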

Enabling Deep Learning on Edge Devices through Filter Pruning and Knowledge Transfer

no code implementations • 22 Jan 2022 • Kaiqi Zhao, Yitao Chen, Ming Zhao

The results show that 1) our model compression method can remove up to 99.36% of the parameters of WRN-28-10 while preserving a Top-1 accuracy of over 90% on CIFAR-10; 2) our knowledge transfer method enables the compressed models to achieve more than 90% accuracy on CIFAR-10 and retain good accuracy on old categories; 3) it allows the compressed models to converge in real time (three to six minutes) on the edge for incremental learning tasks; 4) it enables the model to classify unseen categories of data (78.92% Top-1 accuracy) on which it was never trained.

Image Classification • Incremental Learning • +4

Iterative Activation-based Structured Pruning

no code implementations • 22 Jan 2022 • Kaiqi Zhao, Animesh Jain, Ming Zhao

To solve this problem, we propose two activation-based pruning methods, Iterative Activation-based Pruning (IAP) and Adaptive Iterative Activation-based Pruning (AIAP).
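Activation-based pruning scores filters by the activations they produce rather than by weight magnitudes. A minimal sketch of such a ranking; shapes and the pruning ratio are illustrative, and IAP/AIAP add iteration and adaptivity on top:

```python
# Sketch of activation-based filter ranking: score each conv filter by the
# mean absolute activation it produces on a batch, then mark the
# lowest-scoring filters for removal.
import torch

def rank_filters(activations, prune_ratio=0.3):
    # activations: (batch, channels, H, W) output of a conv layer
    scores = activations.abs().mean(dim=(0, 2, 3))   # one score per filter
    n_prune = int(scores.numel() * prune_ratio)
    return torch.argsort(scores)[:n_prune]           # filter indices to prune

acts = torch.randn(8, 16, 32, 32)
print(rank_filters(acts))
```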

Adaptive Activation-based Structured Pruning

1 code implementation • 21 Jan 2022 • Kaiqi Zhao, Animesh Jain, Ming Zhao

Pruning is a promising approach to compress complex deep learning models in order to deploy them on resource-constrained edge devices.

Space Meets Time: Local Spacetime Neural Network For Traffic Flow Forecasting

no code implementations • 11 Sep 2021 • Song Yang, Jiamou Liu, Kaiqi Zhao

We argue that such local spacetime correlations are universal and play a pivotal role in traffic flow.

Traffic Prediction

BAARD: Blocking Adversarial Examples by Testing for Applicability, Reliability and Decidability

1 code implementation • 2 May 2021 • Xinglong Chang, Katharina Dost, Kaiqi Zhao, Ambra Demontis, Fabio Roli, Gill Dobbie, Jörg Wicker

Applicability Domain defines a domain based on the known compounds and rejects any unknown compound that falls outside the domain.

Blocking
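An applicability-domain check can be sketched as a nearest-neighbor distance threshold calibrated on the training set; k and the quantile here are assumptions, not BAARD's exact settings:

```python
# Sketch of an applicability-domain test: reject inputs whose average
# distance to their k nearest training samples exceeds a threshold
# calibrated on the training data itself.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_domain(X_train, k=5, quantile=0.95):
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    dists, _ = nn.kneighbors(X_train)
    threshold = np.quantile(dists.mean(axis=1), quantile)
    return nn, threshold

def in_domain(nn, threshold, x):
    dists, _ = nn.kneighbors(x.reshape(1, -1))
    return dists.mean() <= threshold     # False -> reject as out-of-domain
```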

Representing Verbs as Argument Concepts

no code implementations • 2 Mar 2018 • Yu Gong, Kaiqi Zhao, Kenny Q. Zhu

Verbs play an important role in the understanding of natural language text.

Object
