1 code implementation • COLING 2022 • Yiming Wang, Qianren Mao, Junnan Liu, Weifeng Jiang, Hongdong Zhu, JianXin Li
Labeling large amounts of extractive summarization data is often prohibitively expensive due to time, financial, and expertise constraints, which poses great challenges to incorporating summarization systems into practical applications.
no code implementations • EACL (AdaptNLP) 2021 • Tianyu Chen, Shaohan Huang, Furu Wei, JianXin Li
In unsupervised domain adaptation, we aim to train a model that works well on a target domain when provided with labeled source samples and unlabeled target samples.
1 code implementation • 17 Mar 2023 • Dongcheng Zou, Hao Peng, Xiang Huang, Renyu Yang, JianXin Li, Jia Wu, Chunyang Liu, Philip S. Yu
Graph Neural Networks (GNNs) are de facto solutions to structural data learning.
no code implementations • 18 Feb 2023 • Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, Hao Peng, JianXin Li, Jia Wu, Ziwei Liu, Pengtao Xie, Caiming Xiong, Jian Pei, Philip S. Yu, Lichao Sun
This study provides a comprehensive review of recent research advancements, current and future challenges, and opportunities for PFMs in text, image, graph, as well as other data modalities.
1 code implementation • 28 Jan 2023 • Cheng Ji, JianXin Li, Hao Peng, Jia Wu, Xingcheng Fu, Qingyun Sun, Phillip S. Yu
Contrastive Learning (CL) has proven to be a powerful self-supervised approach across a wide range of domains, including computer vision and graph representation learning.
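As a point of reference for the contrastive setup this entry refers to, below is a minimal sketch of an InfoNCE-style objective over two augmented views of the same batch; it is a generic illustration under assumed embedding sizes and temperature, not the loss proposed in this paper.

import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Generic InfoNCE loss: z1[i] and z2[i] are embeddings of two views of sample i."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # pairwise cosine similarities
    targets = torch.arange(z1.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# usage: two augmented views of a batch of 8 samples with 16-dim embeddings
loss = info_nce_loss(torch.randn(8, 16), torch.randn(8, 16))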
no code implementations • 30 Dec 2022 • Qingyun Sun, JianXin Li, Beining Yang, Xingcheng Fu, Hao Peng, Philip S. Yu
Most Graph Neural Networks follow the message-passing paradigm, assuming the observed structure depicts the ground-truth node relationships.
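To make the message-passing paradigm mentioned above concrete, here is a minimal sketch of one round of mean-aggregation message passing over the observed edges; it is a generic layer under assumed dimensions, not this paper's model.

import torch
import torch.nn as nn

class MeanPassingLayer(nn.Module):
    """One round of message passing with mean aggregation over observed edges."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # edge_index: shape (2, E); row 0 = source nodes, row 1 = target nodes
        src, dst = edge_index
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, x[src])                        # sum incoming neighbor messages
        deg = torch.bincount(dst, minlength=x.size(0)).clamp(min=1).unsqueeze(1)
        return torch.relu(self.linear(agg / deg))             # mean aggregation + transform

# usage: 4 nodes, 3 directed edges
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
h = MeanPassingLayer(8, 16)(x, edge_index)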
no code implementations • 23 Dec 2022 • Fucai Ke, Weiqing Wang, Weicong Tan, Lan Du, Yuan Jin, Yujin Huang, Hongzhi Yin, JianXin Li
Theoretically, within and across these sessions, students' learning dynamics can be very different.
no code implementations • 15 Nov 2022 • Qian Li, JianXin Li, Lihong Wang, Cheng Ji, Yiming Hei, Jiawei Sheng, Qingyun Sun, Shan Xue, Pengtao Xie
To address the above issues, we propose a Multi-Channel graph neural network utilizing Type information for Event Detection in power systems, named MC-TED, leveraging a semantic channel and a topological channel to enrich information interaction from short texts.
1 code implementation • 17 Aug 2022 • Qingyun Sun, JianXin Li, Haonan Yuan, Xingcheng Fu, Hao Peng, Cheng Ji, Qian Li, Philip S. Yu
Topology-imbalance is a graph-specific imbalance problem caused by the uneven topology positions of labeled nodes, which significantly damages the performance of GNNs.
no code implementations • Findings (ACL) 2022 • Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, JianXin Li, Furu Wei
As more and more pre-trained language models adopt on-cloud deployment, privacy concerns grow quickly, mainly because of the exposure of plain-text user data (e.g., search history, medical records, bank accounts).
no code implementations • 1 Jun 2022 • Tianyu Chen, Shaohan Huang, Yuan Xie, Binxing Jiao, Daxin Jiang, Haoyi Zhou, JianXin Li, Furu Wei
The sparse Mixture-of-Experts (MoE) model is powerful for large-scale pre-training and has achieved promising results due to its model capacity.
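For orientation, the sketch below shows a top-1-gated sparse MoE layer of the kind this line of work builds on; the expert count, hidden sizes, and routing rule are illustrative assumptions, not this paper's architecture.

import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    """Sparse MoE layer: each token is routed to the single highest-scoring expert."""
    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim)
        gate_probs = torch.softmax(self.gate(x), dim=-1)
        top_prob, top_idx = gate_probs.max(dim=-1)            # top-1 routing decision
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                out[mask] = top_prob[mask].unsqueeze(1) * expert(x[mask])
        return out

# usage: 10 tokens with hidden size 32
y = Top1MoE(32)(torch.randn(10, 32))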
1 code implementation • 3 May 2022 • Jinze Yu, Jiaming Liu, Xiaobao Wei, Haoyi Zhou, Yohei Nakata, Denis Gudovskiy, Tomoyuki Okuno, JianXin Li, Kurt Keutzer, Shanghang Zhang
To solve this problem, we propose an end-to-end cross-domain detection Transformer based on the mean teacher framework, MTTrans, which can fully exploit unlabeled target domain data in object detection training and transfer knowledge between domains via pseudo labels.
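A minimal sketch of the two mean-teacher ingredients this entry mentions, an EMA teacher and confidence-filtered pseudo labels, is given below; the momentum and threshold values are assumptions and this is not the MTTrans implementation itself.

import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.999):
    """Teacher weights track an exponential moving average of the student weights."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

@torch.no_grad()
def pseudo_labels(teacher_logits: torch.Tensor, threshold: float = 0.8):
    """Keep only target-domain predictions the teacher is confident about."""
    probs = torch.softmax(teacher_logits, dim=-1)
    conf, labels = probs.max(dim=-1)
    keep = conf >= threshold
    return labels[keep], keep                  # kept labels and the mask of retained samples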
1 code implementation • 3 Mar 2022 • JianXin Li, Xingcheng Fu, Qingyun Sun, Cheng Ji, Jiajun Tan, Jia Wu, Hao Peng
In this paper, we propose a novel Curvature Graph Generative Adversarial Network, the first GAN-based graph representation method in the Riemannian geometric manifold.
no code implementations • 17 Feb 2022 • Xiangjie Kong, Kailai Wang, Mingliang Hou, Feng Xia, Gour Karmakar, JianXin Li
To bridge this research gap and learn human mobility knowledge from these fixed travel behaviors, we propose a multi-pattern passenger flow prediction framework, MPGCN, based on Graph Convolutional Networks (GCN).
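The GCN backbone the framework builds on amounts to propagating station features over a normalized station graph; the sketch below is a generic GCN layer under assumed feature sizes, not MPGCN itself.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, adj: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        a_hat = adj + torch.eye(adj.size(0))                  # add self-loops
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm_adj = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        return torch.relu(norm_adj @ self.linear(h))

# usage: 5 stations whose ridership features are propagated over the station graph
adj = torch.randint(0, 2, (5, 5)).float()
flow = GCNLayer(3, 8)(adj, torch.randn(5, 3))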
no code implementations • 3 Feb 2022 • Tao Liu, Shu Guo, Hao liu, Rui Kang, Mingyang Bai, Jiyang Jiang, Wei Wen, Xing Pan, Jun Tai, JianXin Li, Jian Cheng, Jing Jing, Zhenzhou Wu, Haijun Niu, Haogang Zhu, Zixiao Li, Yongjun Wang, Henry Brodaty, Perminder Sachdev, Daqing Li
Degeneration and adaptation are two competing sides of the same coin called resilience in the progressive processes of brain aging or diseases.
no code implementations • 22 Jan 2022 • XiangYu Song, JianXin Li, Qi Lei, Wei Zhao, Yunliang Chen, Ajmal Mian
The goal of Knowledge Tracing (KT) is to estimate how well students have mastered a concept based on their historical learning of related exercises.
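For context, a classic deep knowledge tracing baseline feeds (concept, correctness) interactions to a recurrent network and reads off per-concept mastery estimates; the sketch below is that generic baseline under assumed sizes, not the model proposed in this entry.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDKT(nn.Module):
    """DKT-style baseline: a GRU reads (concept, correctness) interactions and predicts
    the probability of answering each concept correctly at the next step."""
    def __init__(self, num_concepts: int, hidden: int = 64):
        super().__init__()
        self.num_concepts = num_concepts
        self.gru = nn.GRU(2 * num_concepts, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_concepts)

    def forward(self, concepts: torch.Tensor, correct: torch.Tensor) -> torch.Tensor:
        # concepts, correct: (batch, time); interaction id = concept + correctness offset
        interaction = concepts + self.num_concepts * correct
        x = F.one_hot(interaction, 2 * self.num_concepts).float()
        h, _ = self.gru(x)
        return torch.sigmoid(self.out(h))                     # mastery estimates per step

# usage: 2 students, 5 interactions each, 10 concepts
model = SimpleDKT(num_concepts=10)
p = model(torch.randint(0, 10, (2, 5)), torch.randint(0, 2, (2, 5)))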
no code implementations • 16 Jan 2022 • Borui Cai, Yong Xiang, Longxiang Gao, He Zhang, Yunfeng Li, JianXin Li
KGC methods assume a knowledge graph is static, but this assumption can lead to inaccurate predictions because many facts in knowledge graphs change over time.
1 code implementation • 16 Dec 2021 • Qingyun Sun, JianXin Li, Hao Peng, Jia Wu, Xingcheng Fu, Cheng Ji, Philip S. Yu
Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications.
1 code implementation • TIST 2021 • Haoyi Zhou, Hao Peng, Jieqi Peng, Shuai Zhang, JianXin Li
Extensive experiments are conducted on five large-scale datasets, which demonstrate that our method achieves state-of-the-art performance and validates the effectiveness brought by local structure information.
1 code implementation • 15 Oct 2021 • Xingcheng Fu, JianXin Li, Jia Wu, Qingyun Sun, Cheng Ji, Senzhang Wang, Jiajun Tan, Hao Peng, Philip S. Yu
Hyperbolic Graph Neural Networks (HGNNs) extend GNNs to hyperbolic space and are thus more effective at capturing the hierarchical structures of graphs in node representation learning.
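HGNNs typically move between Euclidean (tangent) features and hyperbolic coordinates via the exponential and logarithmic maps at the origin of the Poincare ball; the sketch below shows those standard maps for curvature parameter c, as a generic reference rather than this paper's specific architecture.

import torch

def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-8) -> torch.Tensor:
    """Map a tangent vector at the origin onto the Poincare ball with curvature -c."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp(min=eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(y: torch.Tensor, c: float = 1.0, eps: float = 1e-8) -> torch.Tensor:
    """Map a point on the Poincare ball back to the tangent space at the origin."""
    sqrt_c = c ** 0.5
    norm = y.norm(dim=-1, keepdim=True).clamp(min=eps)
    return torch.atanh((sqrt_c * norm).clamp(max=1 - 1e-5)) * y / (sqrt_c * norm)

# round trip: Euclidean features -> hyperbolic space -> back
x = torch.randn(4, 8) * 0.1
assert torch.allclose(logmap0(expmap0(x)), x, atol=1e-4)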
2 code implementations • 9 Oct 2021 • Jinghui Si, Xutan Peng, Chen Li, Haotian Xu, JianXin Li
Event Extraction bridges the gap between text and event signals.
no code implementations • 8 Oct 2021 • Hui Yin, XiangYu Song, Shuiqiao Yang, JianXin Li
The outbreak of the novel Coronavirus Disease 2019 (COVID-19) has lasted for nearly two years and caused unprecedented impacts on people's daily life around the world.
no code implementations • 6 Oct 2021 • Taige Zhao, XiangYu Song, JianXin Li, Wei Luo, Imran Razzak
We first propose a graph augmentation-based partition (GAD-Partition) that divides the original graph into augmented subgraphs, reducing communication by selecting and storing as few significant nodes from other processors as possible while guaranteeing training accuracy.
no code implementations • 29 Sep 2021 • Tianyu Chen, Haoyi Zhou, He Mingrui, JianXin Li
Pre-trained language models (e.g., BERT, GPT-3) have revolutionized NLP research, and fine-tuning has become an indispensable step of downstream adaptation.
no code implementations • 21 Sep 2021 • Hui Yin, XiangYu Song, Shuiqiao Yang, Guangyan Huang, JianXin Li
Effective representation learning is critical for short text clustering due to the sparse, high-dimensional, and noisy nature of short text corpora.
no code implementations • 23 Aug 2021 • Qian Li, Shu Guo, Jia Wu, JianXin Li, Jiawei Sheng, Lihong Wang, Xiaohan Dong, Hao Peng
It ignores meaningful associations among event types and argument roles, leading to relatively poor performance for less frequent types/roles.
no code implementations • 5 Jul 2021 • Qian Li, JianXin Li, Jiawei Sheng, Shiyao Cui, Jia Wu, Yiming Hei, Hao Peng, Shu Guo, Lihong Wang, Amin Beheshti, Philip S. Yu
Numerous methods, datasets, and evaluation metrics have been proposed in the literature, raising the need for a comprehensive and updated survey.
1 code implementation • 23 Jun 2021 • Qian Li, Hao Peng, JianXin Li, Jia Wu, Yuanxing Ning, Lihong Wang, Philip S. Yu, Zheng Wang
Our approach leverages knowledge of the already extracted arguments of the same sentence to determine the role of arguments that would be difficult to decide individually.
no code implementations • 7 Jun 2021 • Xin Guo, Jianlei Yang, Haoyi Zhou, Xucheng Ye, JianXin Li
In order to overcome these security problems, RoSearch is proposed as a comprehensive framework to search the student models with better adversarial robustness when performing knowledge distillation.
1 code implementation • 6 Jun 2021 • Qianren Mao, Xi Li, Bang Liu, Shu Guo, Peng Hao, JianXin Li, Lihong Wang
These tokens or phrases may originate from primary fragmental textual pieces (e.g., segments) in the original text and are separated into different segments.
no code implementations • 29 May 2021 • Xi Li, Qianren Mao, Hao Peng, Hongdong Zhu, JianXin Li, Zheng Wang
This paper presents a better TLS approach for automatically and dynamically determining the TLS timeline length.
no code implementations • 28 May 2021 • Junnan Liu, Qianren Mao, Bang Liu, Hao Peng, Hongdong Zhu, JianXin Li
In this paper, we argue that this limitation can be overcome by a semi-supervised approach: consistency training, which leverages large amounts of unlabeled data to improve the performance of supervised learning over a small corpus.
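Schematically, consistency training adds to the supervised loss a term that pushes predictions on unlabeled inputs and their augmented versions to agree; the sketch below shows one common form of that objective, with the KL direction and weighting as illustrative assumptions rather than this paper's exact formulation.

import torch
import torch.nn.functional as F

def consistency_training_loss(model, labeled_x, labels, unlabeled_x, augmented_x,
                              lam: float = 1.0) -> torch.Tensor:
    """Supervised loss on the small labeled set plus a consistency term on unlabeled data."""
    sup = F.cross_entropy(model(labeled_x), labels)
    with torch.no_grad():
        target = F.softmax(model(unlabeled_x), dim=-1)        # fixed prediction on the clean input
    pred = F.log_softmax(model(augmented_x), dim=-1)          # prediction on the augmented input
    cons = F.kl_div(pred, target, reduction="batchmean")
    return sup + lam * cons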
1 code implementation • 22 May 2021 • JianXin Li, Xingcheng Fu, Hao Peng, Senzhang Wang, Shijie Zhu, Qingyun Sun, Philip S. Yu, Lifang He
With the prevalence of graph data in real-world applications, many methods have been proposed in recent years to learn high-quality graph embedding vectors for various types of graphs.
1 code implementation • 17 May 2021 • Hao Peng, Haoran Li, Yangqiu Song, Vincent Zheng, JianXin Li
However, for multiple cross-domain knowledge graphs, state-of-the-art embedding models cannot make full use of the data from different knowledge domains while preserving the privacy of exchanged data.
1 code implementation • 7 May 2021 • Gongxu Luo, JianXin Li, Jianlin Su, Hao Peng, Carl Yang, Lichao Sun, Philip S. Yu, Lifang He
Based on them, we design MinGE to directly calculate the ideal node embedding dimension for any graph.
1 code implementation • 16 Apr 2021 • JianXin Li, Hao Peng, Yuwei Cao, Yingtong Dou, Hekai Zhang, Philip S. Yu, Lifang He
Furthermore, they cannot fully capture the content-based correlations between nodes, as they either do not use the self-attention mechanism or only use it to consider the immediate neighbors of each node, ignoring the higher-order neighbors.
1 code implementation • NAACL 2021 • Zhongfen Deng, Hao Peng, Dongxiao He, JianXin Li, Philip S. Yu
The second one encourages the structure encoder to learn better representations with desired characteristics for all labels which can better handle label imbalance in hierarchical text classification.
1 code implementation • 2 Apr 2021 • Hao Peng, JianXin Li, Yangqiu Song, Renyu Yang, Rajiv Ranjan, Philip S. Yu, Lifang He
Third, we propose a streaming social event detection and evolution discovery framework for HINs based on meta-path similarity search, historical information about meta-paths, and heterogeneous DBSCAN clustering method.
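For the clustering step, scikit-learn's DBSCAN can run directly on a precomputed distance matrix, which is one way a meta-path similarity matrix could be plugged in; the similarity-to-distance conversion and the eps/min_samples values below are assumptions, not the paper's settings.

import numpy as np
from sklearn.cluster import DBSCAN

# toy meta-path similarity matrix for 6 social messages (symmetric, values in [0, 1])
rng = np.random.default_rng(0)
sim = rng.uniform(0.0, 1.0, size=(6, 6))
sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 1.0)

distance = 1.0 - sim                                  # turn similarity into a distance
labels = DBSCAN(eps=0.4, min_samples=2, metric="precomputed").fit_predict(distance)
print(labels)                                         # -1 marks noise, other values are event ids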
no code implementations • 29 Mar 2021 • Guotong Xue, Ming Zhong, JianXin Li, Jia Chen, Chengshuai Zhai, Ruochen Kong
Due to the lack of a comprehensive investigation of these methods, we present a survey of dynamic network embedding in this paper.
2 code implementations • 21 Jan 2021 • Yuwei Cao, Hao Peng, Jia Wu, Yingtong Dou, JianXin Li, Philip S. Yu
The complexity and streaming nature of social messages make it appealing to address social event detection in an incremental learning setting, where acquiring, preserving, and extending knowledge are major concerns.
1 code implementation • 20 Jan 2021 • Qingyun Sun, JianXin Li, Hao Peng, Jia Wu, Yuanxing Ning, Phillip S. Yu, Lifang He
Graph representation learning has attracted increasing research attention.
no code implementations • 18 Jan 2021 • Uno Fang, JianXin Li, Xuequan Lu, Mumtaz Ali, Longxiang Gao, Yong Xiang
Current annotation for plant disease images depends on manual sorting and handcrafted features by agricultural experts, which is time-consuming and labour-intensive.
5 code implementations • 14 Dec 2020 • Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, JianXin Li, Hui Xiong, Wancai Zhang
Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning.
Ranked #5 on Time Series Forecasting on ETTh2 (48) (using extra training data). Related tasks: Multivariate Time Series Forecasting, Univariate Time Series Forecasting.
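To make the long-sequence forecasting setting in the entry above concrete, the sketch below turns a univariate series into (history, future) windows of the kind such forecasters consume; the window lengths are arbitrary assumptions and this is not the paper's model itself.

import numpy as np

def make_windows(series: np.ndarray, input_len: int = 96, pred_len: int = 48):
    """Slice a 1-D series into (history, future) pairs for long-sequence forecasting."""
    xs, ys = [], []
    for start in range(len(series) - input_len - pred_len + 1):
        xs.append(series[start:start + input_len])
        ys.append(series[start + input_len:start + input_len + pred_len])
    return np.stack(xs), np.stack(ys)

# usage: a stand-in for an hourly electricity consumption series
history, future = make_windows(np.sin(np.linspace(0, 50, 1000)))
print(history.shape, future.shape)   # (num_windows, 96) (num_windows, 48)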
1 code implementation • COLING 2020 • Zhongfen Deng, Hao Peng, Congying Xia, JianXin Li, Lifang He, Philip S. Yu
Review rating prediction of text reviews is a rapidly growing technology with a wide range of applications in natural language processing.
1 code implementation • 9 Aug 2020 • Shijie Zhu, JianXin Li, Hao Peng, Senzhang Wang, Lifang He
To capture the directed edges between nodes, existing methods mostly learn two embedding vectors for each node, a source vector and a target vector.
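A minimal sketch of that two-vector scheme is shown below: each node keeps a source vector and a target vector, and a directed edge (u, v) is scored by the inner product of u's source vector with v's target vector; the logistic scoring and dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class SourceTargetEmbedding(nn.Module):
    """Each node keeps a source vector and a target vector; a directed edge u->v is
    scored by <source[u], target[v]>."""
    def __init__(self, num_nodes: int, dim: int = 32):
        super().__init__()
        self.source = nn.Embedding(num_nodes, dim)
        self.target = nn.Embedding(num_nodes, dim)

    def forward(self, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        score = (self.source(u) * self.target(v)).sum(dim=-1)
        return torch.sigmoid(score)                   # probability that edge u->v exists

# usage: score three candidate directed edges in a 100-node graph
model = SourceTargetEmbedding(100)
prob = model(torch.tensor([0, 5, 7]), torch.tensor([5, 0, 9]))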
1 code implementation • 18 Nov 2019 • JianXin Li, Cheng Ji, Hao Peng, Yu He, Yangqiu Song, Xinmiao Zhang, Fanzhang Peng
However, despite the success of current random-walk-based methods, most of them are usually not expressive enough to preserve the personalized higher-order proximity and lack a straightforward objective to theoretically articulate what and how network proximity is preserved.
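For context on the random-walk-based methods this entry discusses, the following is a minimal sketch of uniform random-walk generation over an adjacency list, the common starting point of such embedding pipelines; it does not capture the personalized higher-order proximity this paper targets.

import random

def random_walks(adj: dict, walk_length: int = 10, walks_per_node: int = 5, seed: int = 0):
    """Generate uniform random walks from every node of an adjacency-list graph."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break                              # dead end: stop this walk early
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# usage: a tiny directed graph given as an adjacency list
walks = random_walks({0: [1, 2], 1: [2], 2: [0], 3: []})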
1 code implementation • arXiv preprint 2018 • Xutan Peng, Chen Li, Zhi Cai, Faqiang Shi, Yidan Liu, JianXin Li
In this paper, we initiate a novel system for transferring the texture of music, and release it as an open source project.