1 code implementation • COLING 2022 • Yiming Wang, Qianren Mao, Junnan Liu, Weifeng Jiang, Hongdong Zhu, JianXin Li
Labeling large amounts of extractive summarization data is often prohibitively expensive due to time, financial, and expertise constraints, which poses great challenges to incorporating summarization systems into practical applications.
no code implementations • EACL (AdaptNLP) 2021 • Tianyu Chen, Shaohan Huang, Furu Wei, JianXin Li
In unsupervised domain adaptation, we aim to train a model that works well on a target domain when provided with labeled source samples and unlabeled target samples.
no code implementations • 18 Apr 2024 • Qian Li, Cheng Ji, Shu Guo, Yong Zhao, Qianren Mao, Shangguang Wang, Yuntao Wei, JianXin Li
Existing methods are limited by their neglect of the fact that multiple entity pairs in one sentence share very similar contextual information (i.e., the same text and image), which increases the difficulty of the MMRE task.
no code implementations • 7 Mar 2024 • Qian Li, Shu Guo, Yinjia Chen, Cheng Ji, Jiawei Sheng, JianXin Li
Uncertainty representation is first designed for estimating the uncertainty scope of the entity pairs after transferring feature representations into a Gaussian distribution.
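The idea of mapping entity-pair features to a Gaussian and scoring uncertainty by its spread can be sketched as follows. This is a hypothetical illustration under assumed linear mean/variance heads, not the paper's actual model:

```python
import numpy as np

def gaussian_uncertainty(features, seed=0):
    """Map a feature matrix (n_pairs, dim) to per-pair Gaussian parameters
    and score uncertainty by the average variance magnitude.

    Hypothetical sketch: the mean and log-variance come from fixed random
    linear projections, standing in for learned heads.
    """
    rng = np.random.default_rng(seed)
    dim = features.shape[1]
    w_mu = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    w_logvar = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    mu = features @ w_mu           # Gaussian mean for each entity pair
    logvar = features @ w_logvar   # Gaussian log-variance for each pair
    # A wider distribution (larger average variance) means higher uncertainty.
    uncertainty = np.exp(logvar).mean(axis=1)
    return mu, logvar, uncertainty
```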
no code implementations • 28 Feb 2024 • Feihong Lu, Weiqi Wang, Yangyifei Luo, Ziqin Zhu, Qingyun Sun, Baixuan Xu, Haochen Shi, Shiqi Gao, Qian Li, Yangqiu Song, JianXin Li
However, understanding the intention behind social media posts remains challenging due to the implicitness of intentions in social media posts, the need for cross-modality understanding of both text and images, and the presence of noisy information such as hashtags, misspelled words, and complicated abbreviations.
no code implementations • 25 Feb 2024 • Tianyu Chen, Haoyi Zhou, Ying Li, Hao Wang, Chonghan Gao, Shanghang Zhang, JianXin Li
Foundation models have revolutionized knowledge acquisition across domains, and our study introduces OmniArch, a paradigm-shifting approach designed for building foundation models in multi-physics scientific computing.
no code implementations • 22 Feb 2024 • Qi Hu, Weifeng Jiang, Haoran Li, ZiHao Wang, Jiaxin Bai, Qianren Mao, Yangqiu Song, Lixin Fan, JianXin Li
An entity can be involved in multiple knowledge graphs (KGs), and reasoning over multiple KGs and answering complex queries on multi-source KGs is important for discovering knowledge across graphs.
2 code implementations • 16 Feb 2024 • Yangyifei Luo, Zhuo Chen, Lingbing Guo, Qian Li, Wenxuan Zeng, Zhixin Cai, JianXin Li
Entity alignment (EA) aims to identify entities across different knowledge graphs that represent the same real-world objects.
no code implementations • 14 Feb 2024 • Zhao Li, Xin Wang, JianXin Li, Wenbin Guo, Jun Zhao
Existing knowledge hypergraph embedding methods mainly focused on improving model performance, but their model structures are becoming more complex and redundant.
1 code implementation • 9 Feb 2024 • Haonan Yuan, Qingyun Sun, Xingcheng Fu, Cheng Ji, JianXin Li
Guided by the Information Bottleneck (IB) principle, we first propose that the expected optimal representations should satisfy the Minimal-Sufficient-Consensual (MSC) condition.
no code implementations • 19 Jan 2024 • Ziqi Yuan, Haoyi Zhou, Tianyu Chen, JianXin Li
The analysis of persistent homology demonstrates its effectiveness in capturing the topological structure formed by normal edge features.
1 code implementation • 25 Nov 2023 • Wei Yuan, Chaoqun Yang, Liang Qu, Quoc Viet Hung Nguyen, JianXin Li, Hongzhi Yin
Existing FedRecs generally adhere to a learning protocol in which a central server shares a global recommendation model with clients, and participants achieve collaborative learning by frequently communicating the model's public parameters.
1 code implementation • NeurIPS 2023 • Haonan Yuan, Qingyun Sun, Xingcheng Fu, Ziwei Zhang, Cheng Ji, Hao Peng, JianXin Li
To the best of our knowledge, we are the first to study OOD generalization on dynamic graphs from the environment learning perspective.
1 code implementation • 29 Oct 2023 • Qianren Mao, Shaobo Zhao, Jiarui Li, Xiaolei Gu, Shizhu He, Bo Li, JianXin Li
Pre-trained sentence representations are crucial for identifying significant sentences in unsupervised document extractive summarization.
no code implementations • 26 Oct 2023 • Shuai Zheng, Zhizhe Liu, Zhenfeng Zhu, Xingxing Zhang, JianXin Li, Yao Zhao
On this basis, BiKT not only allows us to acquire knowledge from both the GNN and its derived model, but also lets the two promote each other by injecting knowledge into one another.
2 code implementations • NeurIPS 2023 • Beining Yang, Kai Wang, Qingyun Sun, Cheng Ji, Xingcheng Fu, Hao Tang, Yang You, JianXin Li
We validate the proposed SGDD across 9 datasets and achieve state-of-the-art results on all of them: for example, on the YelpChi dataset, our approach maintains 98.6% of the test accuracy obtained by training on the original graph dataset while reducing the graph scale by a factor of 1,000.
1 code implementation • 10 Oct 2023 • Qian Li, Cheng Ji, Shu Guo, Zhaoji Liang, Lihong Wang, JianXin Li
To address these challenges, we propose a novel MMEA transformer, called MoAlign, that hierarchically introduces neighbor features, multi-modal attributes, and entity types to enhance the alignment task.
1 code implementation • 7 Sep 2023 • Xurong Liang, Tong Chen, Quoc Viet Hung Nguyen, JianXin Li, Hongzhi Yin
In addition, we innovatively design a regularized pruning mechanism in CERP, such that the two sparsified meta-embedding tables are encouraged to encode information that is mutually complementary.
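Pruning an embedding table toward sparsity can be sketched with a simple magnitude threshold. This is a simplified stand-in for the regularized, learnable-threshold pruning described above, not CERP's actual mechanism:

```python
import numpy as np

def prune_embeddings(table, threshold):
    """Zero out embedding entries whose magnitude falls below a threshold,
    producing a sparsified table. Hypothetical illustration: real pruning
    schemes typically learn the threshold jointly with a regularizer."""
    return np.where(np.abs(table) > threshold, table, 0.0)
```

Applying this to two tables with different thresholds yields two sparse tables whose surviving entries can, in principle, encode complementary information.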
no code implementations • 19 Jun 2023 • Qian Li, Shu Guo, Cheng Ji, Xutan Peng, Shiyao Cui, JianXin Li
Multi-Modal Relation Extraction (MMRE) aims at identifying the relation between two entities in texts that contain visual clues.
no code implementations • 31 May 2023 • Tianyu Chen, Yuan Xie, Shuai Zhang, Shaohan Huang, Haoyi Zhou, JianXin Li
Music representation learning is notoriously difficult because of the complex human-related concepts contained in sequences of numerical signals.
2 code implementations • 20 May 2023 • Weifeng Jiang, Qianren Mao, Chenghua Lin, JianXin Li, Ting Deng, Weiyi Yang, Zheng Wang
Many text mining models are constructed by fine-tuning a large deep pre-trained language model (PLM) in downstream tasks.
1 code implementation • 11 Apr 2023 • Xingcheng Fu, Yuecen Wei, Qingyun Sun, Haonan Yuan, Jia Wu, Hao Peng, JianXin Li
We find that labeled training nodes with different hierarchical properties have a significant impact on node classification tasks, and we confirm this in our experiments.
no code implementations • 4 Apr 2023 • Qian Li, Shu Guo, Yangyifei Luo, Cheng Ji, Lihong Wang, Jiawei Sheng, JianXin Li
In this paper, we propose a novel attribute-consistent knowledge graph representation learning framework for MMEA (ACK-MMEA) to compensate the contextual gaps through incorporating consistent alignment knowledge.
2 code implementations • 17 Mar 2023 • Dongcheng Zou, Hao Peng, Xiang Huang, Renyu Yang, JianXin Li, Jia Wu, Chunyang Liu, Philip S. Yu
Graph Neural Networks (GNNs) are de facto solutions to structural data learning.
no code implementations • 18 Feb 2023 • Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, Hao Peng, JianXin Li, Jia Wu, Ziwei Liu, Pengtao Xie, Caiming Xiong, Jian Pei, Philip S. Yu, Lichao Sun
This study provides a comprehensive review of recent research advancements, challenges, and opportunities for PFMs in text, image, graph, as well as other data modalities.
1 code implementation • 28 Jan 2023 • Cheng Ji, JianXin Li, Hao Peng, Jia Wu, Xingcheng Fu, Qingyun Sun, Phillip S. Yu
Contrastive Learning (CL) has proven to be a powerful self-supervised approach for a wide range of domains, including computer vision and graph representation learning.
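A common contrastive objective across these domains is the InfoNCE loss, which pulls two augmented views of the same sample together and pushes other samples apart. A generic NumPy sketch (not this paper's specific objective):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE loss between two views z1, z2 (each shape (n, d)) of the
    same batch. Positive pairs sit on the diagonal of the similarity
    matrix; every other column acts as a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (n, n) cosine similarities
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Identical views give a low loss; unrelated views give a loss near log(n).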
no code implementations • 30 Dec 2022 • Qingyun Sun, JianXin Li, Beining Yang, Xingcheng Fu, Hao Peng, Philip S. Yu
Most Graph Neural Networks follow the message-passing paradigm, assuming the observed structure depicts the ground-truth node relationships.
no code implementations • 15 Nov 2022 • Qian Li, JianXin Li, Lihong Wang, Cheng Ji, Yiming Hei, Jiawei Sheng, Qingyun Sun, Shan Xue, Pengtao Xie
To address the above issues, we propose a Multi-Channel graph neural network utilizing Type information for Event Detection in power systems, named MC-TED, leveraging a semantic channel and a topological channel to enrich information interaction from short texts.
1 code implementation • 17 Aug 2022 • Qingyun Sun, JianXin Li, Haonan Yuan, Xingcheng Fu, Hao Peng, Cheng Ji, Qian Li, Philip S. Yu
Topology-imbalance is a graph-specific imbalance problem caused by the uneven topology positions of labeled nodes, which significantly damages the performance of GNNs.
no code implementations • 1 Jun 2022 • Tianyu Chen, Shaohan Huang, Yuan Xie, Binxing Jiao, Daxin Jiang, Haoyi Zhou, JianXin Li, Furu Wei
The sparse Mixture-of-Experts (MoE) model is powerful for large-scale pre-training and has achieved promising results due to its model capacity.
no code implementations • Findings (ACL) 2022 • Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, JianXin Li, Furu Wei
As more and more pre-trained language models adopt on-cloud deployment, privacy issues grow quickly, mainly from the exposure of plain-text user data (e.g., search history, medical records, bank accounts).
1 code implementation • 3 May 2022 • Jinze Yu, Jiaming Liu, Xiaobao Wei, Haoyi Zhou, Yohei Nakata, Denis Gudovskiy, Tomoyuki Okuno, JianXin Li, Kurt Keutzer, Shanghang Zhang
To solve this problem, we propose an end-to-end cross-domain detection Transformer based on the mean teacher framework, MTTrans, which can fully exploit unlabeled target domain data in object detection training and transfer knowledge between domains via pseudo labels.
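The core of the mean-teacher framework is an exponential-moving-average (EMA) update of teacher parameters from the student, whose stable predictions then serve as pseudo labels. A minimal sketch with parameters as plain dicts (hypothetical, for illustration only):

```python
def ema_update(teacher, student, momentum=0.999):
    """Update each teacher parameter as an exponential moving average of
    the corresponding student parameter. Real implementations iterate
    over model state_dicts; plain float dicts are used here for clarity."""
    return {k: momentum * teacher[k] + (1 - momentum) * student[k]
            for k in teacher}
```

With a high momentum, the teacher changes slowly, smoothing out noisy student updates before its predictions are used as pseudo labels on the target domain.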
1 code implementation • 3 Mar 2022 • JianXin Li, Xingcheng Fu, Qingyun Sun, Cheng Ji, Jiajun Tan, Jia Wu, Hao Peng
In this paper, we propose a novel Curvature Graph Generative Adversarial Networks method, which is the first GAN-based graph representation method in the Riemannian geometric manifold.
no code implementations • 17 Feb 2022 • Xiangjie Kong, Kailai Wang, Mingliang Hou, Feng Xia, Gour Karmakar, JianXin Li
To reduce this research gap and learn human mobility knowledge from these fixed travel behaviors, we propose a multi-pattern passenger flow prediction framework, MPGCN, based on Graph Convolutional Networks (GCN).
no code implementations • 3 Feb 2022 • Tao Liu, Shu Guo, Hao liu, Rui Kang, Mingyang Bai, Jiyang Jiang, Wei Wen, Xing Pan, Jun Tai, JianXin Li, Jian Cheng, Jing Jing, Zhenzhou Wu, Haijun Niu, Haogang Zhu, Zixiao Li, Yongjun Wang, Henry Brodaty, Perminder Sachdev, Daqing Li
Degeneration and adaptation are two competing sides of the same coin called resilience in the progressive processes of brain aging or diseases.
no code implementations • 22 Jan 2022 • XiangYu Song, JianXin Li, Qi Lei, Wei Zhao, Yunliang Chen, Ajmal Mian
The goal of Knowledge Tracing (KT) is to estimate how well students have mastered a concept based on their historical learning of related exercises.
no code implementations • 16 Jan 2022 • Borui Cai, Yong Xiang, Longxiang Gao, He Zhang, Yunfeng Li, JianXin Li
KGC methods assume a knowledge graph is static, but this may lead to inaccurate predictions because many facts in knowledge graphs change over time.
1 code implementation • 16 Dec 2021 • Qingyun Sun, JianXin Li, Hao Peng, Jia Wu, Xingcheng Fu, Cheng Ji, Philip S. Yu
Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications.
1 code implementation • TIST 2021 • Haoyi Zhou, Hao Peng, Jieqi Peng, Shuai Zhang, JianXin Li
Extensive experiments are conducted on five large-scale datasets, which demonstrate that our method achieves state-of-the-art performance and validates the effectiveness brought by local structure information.
1 code implementation • 15 Oct 2021 • Xingcheng Fu, JianXin Li, Jia Wu, Qingyun Sun, Cheng Ji, Senzhang Wang, Jiajun Tan, Hao Peng, Philip S. Yu
Hyperbolic Graph Neural Networks (HGNNs) extend GNNs to hyperbolic space and are thus more effective at capturing the hierarchical structures of graphs in node representation learning.
2 code implementations • 9 Oct 2021 • Jinghui Si, Xutan Peng, Chen Li, Haotian Xu, JianXin Li
Event Extraction bridges the gap between text and event signals.
no code implementations • 8 Oct 2021 • Hui Yin, XiangYu Song, Shuiqiao Yang, JianXin Li
The outbreak of the novel Coronavirus Disease 2019 (COVID-19) has lasted for nearly two years and caused unprecedented impacts on people's daily life around the world.
no code implementations • 6 Oct 2021 • Taige Zhao, XiangYu Song, JianXin Li, Wei Luo, Imran Razzak
We first propose a graph augmentation-based partition (GAD-Partition) that divides the original graph into augmented subgraphs to reduce communication, by selecting and storing as few significant nodes from other processors as possible while guaranteeing training accuracy.
no code implementations • 29 Sep 2021 • Tianyu Chen, Haoyi Zhou, He Mingrui, JianXin Li
Pre-trained language models (e.g., BERT, GPT-3) have revolutionized NLP research, and fine-tuning has become an indispensable step of downstream adaptation.
no code implementations • 21 Sep 2021 • Hui Yin, XiangYu Song, Shuiqiao Yang, Guangyan Huang, JianXin Li
Effective representation learning is critical for short text clustering due to the sparse, high-dimensional, and noisy attributes of short text corpora.
no code implementations • 23 Aug 2021 • Qian Li, Shu Guo, Jia Wu, JianXin Li, Jiawei Sheng, Lihong Wang, Xiaohan Dong, Hao Peng
It ignores meaningful associations among event types and argument roles, leading to relatively poor performance for less frequent types/roles.
no code implementations • 5 Jul 2021 • Qian Li, JianXin Li, Jiawei Sheng, Shiyao Cui, Jia Wu, Yiming Hei, Hao Peng, Shu Guo, Lihong Wang, Amin Beheshti, Philip S. Yu
Numerous methods, datasets, and evaluation metrics have been proposed in the literature, raising the need for a comprehensive and updated survey.
1 code implementation • 23 Jun 2021 • Qian Li, Hao Peng, JianXin Li, Jia Wu, Yuanxing Ning, Lihong Wang, Philip S. Yu, Zheng Wang
Our approach leverages knowledge of the already extracted arguments of the same sentence to determine the role of arguments that would be difficult to decide individually.
no code implementations • 7 Jun 2021 • Xin Guo, Jianlei Yang, Haoyi Zhou, Xucheng Ye, JianXin Li
In order to overcome these security problems, RoSearch is proposed as a comprehensive framework to search the student models with better adversarial robustness when performing knowledge distillation.
1 code implementation • 6 Jun 2021 • Qianren Mao, Xi Li, Bang Liu, Shu Guo, Peng Hao, JianXin Li, Lihong Wang
These tokens or phrases may originate from primary fragmental textual pieces (e.g., segments) in the original text and are separated into different segments.
no code implementations • 29 May 2021 • Qianren Mao, Jiazheng Wang, Zheng Wang, Xi Li, Bo Li, JianXin Li
We meticulously analyze the corpus using well-known metrics, focusing on the style of the summaries and the complexity of the summarization task.
no code implementations • 28 May 2021 • Junnan Liu, Qianren Mao, Bang Liu, Hao Peng, Hongdong Zhu, JianXin Li
In this paper, we argue that this limitation can be overcome by a semi-supervised approach: consistency training, which leverages large amounts of unlabeled data to improve the performance of supervised learning over a small corpus.
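A generic consistency-training objective combines cross-entropy on the small labeled set with a KL term that pushes predictions on unlabeled inputs and their augmented versions together. The sketch below illustrates the general approach, not this paper's exact loss:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def consistency_training_loss(logits_lab, labels,
                              logits_unlab, logits_unlab_aug, lam=1.0):
    """Supervised cross-entropy on labeled data plus a KL consistency
    penalty between predictions on unlabeled inputs and their augmented
    counterparts. All shapes: (batch, n_classes)."""
    p_lab = softmax(logits_lab)
    ce = -np.mean(np.log(p_lab[np.arange(len(labels)), labels] + 1e-12))
    p = softmax(logits_unlab)
    q = softmax(logits_unlab_aug)
    kl = np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1))
    return ce + lam * kl
```

When augmentation leaves the model's prediction unchanged, the KL term vanishes and only the supervised loss remains.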
1 code implementation • 22 May 2021 • JianXin Li, Xingcheng Fu, Hao Peng, Senzhang Wang, Shijie Zhu, Qingyun Sun, Philip S. Yu, Lifang He
With the prevalence of graph data in real-world applications, many methods have been proposed in recent years to learn high-quality graph embedding vectors for various types of graphs.
1 code implementation • 17 May 2021 • Hao Peng, Haoran Li, Yangqiu Song, Vincent Zheng, JianXin Li
However, for multiple cross-domain knowledge graphs, state-of-the-art embedding models cannot make full use of the data from different knowledge domains while preserving the privacy of exchanged data.
1 code implementation • 7 May 2021 • Gongxu Luo, JianXin Li, Jianlin Su, Hao Peng, Carl Yang, Lichao Sun, Philip S. Yu, Lifang He
Based on them, we design MinGE to directly calculate the ideal node embedding dimension for any graph.
1 code implementation • 16 Apr 2021 • JianXin Li, Hao Peng, Yuwei Cao, Yingtong Dou, Hekai Zhang, Philip S. Yu, Lifang He
Furthermore, they cannot fully capture the content-based correlations between nodes, as they either do not use the self-attention mechanism or only use it to consider the immediate neighbors of each node, ignoring the higher-order neighbors.
1 code implementation • NAACL 2021 • Zhongfen Deng, Hao Peng, Dongxiao He, JianXin Li, Philip S. Yu
The second one encourages the structure encoder to learn better representations with desired characteristics for all labels which can better handle label imbalance in hierarchical text classification.
1 code implementation • 2 Apr 2021 • Hao Peng, JianXin Li, Yangqiu Song, Renyu Yang, Rajiv Ranjan, Philip S. Yu, Lifang He
Third, we propose a streaming social event detection and evolution discovery framework for HINs based on meta-path similarity search, historical information about meta-paths, and heterogeneous DBSCAN clustering method.
no code implementations • 29 Mar 2021 • Guotong Xue, Ming Zhong, JianXin Li, Jia Chen, Chengshuai Zhai, Ruochen Kong
Due to the lack of a comprehensive investigation of them, we present a survey of dynamic network embedding in this paper.
2 code implementations • 21 Jan 2021 • Yuwei Cao, Hao Peng, Jia Wu, Yingtong Dou, JianXin Li, Philip S. Yu
The complexity and streaming nature of social messages make it appealing to address social event detection in an incremental learning setting, where acquiring, preserving, and extending knowledge are major concerns.
1 code implementation • 20 Jan 2021 • Qingyun Sun, JianXin Li, Hao Peng, Jia Wu, Yuanxing Ning, Phillip S. Yu, Lifang He
Graph representation learning has attracted increasing research attention.
no code implementations • 18 Jan 2021 • Uno Fang, JianXin Li, Xuequan Lu, Mumtaz Ali, Longxiang Gao, Yong Xiang
Current annotation for plant disease images depends on manual sorting and handcrafted features by agricultural experts, which is time-consuming and labour-intensive.
7 code implementations • 14 Dec 2020 • Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, JianXin Li, Hui Xiong, Wancai Zhang
Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning.
Ranked #1 on Time Series Forecasting on ETTh2 (336) Univariate
1 code implementation • COLING 2020 • Zhongfen Deng, Hao Peng, Congying Xia, JianXin Li, Lifang He, Philip S. Yu
Review rating prediction of text reviews is a rapidly growing technology with a wide range of applications in natural language processing.
1 code implementation • 9 Aug 2020 • Shijie Zhu, JianXin Li, Hao Peng, Senzhang Wang, Lifang He
To capture the directed edges between nodes, existing methods mostly learn two embedding vectors for each node: a source vector and a target vector.
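The two-vector scheme described above can be sketched in a few lines: each node has a source vector (used when it emits edges) and a target vector (used when it receives them), so the score of u -> v generally differs from v -> u. A generic illustration, not this paper's model:

```python
import numpy as np

def directed_edge_score(src_emb, tgt_emb, u, v):
    """Score a directed edge u -> v as the inner product of u's source
    vector and v's target vector. With separate tables, the score is
    asymmetric: directed_edge_score(u, v) != directed_edge_score(v, u)
    in general."""
    return float(src_emb[u] @ tgt_emb[v])
```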
1 code implementation • 18 Nov 2019 • JianXin Li, Cheng Ji, Hao Peng, Yu He, Yangqiu Song, Xinmiao Zhang, Fanzhang Peng
However, despite the success of current random-walk-based methods, most of them are usually not expressive enough to preserve the personalized higher-order proximity and lack a straightforward objective to theoretically articulate what and how network proximity is preserved.
1 code implementation • arXiv preprint 2018 • Xutan Peng, Chen Li, Zhi Cai, Faqiang Shi, Yidan Liu, JianXin Li
In this paper, we initiate a novel system for transferring the texture of music, and release it as an open source project.