1 code implementation • 13 Feb 2025 • Ciyuan Peng, Yuelong Huang, Qichao Dong, Shuo Yu, Feng Xia, Chengqi Zhang, Yaochu Jin
State-of-the-art brain graph analysis methods fail to fully encode the small-world architecture of brain graphs (characterized by the presence of hubs and functional modules), and therefore lack biological plausibility to some extent.
1 code implementation • 12 Dec 2024 • Shengchao Chen, Guodong Long, Jing Jiang, Chengqi Zhang
Training a general-purpose time series foundation model with robust generalization capabilities across diverse applications from scratch is still an open challenge.
1 code implementation • 16 Oct 2024 • Hongduan Tian, Feng Liu, Zhanke Zhou, Tongliang Liu, Chengqi Zhang, Bo Han
However, in this paper, we find that there naturally exists a gap, resembling the modality gap, between the prototype and image instance embeddings extracted from the frozen pre-trained backbone, and that simply applying the same transformation to both during the adaptation phase constrains the exploration of optimal representations and shrinks the gap between prototype and image representations.
no code implementations • 11 Oct 2024 • Zhiwei Li, Guodong Long, Jing Jiang, Chengqi Zhang
Federated recommendation systems are essential for providing personalized recommendations while protecting user privacy.
1 code implementation • 10 Oct 2024 • Qi Wang, Jindong Li, Shiqi Wang, Qianli Xing, Runliang Niu, He Kong, Rui Li, Guodong Long, Yi Chang, Chengqi Zhang
Large language models (LLMs) have not only revolutionized the field of natural language processing (NLP) but also have the potential to bring a paradigm shift in many other fields due to their remarkable language-understanding abilities, impressive generalization capabilities, and reasoning skills.
no code implementations • 9 Oct 2024 • Siyu Zhou, Tianyi Zhou, Yijun Yang, Guodong Long, Deheng Ye, Jing Jiang, Chengqi Zhang
The resulting world model is composed of the LLM and the learned rules.
no code implementations • 4 Oct 2024 • Yue Tan, Guodong Long, Jing Jiang, Chengqi Zhang
Traditional federated learning (FL) methods often rely on fixed weighting for parameter aggregation, neglecting the mutual influence among clients.
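For context, a minimal sketch of the fixed-weight aggregation this entry refers to (FedAvg-style weighting by local sample count; function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Fixed-weight parameter aggregation (FedAvg-style): each client's
    parameters are weighted by its local sample count, independent of any
    mutual influence among clients."""
    weights = np.asarray(client_sizes, dtype=float)
    weights = weights / weights.sum()
    return {
        name: sum(w * params[name] for w, params in zip(weights, client_params))
        for name in client_params[0]
    }

# toy usage: three clients, one parameter tensor each
clients = [{"w": np.ones(3) * v} for v in (1.0, 2.0, 3.0)]
print(fedavg_aggregate(clients, client_sizes=[10, 30, 60]))
```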
1 code implementation • 16 Aug 2024 • Zhiwei Li, Guodong Long, Tianyi Zhou, Jing Jiang, Chengqi Zhang
Existing FedCF methods typically combine distributed Collaborative Filtering (CF) algorithms with privacy-preserving mechanisms and then preserve personalized information in a user embedding vector.
no code implementations • 3 Aug 2024 • Zicheng Zhao, Linhao Luo, Shirui Pan, Chengqi Zhang, Chen Gong
Due to the long-tailed distribution of relations and the incompleteness of KGs, there is growing interest in few-shot knowledge graph completion (FKGC).
1 code implementation • 27 May 2024 • Yixin Liu, Shiyuan Li, Yu Zheng, Qingfeng Chen, Chengqi Zhang, Shirui Pan
Graph anomaly detection (GAD), which aims to identify abnormal nodes that differ from the majority within a graph, has garnered significant attention.
1 code implementation • 26 May 2024 • Shutong Chen, Tianyi Zhou, Guodong Long, Jie Ma, Jing Jiang, Chengqi Zhang
At every mid-level, it learns multiple models, each assigned to a subgroup of clients, as in clustered FL.
no code implementations • 24 May 2024 • Shengchao Chen, Guodong Long, Jing Jiang, Chengqi Zhang
This paper demonstrates that pre-trained language models (PLMs) are strong foundation models for on-device meteorological variables modeling.
no code implementations • 12 May 2024 • Zhiwei Li, Guodong Long, Chunxu Zhang, Honglei Zhang, Jing Jiang, Chengqi Zhang
In this study, we conduct a comprehensive review of FRSs with FMs.
no code implementations • 26 Apr 2024 • Renqiang Luo, Tao Tang, Feng Xia, Jiaying Liu, Chengpei Xu, Leo Yu Zhang, Wei Xiang, Chengqi Zhang
Recent advancements in machine learning and deep learning have brought algorithmic fairness into sharp focus, illuminating concerns over discriminatory decision making that negatively impacts certain individuals or groups.
1 code implementation • 16 Apr 2024 • Zhihong Deng, Jing Jiang, Guodong Long, Chengqi Zhang
In sequential decision-making problems involving sensitive attributes like race and gender, reinforcement learning (RL) agents must carefully consider long-term fairness while maximizing returns.
no code implementations • 13 Apr 2024 • Chengpei Xu, Hao Fu, Long Ma, Wenjing Jia, Chengqi Zhang, Feng Xia, Xiaoyu Ai, Binghao Li, Wenjie Zhang
Localizing text in low-light environments is challenging due to visual degradations.
no code implementations • 6 Feb 2024 • Bohao Qu, Xiaofeng Cao, Qing Guo, Yi Chang, Ivor W. Tsang, Chengqi Zhang
In this study, we present a transductive inference approach on a reward information propagation graph, which enables the effective estimation of rewards for unlabelled data in offline reinforcement learning.
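A minimal sketch of transductive reward propagation over a similarity graph, assuming a label-propagation-style update; the paper's actual graph construction and propagation rule are not shown here:

```python
import numpy as np

def propagate_rewards(adj, rewards, labelled, alpha=0.9, iters=50):
    """Diffuse observed rewards over a row-normalized similarity graph while
    clamping labelled nodes to their observed values (label-propagation style)."""
    P = adj / np.clip(adj.sum(axis=1, keepdims=True), 1e-12, None)
    seed = np.where(labelled, rewards, 0.0)
    r = seed.copy()
    for _ in range(iters):
        r = alpha * (P @ r) + (1 - alpha) * seed
        r[labelled] = rewards[labelled]        # keep observed rewards fixed
    return r

# toy usage: a 3-node chain where only node 0 has an observed reward
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(propagate_rewards(adj, rewards=np.array([1., 0., 0.]),
                        labelled=np.array([True, False, False])))
```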
no code implementations • 21 Jan 2024 • Hongzhi Yin, Liang Qu, Tong Chen, Wei Yuan, Ruiqi Zheng, Jing Long, Xin Xia, Yuhui Shi, Chengqi Zhang
Recently, driven by the advances in storage, communication, and computation capabilities of edge devices, there has been a shift of focus from CloudRSs to on-device recommender systems (DeviceRSs), which leverage the capabilities of edge devices to minimize centralized data storage requirements, reduce the response latency caused by communication overheads, and enhance user privacy and security by localizing data processing and model training.
1 code implementation • 5 Dec 2023 • Shengchao Chen, Guodong Long, Jing Jiang, Dikai Liu, Chengqi Zhang
Furthermore, in relation to the creation and application of foundation models for weather and climate data understanding, we delve into the field's prevailing challenges, offer crucial insights, and propose detailed avenues for future research.
no code implementations • 4 Jul 2023 • Zhihong Deng, Jing Jiang, Guodong Long, Chengqi Zhang
Causality, however, offers a notable advantage as it can formalize knowledge in a systematic manner and leverage invariance for effective knowledge transfer.
1 code implementation • 1 Jun 2023 • Xiuying Chen, Guodong Long, Chongyang Tao, Mingzhe Li, Xin Gao, Chengqi Zhang, Xiangliang Zhang
The other factor lies in the latent space, where the attacked inputs introduce more variation into the hidden states.
2 code implementations • 23 May 2023 • Shengchao Chen, Guodong Long, Tao Shen, Jing Jiang, Chengqi Zhang
On-device intelligence for weather forecasting uses local deep learning models to analyze weather patterns without centralized cloud computing, which is significant for supporting human activities.
no code implementations • 9 Apr 2023 • Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
In this paper, we study which modules in neural networks are more prone to forgetting by investigating their training dynamics during CL.
no code implementations • 27 Jan 2023 • Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
To address these challenges, we create a small model for a new task from the pruned models of similar tasks.
1 code implementation • 16 Jan 2023 • Chunxu Zhang, Guodong Long, Tianyi Zhou, Peng Yan, Zijian Zhang, Chengqi Zhang, Bo Yang
Moreover, we provide visualizations and in-depth analysis of the personalization techniques in item embedding, which offer novel insights into the design of recommender systems in federated settings.
1 code implementation • 23 Nov 2022 • Yue Tan, Yixin Liu, Guodong Long, Jing Jiang, Qinghua Lu, Chengqi Zhang
Inspired by this, we propose FedStar, an FGL framework that extracts and shares the common underlying structure information for inter-graph federated learning tasks.
no code implementations • 29 May 2022 • Shaoshen Wang, Yanbin Liu, Ling Chen, Chengqi Zhang
Empirically, DERM outperformed the state-of-the-art on the unsupervised AD benchmark consisting of 18 datasets.
1 code implementation • ACL 2022 • Yunqiu Xu, Meng Fang, Ling Chen, Yali Du, Joey Tianyi Zhou, Chengqi Zhang
Text-based games provide an interactive way to study natural language processing.
1 code implementation • 13 Feb 2022 • Jie Ma, Guodong Long, Tianyi Zhou, Jing Jiang, Chengqi Zhang
Knowledge sharing and model personalization are essential components to tackle the non-IID challenge in federated learning (FL).
no code implementations • 29 Sep 2021 • Shuang Ao, Tianyi Zhou, Jing Jiang, Guodong Long, Xuan Song, Chengqi Zhang
They are complementary in acquiring more informative feedback for RL: the planning policy provides dense rewards for finishing easier sub-tasks, while the environment policy modifies these sub-tasks to be adequately challenging and diverse so the RL agent can quickly adapt to different tasks/environments.
no code implementations • 29 Sep 2021 • Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang, Liming Zhu, Chengqi Zhang
Can we find a better initialization for a new task, e.g., a much smaller network closer to the final pruned model, by exploiting its similar tasks?
no code implementations • 29 Sep 2021 • Meng Fang, Yunqiu Xu, Yali Du, Ling Chen, Chengqi Zhang
In a variety of text-based games, we show that this simple method results in competitive performance for agents.
no code implementations • 29 Sep 2021 • Han Zheng, Jing Jiang, Pengfei Wei, Guodong Long, Xuan Song, Chengqi Zhang
URPL adds an uncertainty regularization term to the policy learning objective to enforce learning a more stable policy under the offline setting.
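A hedged sketch of what an uncertainty regularization term in a policy objective can look like, assuming ensemble disagreement of Q-estimates as the uncertainty proxy (the paper's exact formulation may differ):

```python
import torch

def uncertainty_regularized_policy_loss(q_ensemble, log_prob, lam=1.0):
    """Policy surrogate that rewards high-value actions while penalizing the
    disagreement (std) of an ensemble of Q-estimates, used here as a proxy
    for epistemic uncertainty.

    q_ensemble: (num_critics, batch) Q-values for actions sampled from the policy.
    log_prob:   (batch,) log-probabilities of those actions.
    """
    q_mean = q_ensemble.mean(dim=0)
    uncertainty = q_ensemble.std(dim=0)
    advantage = (q_mean - lam * uncertainty).detach()
    return -(log_prob * advantage).mean()      # minimizing this maximizes the regularized value
```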
1 code implementation • Findings (EMNLP) 2021 • Yunqiu Xu, Meng Fang, Ling Chen, Yali Du, Chengqi Zhang
Deep reinforcement learning provides a promising approach for text-based games in studying natural language communication between humans and artificial agents.
Deep Reinforcement Learning • Hierarchical Reinforcement Learning • +3
no code implementations • 24 Aug 2021 • Guodong Long, Yue Tan, Jing Jiang, Chengqi Zhang
In the near future, it is foreseeable that the finance sector will have decentralized data ownership enabled by federated learning.
1 code implementation • 19 Aug 2021 • Guodong Long, Ming Xie, Tao Shen, Tianyi Zhou, Xianzhi Wang, Jing Jiang, Chengqi Zhang
By comparison, a mixture of multiple global models could capture the heterogeneity across various clients if clients are assigned to different global models (i.e., centers) in FL.
1 code implementation • 20 Jul 2021 • Xueping Peng, Guodong Long, Sen Wang, Jing Jiang, Allison Clarke, Clement Schlegel, Chengqi Zhang
Hence, some recent works train healthcare representations by incorporating medical ontologies through self-supervised tasks such as diagnosis prediction, but (1) the small-scale, monotonous ontology is insufficient for robust learning, and (2) critical contexts or dependencies underlying patient journeys are barely exploited to enhance ontology learning.
1 code implementation • 10 Jul 2021 • Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Chengqi Zhang
Second, the bandwidth of existing graph convolutional filters is fixed.
4 code implementations • 1 May 2021 • Yue Tan, Guodong Long, Lu Liu, Tianyi Zhou, Qinghua Lu, Jing Jiang, Chengqi Zhang
Heterogeneity across clients in federated learning (FL) usually hinders the optimization convergence and generalization performance when the aggregation of clients' knowledge occurs in the gradient space.
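A minimal sketch of sharing knowledge in a prototype space rather than the gradient space, assuming each client reports per-class mean embeddings (illustrative, not the released implementation):

```python
import numpy as np

def aggregate_prototypes(client_protos):
    """Aggregate per-class prototypes across clients by averaging: knowledge
    is shared in a low-dimensional embedding space instead of the gradient
    space. client_protos: list of {class_id: prototype vector} dicts."""
    global_protos = {}
    for protos in client_protos:
        for cls, p in protos.items():
            global_protos.setdefault(cls, []).append(p)
    return {cls: np.mean(ps, axis=0) for cls, ps in global_protos.items()}

# toy usage: two clients with partially overlapping class sets
c1 = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
c2 = {1: np.array([0.2, 0.8])}
print(aggregate_prototypes([c1, c2]))
```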
no code implementations • ICLR 2021 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang
To resolve this problem, we propose Isometric Propagation Network (IPN), which learns to strengthen the relation between classes within each space and align the class dependency in the two spaces.
no code implementations • 1 Jan 2021 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang
Few-shot learning aims to train a classifier given only a few samples per class that are highly insufficient to describe the whole data distribution.
no code implementations • 1 Jan 2021 • Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
In this paper, we introduce an efficient method to extract the local inference chains by optimizing a differentiable sparse scoring for the filters and layers to preserve the outputs on given data from a local region.
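A hedged sketch of a differentiable per-filter score with an L1-style sparsity penalty, as one way to realize the scoring described above (the gating form and names are assumptions):

```python
import torch
import torch.nn as nn

class FilterGate(nn.Module):
    """Differentiable per-filter score: a sigmoid gate scales each channel of a
    feature map; a penalty on the gates encourages a sparse inference chain."""
    def __init__(self, num_filters):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_filters))

    def forward(self, feature_map):                       # (N, C, H, W)
        gate = torch.sigmoid(self.logits).view(1, -1, 1, 1)
        return feature_map * gate

    def sparsity_penalty(self):
        return torch.sigmoid(self.logits).sum()           # gates lie in (0, 1)
```

Under this sketch, the full objective would combine an output-preservation loss on the local-region data with the sparsity penalties of all layers, and filters whose gates collapse toward zero would be pruned.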
no code implementations • 19 Dec 2020 • Yejiang Wang, Yuhai Zhao, Zhengkui Wang, Chengqi Zhang
Multi-graph multi-label learning (MGML) is a supervised learning framework, which aims to learn a multi-label classifier from a set of labeled bags, each containing a number of graphs.
no code implementations • NeurIPS 2020 • Han Zheng, Pengfei Wei, Jing Jiang, Guodong Long, Qinghua Lu, Chengqi Zhang
Numerous deep reinforcement learning agents have been proposed, and each of them has its strengths and flaws.
1 code implementation • NeurIPS 2020 • Yunqiu Xu, Meng Fang, Ling Chen, Yali Du, Joey Tianyi Zhou, Chengqi Zhang
We study reinforcement learning (RL) for text-based games, which are interactive simulations in the context of natural language.
no code implementations • COLING 2020 • Hao Huang, Guodong Long, Tao Shen, Jing Jiang, Chengqi Zhang
Many graph embedding approaches have been proposed for knowledge graph completion via link prediction.
2 code implementations • COLING 2020 • Yang Li, Tao Shen, Guodong Long, Jing Jiang, Tianyi Zhou, Chengqi Zhang
Then, facilitated by the proposed base model, we introduce collaborating relation features shared among relations in the hierarchies to promote the relation-augmenting process and balance the training data for long-tail relations.
1 code implementation • 24 Sep 2020 • Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang, Chengqi Zhang
Electronic health records (EHRs) are longitudinal records of a patient's interactions with healthcare systems.
no code implementations • 24 Sep 2020 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
To address this challenging task, most ZSL methods relate unseen test classes to seen (training) classes via a pre-defined set of attributes that can describe all classes in the same semantic space, so the knowledge learned on the training classes can be adapted to unseen classes.
1 code implementation • 28 Jun 2020 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
We study the many-class few-shot (MCFS) problem in both supervised learning and meta-learning settings.
3 code implementations • 24 May 2020 • Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Xiaojun Chang, Chengqi Zhang
Modeling multivariate time series has long been a subject that has attracted researchers from a diverse range of fields including economics, finance, and traffic.
Ranked #1 on Univariate Time Series Forecasting on Electricity
3 code implementations • 3 May 2020 • Guodong Long, Ming Xie, Tao Shen, Tianyi Zhou, Xianzhi Wang, Jing Jiang, Chengqi Zhang
However, due to the diverse nature of user behaviors, assigning users' gradients to different global models (i.e., centers) can better capture the heterogeneity of data distributions across users.
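A minimal sketch of one assignment/update round in a multi-center setting, using nearest-center matching on flattened client updates (a k-means-style simplification, not the paper's exact procedure):

```python
import numpy as np

def multicenter_round(client_updates, centers):
    """One assignment/update round: match each client's flattened update to its
    nearest center, then recompute each center as the mean of its group."""
    updates = np.asarray(client_updates)
    assign = np.array([np.argmin([np.linalg.norm(u - c) for c in centers])
                       for u in updates])
    new_centers = [updates[assign == k].mean(axis=0) if (assign == k).any() else centers[k]
                   for k in range(len(centers))]
    return assign, new_centers

# toy usage: four client updates, two centers
assign, centers = multicenter_round(
    client_updates=[[0.1, 0.0], [0.2, 0.1], [1.0, 1.1], [0.9, 1.0]],
    centers=[np.zeros(2), np.ones(2)])
print(assign, centers)
```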
1 code implementation • NeurIPS 2019 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
It can significantly improve tasks that suffer from insufficient training data, e.g., few-shot learning.
3 code implementations • 15 Jun 2019 • Chun Wang, Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Chengqi Zhang
Graph clustering is a fundamental task which discovers communities or groups in networks.
Ranked #8 on Node Clustering on Cora
9 code implementations • 31 May 2019 • Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Chengqi Zhang
Spatial-temporal graph modeling is an important task to analyze the spatial relations and temporal trends of components in a system.
Ranked #2 on Traffic Prediction on LargeST
2 code implementations • 10 May 2019 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang
The resulting graph of prototypes can be continually re-used and updated for new tasks and classes.
no code implementations • ICLR 2019 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
It addresses the "many-class" problem by exploring the class hierarchy, e.g., the coarse-class label that covers a subset of fine classes, which helps to narrow down the candidates for the fine class and is cheaper to obtain.
1 code implementation • 14 Jan 2019 • Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang
In this paper, we propose a unified framework for attributed network embedding, attri2vec, that learns node embeddings by discovering a latent node attribute subspace via a network structure guided transformation performed on the original attribute space.
Ranked #1 on Node Clustering on Facebook
1 code implementation • 14 Jan 2019 • Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang
In this paper, we propose a search efficient binary network embedding algorithm called BinaryNE to learn a binary code for each node, by simultaneously modeling node context relations and node attribute relations through a three-layer neural network.
no code implementations • 4 Jan 2019 • Shirui Pan, Ruiqi Hu, Sai-fu Fung, Guodong Long, Jing Jiang, Chengqi Zhang
Based on this framework, we derive two variants of adversarial models, the adversarially regularized graph autoencoder (ARGA) and its variational version, adversarially regularized variational graph autoencoder (ARVGA), to learn the graph embedding effectively.
Ranked #7 on Node Clustering on Cora
5 code implementations • 3 Jan 2019 • Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, Philip S. Yu
In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields.
1 code implementation • 12 Nov 2018 • Wei Wu, Bin Li, Ling Chen, Junbin Gao, Chengqi Zhang
In this review, we mainly categorize the Weighted MinHash algorithms into quantization-based approaches, "active index"-based ones and others, and show the evolution and inherent connection of the weighted MinHash algorithms, from the integer weighted MinHash algorithms to real-valued weighted MinHash ones (particularly the Consistent Weighted Sampling scheme).
Data Structures and Algorithms
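For illustration, the simplest integer weighted MinHash the review describes expands each element into weight-many sub-elements and applies ordinary MinHash to the expanded set (a sketch; the per-function hashes are simulated with salted tuples):

```python
import random

def integer_weighted_minhash(weights, num_hashes=64, seed=0):
    """Expand each element into `weight` sub-elements and take an ordinary
    MinHash over the expanded set; the match rate between two signatures then
    estimates the generalized Jaccard similarity of the weighted sets.
    weights: dict mapping element -> non-negative integer weight."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_hashes)]
    expanded = [(e, k) for e, w in weights.items() for k in range(w)]
    return [min(hash((salt, sub)) for sub in expanded) for salt in salts]
```

Real-valued schemes such as Consistent Weighted Sampling avoid this explicit expansion, which is what makes them practical for non-integer weights.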
2 code implementations • ICDM 2018 • Hong Yang, Shirui Pan, Peng Zhang, Ling Chen, Defu Lian, Chengqi Zhang
To this end, we present a Binarized Attributed Network Embedding model (BANE for short) to learn binary node representation.
Ranked #1 on Link Prediction on Wiki
2 code implementations • 16 Oct 2018 • Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang
In this paper, we propose a Scalable Incomplete Network Embedding (SINE) algorithm for learning node representations from incomplete graphs.
Social and Information Networks
2 code implementations • NAACL 2019 • Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
Neural networks equipped with self-attention have parallelizable computation, light-weight structure, and the ability to capture both long-range and local dependencies.
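A generic scaled dot-product self-attention block, shown only to illustrate the parallelizable, long-range-dependency-capturing computation this entry refers to (not the paper's masked variant):

```python
import numpy as np

def scaled_dot_product_self_attention(X):
    """Every token attends to every other token in one matrix product, so the
    computation is fully parallelizable across positions. X: (seq_len, d_model)."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                        # token-to-token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ X                                    # context-fused representations

# toy usage: a sequence of 4 tokens with 8-dimensional features
print(scaled_dot_product_self_attention(np.random.rand(4, 8)).shape)  # (4, 8)
```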
1 code implementation • ICLR 2018 • Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
In this paper, we propose a model, called "bi-directional block self-attention network (Bi-BloSAN)", for RNN/CNN-free sequence encoding.
no code implementations • 7 Mar 2018 • Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang
Network embedding in heterogeneous information networks (HINs) is a challenging task, due to complications of different node types and rich relationships between nodes.
Social and Information Networks
4 code implementations • 13 Feb 2018 • Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang
Graph embedding is an effective method to represent graph data in a low dimensional space for graph analytics.
Ranked #5 on Link Prediction on Pubmed
1 code implementation • 31 Jan 2018 • Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, Chengqi Zhang
In this paper, we integrate both soft and hard attention into one context fusion model, "reinforced self-attention (ReSA)", for their mutual benefit.
Ranked #56 on Natural Language Inference on SNLI
no code implementations • 4 Dec 2017 • Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang
Network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information.
no code implementations • 2 Nov 2017 • Jiangchao Yao, Jiajie Wang, Ivor Tsang, Ya Zhang, Jun Sun, Chengqi Zhang, Rui Zhang
However, the label noise among the datasets severely degrades the performance of deep learning approaches.
3 code implementations • 14 Sep 2017 • Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, Chengqi Zhang
Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely used on NLP tasks to capture the long-term and local dependencies, respectively.
Ranked #69 on Natural Language Inference on SNLI
1 code implementation • 5 Jun 2017 • Wei Wu, Bin Li, Ling Chen, Chengqi Zhang, Philip S. Yu
Min-Hash is a popular technique for efficiently estimating the Jaccard similarity of binary sets.
Data Structures and Algorithms
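A minimal MinHash sketch for the binary-set case: the probability that two sets agree on a minimum hash equals their Jaccard similarity, so the match rate over many hash functions is an unbiased estimator (illustrative code, not the paper's method):

```python
import random

def minhash_signature(item_set, num_hashes=128, seed=0):
    """For each of `num_hashes` salted hash functions, keep the minimum hash
    value over the set's elements."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_hashes)]
    return [min(hash((salt, x)) for x in item_set) for salt in salts]

def jaccard_estimate(sig_a, sig_b):
    """Fraction of positions where the two signatures agree."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a, b = {"x", "y", "z"}, {"y", "z", "w"}
print(jaccard_estimate(minhash_signature(a), minhash_signature(b)))  # ~0.5
```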
no code implementations • 20 Dec 2016 • Haishuai Wang, Jia Wu, Peng Zhang, Chengqi Zhang
For example, social network users are considered to be social sensors that continuously generate social signals (tweets) represented as a time series.
no code implementations • 14 Jan 2016 • Xiaojun Chang, Yi Yang, Guodong Long, Chengqi Zhang, Alexander G. Hauptmann
In this paper, we focus on automatically detecting events in unconstrained videos without the use of any visual training exemplars.
no code implementations • 23 Nov 2014 • Xiaojun Chang, Feiping Nie, Sen Wang, Yi Yang, Xiaofang Zhou, Chengqi Zhang
In many real-world applications, data are represented by matrices or high-order tensors.