no code implementations • 26 Dec 2023 • Zi-Feng Mai, Chang-Dong Wang, Zhongjie Zeng, Ya Li, Jiaquan Chen, Philip S. Yu
To address the above challenges, we propose a novel method, HEKP4NBR, which transforms the knowledge graph (KG) into prompts, namely the Knowledge Tree Prompt (KTP), to help the PLM encode the OOV item IDs in the user's basket sequence.
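A minimal sketch of the general "knowledge graph as prompt" idea: serialize KG triples about an out-of-vocabulary item into text so a PLM can encode the item through its neighborhood. The triple format and template below are hypothetical illustrations, not the paper's actual Knowledge Tree Prompt construction.

```python
# Illustrative only: turn KG triples about an OOV item into a text prompt so a
# pre-trained language model can encode the item via its KG neighborhood.
# The triple format and template are hypothetical, not the paper's KTP design.
def kg_to_prompt(item_id, triples):
    """triples: list of (head, relation, tail) strings involving item_id."""
    facts = [f"{h} {r.replace('_', ' ')} {t}" for h, r, t in triples]
    return f"Item {item_id}: " + "; ".join(facts) + "."

print(kg_to_prompt(
    "B00123",
    [("B00123", "belongs_to", "organic coffee"),
     ("B00123", "bought_with", "coffee filters")],
))
# Item B00123: B00123 belongs to organic coffee; B00123 bought with coffee filters.
```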
1 code implementation • 14 Aug 2023 • Jianyang Zhai, Xiawu Zheng, Chang-Dong Wang, Hui Li, Yonghong Tian
Pre-trained language models (PLMs), which are utilized to extract general knowledge, have demonstrated strong performance in sequential recommendation (SR).
1 code implementation • 16 Jun 2023 • Wen-Zhi Li, Chang-Dong Wang, Hui Xiong, Jian-Huang Lai
Class imbalance, the phenomenon in which some classes have far fewer instances than others, is ubiquitous in real-world graph-structured scenarios.
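As a quick, generic illustration of class imbalance (and the common inverse-frequency reweighting used to counter it), independent of the paper's graph-specific method:

```python
import numpy as np

# Illustration of class imbalance and a common mitigation (inverse-frequency
# class weights); this is generic practice, not the paper's specific method.
labels = np.array([0] * 900 + [1] * 80 + [2] * 20)   # a long-tailed label set
classes, counts = np.unique(labels, return_counts=True)

imbalance_ratio = counts.max() / counts.min()         # 900 / 20 = 45.0
weights = counts.sum() / (len(classes) * counts)      # inverse-frequency weights

print(dict(zip(classes.tolist(), counts.tolist())))   # {0: 900, 1: 80, 2: 20}
print(imbalance_ratio, np.round(weights, 2))          # 45.0 [ 0.37  4.17 16.67]
```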
1 code implementation • 16 Jun 2023 • Wen-Zhi Li, Chang-Dong Wang, Hui Xiong, Jian-Huang Lai
Contrastive learning (CL) has become the de facto learning paradigm in self-supervised learning on graphs, generally following the "augmenting-contrasting" learning scheme.
no code implementations • 12 May 2023 • Si-Guo Fang, Dong Huang, Chang-Dong Wang, Jian-Huang Lai
The bipartite graph structure has shown promising ability to facilitate subspace clustering and spectral clustering algorithms on large-scale datasets.
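The anchor/bipartite-graph trick that makes spectral clustering scale can be sketched as follows; the anchor selection, Gaussian bandwidth, and normalization below are generic choices, not the formulation of this paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

# Generic anchor-based bipartite-graph sketch for large-scale spectral
# clustering: connect n samples to m << n anchors, then cluster the left
# singular vectors of the normalized bipartite affinity matrix.
def bipartite_spectral_clustering(X, n_clusters, n_anchors=100, seed=0):
    anchors = KMeans(n_anchors, n_init=10, random_state=seed).fit(X).cluster_centers_
    D = pairwise_distances(X, anchors)                  # (n, m) sample-anchor distances
    B = np.exp(-D ** 2 / (2 * np.median(D) ** 2))       # Gaussian affinities
    B /= B.sum(axis=1, keepdims=True)                   # row-normalize
    B_hat = B / np.sqrt(B.sum(axis=0))                  # column degree scaling
    U, _, _ = np.linalg.svd(B_hat, full_matrices=False)
    emb = U[:, :n_clusters]                             # spectral embedding of samples
    return KMeans(n_clusters, n_init=10, random_state=seed).fit_predict(emb)
```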
1 code implementation • IEEE Transactions on Knowledge and Data Engineering 2023 • Man-Sheng Chen, Chang-Dong Wang, and Jian-Huang Lai
To deal with these problems, we propose a novel Low-rank Tensor Based Proximity Learning (LTBPL) approach for multi-view clustering, where multiple low-rank probability affinity matrices and a consensus indicator graph reflecting the final clustering performance are jointly studied in a unified framework.
1 code implementation • 11 Jan 2023 • Xiaozhi Deng, Dong Huang, Chang-Dong Wang
Contrastive deep clustering has recently gained significant attention with its ability to perform joint contrastive learning and clustering via deep neural networks.
no code implementations • 29 Dec 2022 • Ying Zhong, Dong Huang, Chang-Dong Wang
Recently, deep learning has shown its advantage in representation learning and clustering for time series data.
no code implementations • 28 Nov 2022 • Jia-Qi Lin, Man-Sheng Chen, Xi-Ran Zhu, Chang-Dong Wang, Haizhang Zhang
Specifically, the proposed method introduces the Specific Information Reconstruction (SIR) module to disentangle the explorations of the consensus and specific information from multiple views, which enables the GCN to capture more essential low-level representations.
1 code implementation • 9 Sep 2022 • Si-Guo Fang, Dong Huang, Xiao-Sha Cai, Chang-Dong Wang, Chaobo He, Yong Tang
By simultaneously formulating the view-specific bipartite graph learning, the view-consensus bipartite graph learning, and the discrete cluster structure learning into a unified objective function, an efficient minimization algorithm is then designed to tackle this optimization problem and directly achieve a discrete clustering solution without requiring additional partitioning; notably, the algorithm has linear time complexity in the data size.
no code implementations • 25 Aug 2022 • Man-Sheng Chen, Tuo Liu, Chang-Dong Wang, Dong Huang, Jian-Huang Lai
In view of this, we propose an Adaptively-weighted Integral Space for Fast Multiview Clustering (AIMC) with nearly linear complexity.
1 code implementation • 14 Jul 2022 • Yuankun Xu, Dong Huang, Chang-Dong Wang, Jian-Huang Lai
Deep clustering has shown its promising capability in joint representation learning and clustering via deep neural networks.
1 code implementation • 26 Jun 2022 • Hua-Bao Ling, Bowen Zhu, Dong Huang, Ding-Hua Chen, Chang-Dong Wang, Jian-Huang Lai
The Vision Transformer (ViT) has shown its advantages over the convolutional neural network (CNN) thanks to its ability to capture global long-range dependencies for visual representation learning.
1 code implementation • 19 Jun 2022 • Zhilin Zhao, Longbing Cao, Chang-Dong Wang
We observe that both in- and out-of-distribution samples can almost invariably be ruled out from belonging to certain classes, aside from those corresponding to unreliable ground-truth labels.
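A toy illustration of the class-exclusion intuition, using nothing more than softmax probabilities and a hypothetical threshold; it is not the authors' scoring rule.

```python
import numpy as np

# Toy illustration: classes receiving negligible predicted probability can be
# "ruled out" for a sample, whether it is in- or out-of-distribution.
# The threshold is a hypothetical choice, not the authors' method.
def excluded_classes(logits, threshold=1e-3):
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)              # softmax
    return probs < threshold                               # (n, C) boolean mask

logits = np.array([[8.0, 0.5, -3.0, -4.0],     # confident sample
                   [1.2, 1.1, -6.0, -7.0]])    # ambiguous sample
print(excluded_classes(logits).sum(axis=1))    # [3 2] classes ruled out per sample
```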
1 code implementation • 1 Jun 2022 • Xiaozhi Deng, Dong Huang, Ding-Hua Chen, Chang-Dong Wang, Jian-Huang Lai
In this paper, we present an end-to-end deep clustering approach termed Strongly Augmented Contrastive Clustering (SACC), which extends the conventional two-augmentation-view paradigm to multiple views and jointly leverages strong and weak augmentations for strengthened deep clustering.
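The weak-versus-strong pairing can be illustrated with a standard NT-Xent (InfoNCE) contrast between the two views' embeddings; SACC's full objective couples further components, so this is only a generic sketch.

```python
import torch
import torch.nn.functional as F

# Generic sketch: NT-Xent contrast between embeddings of a weakly and a strongly
# augmented view of the same batch (not SACC's complete loss).
def nt_xent(z_weak, z_strong, temperature=0.5):
    z = F.normalize(torch.cat([z_weak, z_strong], dim=0), dim=1)   # (2N, d)
    sim = z @ z.t() / temperature                                  # pairwise similarities
    n = z_weak.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                          # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                           # positives are the paired views

loss = nt_xent(torch.randn(32, 128), torch.randn(32, 128))
```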
no code implementations • 1 Jun 2022 • Dong Huang, Ding-Hua Chen, Xiangji Chen, Chang-Dong Wang, Jian-Huang Lai
In view of this, this paper presents a Deep Clustering via Ensembles (DeepCluE) approach, which bridges the gap between deep clustering and ensemble clustering by harnessing the power of multiple layers in deep neural networks.
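The layer-ensemble idea can be sketched by clustering the features from several layers, fusing the results into a co-association matrix, and extracting a consensus partition; the fusion rule below is a generic ensemble-clustering recipe, not DeepCluE's specific consensus function.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

# Sketch: base clusterings from multiple layers' features are fused via a
# co-association matrix, then a consensus partition is extracted.
def consensus_from_layers(layer_features, n_clusters, seed=0):
    n = layer_features[0].shape[0]
    coassoc = np.zeros((n, n))
    for F_l in layer_features:                       # one feature matrix per layer
        labels = KMeans(n_clusters, n_init=10, random_state=seed).fit_predict(F_l)
        coassoc += (labels[:, None] == labels[None, :]).astype(float)
    coassoc /= len(layer_features)
    return AgglomerativeClustering(
        n_clusters=n_clusters, metric="precomputed", linkage="average"
    ).fit_predict(1.0 - coassoc)                     # distance = 1 - co-association
```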
1 code implementation • 20 Apr 2022 • Ling Huang, Can-Rong Guan, Zhen-Wei Huang, Yuefang Gao, Yingjie Kuang, Chang-Dong Wang, C. L. Philip Chen
Recently, Deep Neural Networks (DNNs) have been widely introduced into Collaborative Filtering (CF) to produce more accurate recommendation results, owing to their capability of capturing the complex nonlinear relationships between items and users. However, DNN-based models usually suffer from high computational complexity, i.e., very long training times and a huge number of trainable parameters.
1 code implementation • 18 Apr 2022 • Si-Guo Fang, Dong Huang, Chang-Dong Wang, Yong Tang
Second, they often learn the similarity structure via either global structure learning or local structure learning alone, lacking the capability of graph learning with both global and local structural awareness.
1 code implementation • 22 Mar 2022 • Dong Huang, Chang-Dong Wang, Jian-Huang Lai
Then, a set of diversified base clusterings for different view groups is obtained via fast graph partitioning; these base clusterings are further formulated into a unified bipartite graph for final clustering in the late-stage fusion.
1 code implementation • 15 Mar 2022 • Xiaosha Cai, Dong Huang, Guang-Yu Zhang, Chang-Dong Wang
Second, many of them overlook the local structures of multiple views and cannot jointly leverage multiple local structures to enhance the subspace representation learning.
no code implementations • 12 Mar 2022 • Zhi-Hong Deng, Chang-Dong Wang, Ling Huang, Jian-Huang Lai, Philip S. Yu
G$^3$SR decomposes the session-based recommendation workflow into two steps.
1 code implementation • 2021 IEEE International Conference on Data Mining (ICDM) 2021 • Jing Wen, Bi-Yi Chen, Chang-Dong Wang, Zhihong Tian
However, in practice, recommender systems suffer from interaction data sparsity and noise.
1 code implementation • 10 Mar 2021 • Zi-Yuan Hu, Jin Huang, Zhi-Hong Deng, Chang-Dong Wang, Ling Huang, Jian-Huang Lai, Philip S. Yu
Representation learning tries to learn a common low-dimensional space for the representations of users and items.
2 code implementations • 24 Aug 2020 • Youwei Liang, Dong Huang, Chang-Dong Wang, Philip S. Yu
To overcome this limitation, we propose a new multi-view graph learning framework, which for the first time simultaneously and explicitly models multi-view consistency and multi-view inconsistency in a unified objective function, through which the consistent and inconsistent parts of each single-view graph as well as the unified graph that fuses the consistent parts can be iteratively learned.
2 code implementations • 30 May 2019 • Pei-Zhen Li, Ling Huang, Chang-Dong Wang, Jian-Huang Lai
Based on the new edge set, the original connectivity structure of the input network is enhanced to generate a rewired network, whereby the motif-based higher-order structure is leveraged and the hypergraph fragmentation issue is well addressed.
Ranked #1 on Community Detection on Cora
Social and Information Networks • Physics and Society • 97R40
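The rewiring step described in this entry can be illustrated with a simple common-neighbor heuristic that adds edges between node pairs sharing many neighbors; the motif choice and threshold are stand-ins, not the paper's exact edge-set construction or fragmentation handling.

```python
import itertools
import networkx as nx

# Sketch of motif-aware rewiring: connect node pairs that share many neighbors
# (a stand-in for richer motif counts), producing an enhanced, rewired network.
def rewire_by_common_neighbors(G: nx.Graph, min_common: int = 2) -> nx.Graph:
    H = G.copy()
    for u, v in itertools.combinations(G.nodes, 2):
        if not G.has_edge(u, v):
            shared = len(list(nx.common_neighbors(G, u, v)))
            if shared >= min_common:
                H.add_edge(u, v, weight=shared)       # new motif-induced edge
    return H

G = nx.karate_club_graph()
H = rewire_by_common_neighbors(G)
print(G.number_of_edges(), "->", H.number_of_edges())
```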
no code implementations • 4 Mar 2019 • Dong Huang, Chang-Dong Wang, Jian-Sheng Wu, Jian-Huang Lai, Chee-Keong Kwoh
Experiments on various large-scale datasets have demonstrated the scalability and robustness of our algorithms.
Ranked #3 on Image/Document Clustering on pendigits
2 code implementations • 15 Jan 2019 • Zhi-Hong Deng, Ling Huang, Chang-Dong Wang, Jian-Huang Lai, Philip S. Yu
To solve this problem, many methods have been studied, which can be generally categorized into two types, i.e., representation learning-based CF methods and matching function learning-based CF methods.
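The two CF families can be sketched as two tiny models: an inner product over learned embeddings versus an MLP that learns the matching function. Both are generic baselines, not the paper's proposed architecture.

```python
import torch
import torch.nn as nn

# (1) Representation learning-based CF: score = inner product of embeddings.
class DotProductCF(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)

    def forward(self, u, i):
        return (self.user(u) * self.item(i)).sum(dim=-1)

# (2) Matching function learning-based CF: an MLP learns the matching score.
class MLPMatchingCF(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, u, i):
        x = torch.cat([self.user(u), self.item(i)], dim=-1)
        return self.mlp(x).squeeze(-1)

u, i = torch.tensor([0, 1]), torch.tensor([5, 7])
print(DotProductCF(10, 20)(u, i), MLPMatchingCF(10, 20)(u, i))
```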
1 code implementation • CVPR 2019 • He Huang, Changhu Wang, Philip S. Yu, Chang-Dong Wang
Most previous models try to learn a fixed one-directional mapping between visual and semantic space, while some recently proposed generative methods try to generate image features for unseen classes so that the zero-shot learning problem becomes a traditional fully-supervised classification problem.
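The fixed one-directional mapping baseline mentioned here can be sketched with a ridge-regression map from visual features to the semantic (attribute) space, followed by nearest-prototype classification over unseen-class attributes; the paper's generative approach works differently.

```python
import numpy as np

# Baseline sketch: learn a linear visual-to-semantic map on seen classes, then
# label unseen-class samples by the nearest class-attribute prototype.
def fit_visual_to_semantic(X_seen, S_seen, lam=1.0):
    d = X_seen.shape[1]
    return np.linalg.solve(X_seen.T @ X_seen + lam * np.eye(d), X_seen.T @ S_seen)

def predict_unseen(W, X_test, class_attributes):
    S_pred = X_test @ W                                          # map into attribute space
    S_pred /= np.linalg.norm(S_pred, axis=1, keepdims=True) + 1e-12
    A = class_attributes / (np.linalg.norm(class_attributes, axis=1, keepdims=True) + 1e-12)
    return (S_pred @ A.T).argmax(axis=1)                         # nearest prototype (cosine)
```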
no code implementations • 30 Oct 2018 • Dong Huang, Chang-Dong Wang, Hongxing Peng, Jian-Huang Lai, Chee-Keong Kwoh
Upon the constructed graph, a transition probability matrix is defined, based on which the random walk process is conducted to propagate the graph structural information.
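A minimal sketch of the propagation step: row-normalize the affinity matrix into a transition matrix and iterate a random walk with restart. The restart probability and stopping rule are generic choices, not the paper's exact scheme.

```python
import numpy as np

# Sketch: propagate graph-structural information by iterating a random walk
# with restart over the row-stochastic transition matrix P.
def random_walk_propagation(A, F0, alpha=0.85, n_steps=50):
    P = A / A.sum(axis=1, keepdims=True)        # transition probability matrix
    F = F0.copy()
    for _ in range(n_steps):
        F = alpha * (P @ F) + (1 - alpha) * F0  # walk step + restart to the source signal
    return F

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
F0 = np.eye(4)                                   # propagate one-hot node signals
print(np.round(random_walk_propagation(A, F0), 3))
```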
1 code implementation • 29 Aug 2018 • He Huang, Bokai Cao, Philip S. Yu, Chang-Dong Wang, Alex D. Leow
Mood disorders are common and associated with significant morbidity and mortality.
Human-Computer Interaction • Computers and Society
1 code implementation • 9 Oct 2017 • Dong Huang, Chang-Dong Wang, Jian-Huang Lai, Chee-Keong Kwoh
The rapid emergence of high-dimensional data in various areas has brought new challenges to current ensemble clustering research.
no code implementations • 3 Aug 2016 • Dong Huang, Chang-Dong Wang, Jian-Huang Lai, Yun Liang, Shan Bian, Yu Chen
Support vector clustering (SVC) is a versatile clustering technique that is able to identify clusters of arbitrary shapes by exploiting the kernel trick.
no code implementations • 3 Jun 2016 • Dong Huang, Jian-Huang Lai, Chang-Dong Wang
To address these two limitations, in this paper, we propose a novel ensemble clustering approach based on sparse graph representation and probability trajectory analysis.
no code implementations • 17 May 2016 • Dong Huang, Chang-Dong Wang, Jian-Huang Lai
Although some efforts have been made to (globally) evaluate and weight the base clusterings, these methods tend to view each base clustering as an individual and neglect the local diversity of clusters inside the same base clustering.
no code implementations • 6 May 2014 • Dong Huang, Jian-Huang Lai, Chang-Dong Wang
We present the normalized crowd agreement index (NCAI) to evaluate the quality of base clusterings in an unsupervised manner and thus weight the base clusterings in accordance with their clustering validity.
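The crowd-agreement weighting can be approximated by scoring each base clustering with its average NMI against the others; this mirrors the spirit of NCAI but is not its exact definition.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score as nmi

# Proxy for crowd-agreement weighting: a base clustering that agrees more with
# its peers (higher average NMI) receives a larger weight.
def crowd_agreement_weights(base_labels):
    m = len(base_labels)
    scores = np.array([
        np.mean([nmi(base_labels[i], base_labels[j]) for j in range(m) if j != i])
        for i in range(m)
    ])
    return scores / scores.sum()

base_labels = [np.array([0, 0, 1, 1, 2, 2]),
               np.array([0, 0, 1, 1, 1, 2]),
               np.array([2, 1, 0, 2, 1, 0])]     # a noisy base clustering
print(np.round(crowd_agreement_weights(base_labels), 3))
```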