Search Results for author: Liangjian Wen

Found 15 papers, 6 papers with code

MVEB: Self-Supervised Learning with Multi-View Entropy Bottleneck

no code implementations • 28 Mar 2024 • Liangjian Wen, Xiasi Wang, Jianzhuang Liu, Zenglin Xu

One can learn this representation by maximizing the mutual information between the representation and the supervised view while eliminating superfluous information.
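No code is listed for this paper; as a generic illustration of the MI-maximization term only (not the authors' MVEB objective, which additionally eliminates superfluous information via an entropy bottleneck), a minimal InfoNCE-style lower bound between two views' representations could look like this:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE loss: minimizing it maximizes a lower bound on I(z1; z2).

    z1, z2: (batch, dim) representations of two views; row i of z1 and
    row i of z2 come from the same underlying sample (a positive pair).
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))    # cross-entropy on positives
```

Perfectly aligned views give a near-zero loss, while mismatched views score close to log(batch size).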

Self-Supervised Learning

Enhancing Multivariate Time Series Forecasting with Mutual Information-driven Cross-Variable and Temporal Modeling

no code implementations • 1 Mar 2024 • Shiyi Qi, Liangjian Wen, Yiduo Li, Yuanhang Yang, Zhe Li, Zhongwen Rao, Lujia Pan, Zenglin Xu

To substantiate this claim, we introduce Cross-variable Decorrelation Aware feature Modeling (CDAM) for channel-mixing approaches, which refines channel mixing by minimizing redundant information between channels while enhancing relevant mutual information.
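No code accompanies the paper; as an illustrative stand-in (not the authors' CDAM), one crude proxy for "minimizing redundant information between channels" is a penalty on the sum of squared off-diagonal cross-channel correlations:

```python
import numpy as np

def cross_channel_redundancy(h):
    """Sum of squared off-diagonal correlations of h: (time, channels).

    Driving this toward zero decorrelates the channels, a simple proxy
    for reducing cross-channel redundancy.
    """
    h = h - h.mean(axis=0)
    h = h / (h.std(axis=0) + 1e-8)
    corr = (h.T @ h) / h.shape[0]
    off = corr - np.diag(np.diag(corr))
    return float(np.sum(off ** 2))
```

Duplicated channels incur a large penalty, independent channels a near-zero one.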

Multivariate Time Series Forecasting • Time Series

PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the Perspective of Partial Differential Equations

no code implementations • 25 Feb 2024 • Shiyi Qi, Zenglin Xu, Yiduo Li, Liangjian Wen, Qingsong Wen, Qifan Wang, Yuan Qi

Recent advancements in deep learning have led to the development of various models for long-term multivariate time-series forecasting (LMTF), many of which have shown promising results.

Multivariate Time Series Forecasting • Time Series

Structure-Preserving Graph Representation Learning

1 code implementation • 2 Sep 2022 • Ruiyi Fang, Liangjian Wen, Zhao Kang, Jianzhuang Liu

To this end, we propose a novel Structure-Preserving Graph Representation Learning (SPGRL) method to fully capture the structural information of graphs.

Graph Representation Learning • Node Classification

Self-Supervision Can Be a Good Few-Shot Learner

3 code implementations • 19 Jul 2022 • Yuning Lu, Liangjian Wen, Jianzhuang Liu, Yajing Liu, Xinmei Tian

Specifically, we maximize the mutual information (MI) of instances and their representations with a low-bias MI estimator to perform self-supervised pre-training.
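The key ingredient here is a *low-bias* MI estimator. For contrast, the standard binned plug-in estimator below (my illustration, not the paper's estimator) is simple but upward-biased on finite samples, which is exactly the failure mode low-bias estimators are designed to avoid:

```python
import numpy as np

def binned_mi(x, y, bins=8):
    """Plug-in mutual information estimate (in nats) from a 2-D histogram.

    Simple but upward-biased on small samples -- the motivation for the
    low-bias estimators used in work like this paper.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))
```

On independent variables the estimate should be near zero with many samples, yet it inflates noticeably when the sample is small.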

cross-domain few-shot learning • Unsupervised Few-Shot Image Classification • +1

Self-supervised Consensus Representation Learning for Attributed Graph

1 code implementation • 10 Aug 2021 • Changshu Liu, Liangjian Wen, Zhao Kang, Guangchun Luo, Ling Tian

Self-supervised loss is designed to maximize the agreement of the embeddings of the same node in the topology graph and the feature graph.
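The authors' implementation is linked above; as a minimal sketch of the agreement idea only (function and argument names are illustrative, not taken from the paper's code), a cosine-agreement loss over the same node's embeddings in the two graph views might be:

```python
import numpy as np

def agreement_loss(z_topology, z_feature):
    """Negative mean cosine similarity between each node's embedding in
    the topology graph and in the feature graph; minimizing this
    maximizes cross-view agreement.
    """
    z1 = z_topology / np.linalg.norm(z_topology, axis=1, keepdims=True)
    z2 = z_feature / np.linalg.norm(z_feature, axis=1, keepdims=True)
    return float(-np.mean(np.sum(z1 * z2, axis=1)))
```

Identical embeddings reach the minimum of -1; unrelated embeddings hover near 0.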

Graph Representation Learning • Node Classification • +1

Boosting Few-Shot Classification with View-Learnable Contrastive Learning

1 code implementation • 20 Jul 2021 • Xu Luo, Yuxuan Chen, Liangjian Wen, Lili Pan, Zenglin Xu

The goal of few-shot classification is to classify new categories with few labeled examples within each class.

Classification • Contrastive Learning • +1

AFINet: Attentive Feature Integration Networks for Image Classification

no code implementations • 10 May 2021 • Xinglin Pan, Jing Xu, Yu Pan, Liangjian Wen, WenXiang Lin, Kun Bai, Zenglin Xu

Convolutional Neural Networks (CNNs) have achieved tremendous success in a number of learning tasks including image classification.

Classification • General Classification • +1

AFINets: Attentive Feature Integration Networks for Image Classification

no code implementations • 1 Jan 2021 • Xinglin Pan, Jing Xu, Yu Pan, WenXiang Lin, Liangjian Wen, Zenglin Xu

Convolutional Neural Networks (CNNs) have achieved tremendous success in a number of learning tasks, e.g., image classification.

Classification • General Classification • +1

Mutual Information Gradient Estimation for Representation Learning

1 code implementation • ICLR 2020 • Liangjian Wen, Yiji Zhou, Lirong He, Mingyuan Zhou, Zenglin Xu

To this end, we propose the Mutual Information Gradient Estimator (MIGE) for representation learning based on the score estimation of implicit distributions.
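MIGE estimates MI gradients through score functions of implicit distributions. The toy check below (my construction, not the paper's code) verifies the underlying entropy-gradient identity ∇θ H(z) = −E[∇z log p(z) · ∂z/∂θ] on a 1-D Gaussian where the score is available in closed form; MIGE's contribution is precisely to estimate that score when it is intractable:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0
x = rng.standard_normal(200_000)

z = theta * x                 # deterministic "encoder": z ~ N(0, theta^2)
score = -z / theta ** 2       # closed-form score d/dz log p(z) of that Gaussian
dz_dtheta = x                 # pathwise derivative of z w.r.t. theta

# score-based Monte Carlo estimate of dH(z)/dtheta
grad_H = float(-np.mean(score * dz_dtheta))
```

For N(0, θ²) the analytic entropy gradient is dH/dθ = 1/θ = 0.5, which the estimate should match closely at this sample size.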

Representation Learning

Low-rank Kernel Learning for Graph-based Clustering

no code implementations • 14 Mar 2019 • Zhao Kang, Liangjian Wen, Wenyu Chen, Zenglin Xu

By formulating graph construction and kernel learning in a unified framework, the graph and consensus kernel can be iteratively enhanced by each other.
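No code is listed; the alternating idea can be sketched as follows (a loose illustration under my own simplifications, not the authors' optimization problem): keep each node's strongest affinities as the graph, then reweight the base kernels by their agreement with that graph, and repeat:

```python
import numpy as np

def alternate_graph_kernel(base_kernels, k=3, n_iters=5):
    """Alternate between a kNN-sparsified graph and a consensus kernel.

    base_kernels: list of (n, n) symmetric kernel matrices.
    Returns the consensus kernel K and the learned graph S.
    """
    K = sum(base_kernels) / len(base_kernels)   # start from a uniform mix
    for _ in range(n_iters):
        # graph step: keep each node's k strongest affinities
        S = np.zeros_like(K)
        for i in range(K.shape[0]):
            idx = np.argsort(K[i])[-k:]
            S[i, idx] = K[i, idx]
        S = (S + S.T) / 2                       # symmetrize
        # kernel step: reweight base kernels by agreement with the graph
        w = np.array([np.sum(Kb * S) for Kb in base_kernels])
        w = np.maximum(w, 0)
        w = w / w.sum()
        K = sum(wi * Kb for wi, Kb in zip(w, base_kernels))
    return K, S
```

Each half-step uses the other's latest output, so the graph and the consensus kernel refine each other iteratively, as the abstract describes.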

Clustering • graph construction • +1
