Search Results for author: Bing Su

Found 24 papers, 5 papers with code

MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning

1 code implementation • 16 Sep 2022 • Jiangmeng Li, Wenwen Qiang, Yanan Zhang, Wenyi Mo, Changwen Zheng, Bing Su, Hui Xiong

As a successful approach to self-supervised learning, contrastive learning aims to learn invariant information shared among distortions of the input sample.

Contrastive Learning • Meta-Learning • +1
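
The invariance objective described here is typically instantiated as an InfoNCE-style loss over two distorted views of the same batch. Below is a minimal sketch of that generic objective, not the MetaMask method itself; all names and dimensions are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Generic InfoNCE loss over two batches of view embeddings (B, D)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (B, B) cosine-similarity matrix
    targets = torch.arange(z1.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: embeddings of two distortions of the same 8-sample batch.
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```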

Modeling Multiple Views via Implicitly Preserving Global Consistency and Local Complementarity

1 code implementation • 16 Sep 2022 • Jiangmeng Li, Wenwen Qiang, Changwen Zheng, Bing Su, Farid Razzak, Ji-Rong Wen, Hui Xiong

To this end, we propose a methodology, the consistency and complementarity network (CoCoNet), which leverages strict global inter-view consistency and local cross-view complementarity-preserving regularization to comprehensively learn representations from multiple views.

Representation Learning • Self-Supervised Learning
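
As a rough illustration of combining a global inter-view consistency term with a local complementarity regularizer, here is a toy two-term objective. The decorrelation-style regularizer below is an assumed stand-in; the actual CoCoNet losses differ in detail.

```python
import torch
import torch.nn.functional as F

def multiview_loss(h1, h2, lam=0.5):
    """Toy two-term multi-view objective: align paired views globally while
    a decorrelation-style term preserves complementary per-view information
    (an assumed stand-in, not the exact CoCoNet regularizers)."""
    h1, h2 = F.normalize(h1, dim=1), F.normalize(h2, dim=1)
    consistency = (1 - (h1 * h2).sum(dim=1)).mean()      # global inter-view consistency
    c = (h1 - h1.mean(0)).t() @ (h2 - h2.mean(0)) / h1.size(0)
    complementarity = c.fill_diagonal_(0).pow(2).sum()   # penalize cross-view redundancy
    return consistency + lam * complementarity

loss = multiview_loss(torch.randn(8, 64), torch.randn(8, 64))
```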

A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language

no code implementations • 12 Sep 2022 • Bing Su, Dazhao Du, Zhao Yang, Yujie Zhou, Jiangmeng Li, Anyi Rao, Hao Sun, Zhiwu Lu, Ji-Rong Wen

Although artificial intelligence (AI) has made significant progress in understanding molecules across a wide range of fields, existing models generally acquire a single cognitive ability from a single molecular modality.

Contrastive Learning • Cross-Modal Retrieval • +1
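
Cross-modal foundation models of this kind are commonly trained with a symmetric contrastive alignment between the two encoders' outputs. A minimal sketch with placeholder linear "encoders" (illustrative only; not the paper's architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphTextAligner(nn.Module):
    """Toy cross-modal model: placeholder encoders project a molecule-graph
    feature and a text feature into a shared space for contrastive alignment."""
    def __init__(self, g_dim=64, t_dim=96, d=128):
        super().__init__()
        self.g_proj = nn.Linear(g_dim, d)   # stands in for a graph encoder
        self.t_proj = nn.Linear(t_dim, d)   # stands in for a text encoder

    def forward(self, g_feat, t_feat, temperature=0.07):
        g = F.normalize(self.g_proj(g_feat), dim=1)
        t = F.normalize(self.t_proj(t_feat), dim=1)
        logits = g @ t.t() / temperature
        targets = torch.arange(g.size(0))
        # Symmetric contrastive loss: match graphs to texts and vice versa.
        return (F.cross_entropy(logits, targets)
                + F.cross_entropy(logits.t(), targets)) / 2

model = GraphTextAligner()
loss = model(torch.randn(8, 64), torch.randn(8, 96))
```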

Interventional Contrastive Learning with Meta Semantic Regularizer

no code implementations • 29 Jun 2022 • Wenwen Qiang, Jiangmeng Li, Changwen Zheng, Bing Su, Hui Xiong

Contrastive learning (CL)-based self-supervised learning models learn visual representations in a pairwise manner.

Contrastive Learning • Representation Learning • +1

SemMAE: Semantic-Guided Masking for Learning Masked Autoencoders

no code implementations • 21 Jun 2022 • Gang Li, Heliang Zheng, Daqing Liu, Chaoyue Wang, Bing Su, Changwen Zheng

In this paper, we explore a potential visual analogue of words, i.e., semantic parts, and we integrate semantic information into the training process of MAE by proposing a Semantic-Guided Masking strategy.

Language Modelling • Masked Language Modeling • +1
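
One way to read "semantic-guided masking" is that the masking ratio is applied within each semantic part rather than uniformly over all patches. The sketch below illustrates that idea only; how parts are obtained and how masking is scheduled in the actual paper is more involved.

```python
import torch

def semantic_guided_mask(part_ids, mask_ratio=0.75):
    """Toy semantic-guided masking: drop a fixed fraction of patches *within
    each semantic part* instead of uniformly at random (illustrative only).

    part_ids: (N,) integer part assignment for each of N patches.
    Returns a boolean mask, True = patch is masked.
    """
    mask = torch.zeros_like(part_ids, dtype=torch.bool)
    for p in part_ids.unique():
        idx = (part_ids == p).nonzero(as_tuple=True)[0]
        n_mask = int(round(mask_ratio * len(idx)))
        chosen = idx[torch.randperm(len(idx))[:n_mask]]
        mask[chosen] = True
    return mask

# Toy usage: 16 patches assigned to 4 semantic parts.
mask = semantic_guided_mask(torch.randint(0, 4, (16,)))
```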

Efficient U-Transformer with Boundary-Aware Loss for Action Segmentation

no code implementations • 26 May 2022 • Dazhao Du, Bing Su, Yu Li, Zhongang Qi, Lingyu Si, Ying Shan

Most state-of-the-art methods focus on designing temporal convolution-based models, but the difficulty of modeling long-term temporal dependencies and the inflexibility of temporal convolutions limit the potential of these models.

Action Classification • Action Segmentation • +1

Supporting Vision-Language Model Inference with Causality-pruning Knowledge Prompt

no code implementations • 23 May 2022 • Jiangmeng Li, Wenyi Mo, Wenwen Qiang, Bing Su, Changwen Zheng

Vision-language models are pre-trained by aligning image-text pairs in a common space so that the models can deal with open-set visual concepts by learning semantic information from textual labels.

Domain Generalization • Language Modelling
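
For context, prompt-based inference with a vision-language model typically fills a template with each class label and scores images against the encoded prompts. The sketch below shows that generic zero-shot pattern with a stand-in text encoder; the paper's causality-pruned knowledge prompts would replace the fixed template assumed here.

```python
import torch
import torch.nn.functional as F

def zero_shot_logits(image_feats, text_encoder, class_names,
                     template="a photo of a {}."):
    """Toy CLIP-style zero-shot scoring: fill a prompt template with each
    class label, encode the prompts, and score images by cosine similarity."""
    prompts = [template.format(c) for c in class_names]
    text_feats = F.normalize(text_encoder(prompts), dim=1)   # (C, D)
    image_feats = F.normalize(image_feats, dim=1)            # (B, D)
    return image_feats @ text_feats.t()                      # (B, C) logits

# Toy usage with a random stand-in text encoder (illustration only).
dummy_encoder = lambda prompts: torch.randn(len(prompts), 128)
logits = zero_shot_logits(torch.randn(4, 128), dummy_encoder, ["cat", "dog"])
```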

Semi-WTC: A Practical Semi-supervised Framework for Attack Categorization through Weight-Task Consistency

1 code implementation • 19 May 2022 • Zihan Li, Wentao Chen, Zhiqing Wei, Xingqi Luo, Bing Su

In addition, to cope with new attacks in real-world deployment, we propose an Active Adaption Resampling (AAR) method, which can better discover the distribution of unseen sample data and adapt the parameters of the encoder.

MetAug: Contrastive Learning via Meta Feature Augmentation

1 code implementation • 10 Mar 2022 • Jiangmeng Li, Wenwen Qiang, Changwen Zheng, Bing Su, Hui Xiong

We employ a meta-learning technique to build the augmentation generator, which updates its network parameters by considering the performance of the encoder.

Contrastive Learning • Informativeness • +1
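
A heavily simplified, first-order sketch of the stated idea, training an augmentation generator on the encoder's contrastive loss over its augmented features, follows; the paper's actual meta-learning scheme differs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy setup: a feature-augmentation generator updated via the encoder's
# contrastive loss (a simplified, first-order stand-in for meta-learning).
encoder = nn.Linear(32, 16)
generator = nn.Sequential(nn.Linear(16, 16), nn.Tanh())
opt_gen = torch.optim.SGD(generator.parameters(), lr=0.1)

x = torch.randn(8, 32)
z = F.normalize(encoder(x), dim=1)
z_aug = F.normalize(generator(z), dim=1)          # meta feature augmentation
logits = z @ z_aug.t() / 0.1                      # contrast originals vs. augmentations
loss = F.cross_entropy(logits, torch.arange(8))
opt_gen.zero_grad()
loss.backward()                                   # generator learns from encoder performance
opt_gen.step()
```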

Robust Local Preserving and Global Aligning Network for Adversarial Domain Adaptation

no code implementations • 8 Mar 2022 • Wenwen Qiang, Jiangmeng Li, Changwen Zheng, Bing Su, Hui Xiong

We conduct a theoretical analysis of the robustness of the proposed RLPGA and prove that the robust information-theoretic loss and the local preserving module are beneficial for reducing the empirical risk on the target domain.

Unsupervised Domain Adaptation

Preformer: Predictive Transformer with Multi-Scale Segment-wise Correlations for Long-Term Time Series Forecasting

no code implementations • 23 Feb 2022 • Dazhao Du, Bing Su, Zhewei Wei

In this way, if a key segment has a high correlation score with the query segment, its successive segment contributes more to the prediction of the query segment.

Time Series Forecasting
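
The mechanism described, where the successor of a well-correlated key segment drives the prediction, can be illustrated with a toy single-scale version; the paper's multi-scale segment-wise correlation is more elaborate.

```python
import torch

def segment_correlation_predict(series, seg_len, query):
    """Toy segment-wise correlation: score each key segment against the
    query segment, then average the *successive* segments weighted by the
    scores to predict what follows the query (illustrative only)."""
    n = series.size(0) // seg_len
    segs = series[: n * seg_len].view(n, seg_len)         # (n, seg_len)
    keys, successors = segs[:-1], segs[1:]                # each key -> its next segment
    scores = torch.softmax(keys @ query / seg_len ** 0.5, dim=0)
    return (scores.unsqueeze(1) * successors).sum(dim=0)  # predicted next segment

# Toy usage: univariate series split into segments of length 4.
series = torch.randn(40)
pred = segment_correlation_predict(series, 4, query=series[-4:])
```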

SCformer: Segment Correlation Transformer for Long Sequence Time Series Forecasting

no code implementations • 29 Sep 2021 • Dazhao Du, Bing Su, Zhewei Wei

Long-term time series forecasting is widely used in real-world applications such as financial investment, electricity management and production planning.

Management • Time Series Forecasting

Log-Polar Space Convolution

no code implementations • 29 Sep 2021 • Bing Su, Ji-Rong Wen

Convolutional neural networks use regular quadrilateral convolution kernels to extract features.
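
By contrast, a log-polar kernel samples on rings whose radii grow exponentially with distance from the center. A toy construction of such a sampling grid, assuming an illustrative parameterization rather than the paper's exact one:

```python
import numpy as np

def log_polar_offsets(num_rings=3, num_angles=8, max_radius=7.0):
    """Toy log-polar sampling grid: kernel sample locations lie on rings
    whose radii grow exponentially, unlike a regular square kernel."""
    radii = max_radius ** (np.arange(1, num_rings + 1) / num_rings)  # log-spaced radii
    angles = 2 * np.pi * np.arange(num_angles) / num_angles
    offsets = [(0.0, 0.0)]                                  # center tap
    for r in radii:
        offsets += [(r * np.cos(a), r * np.sin(a)) for a in angles]
    return np.array(offsets)                                # (1 + rings*angles, 2)

grid = log_polar_offsets()
```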

Domain-Invariant Representation Learning with Global and Local Consistency

no code implementations • 29 Sep 2021 • Wenwen Qiang, Jiangmeng Li, Jie Hu, Bing Su, Changwen Zheng, Hui Xiong

In this paper, we analyze the existing representation learning framework of unsupervised domain adaptation and show that the learned feature representations of the source-domain samples exhibit discriminability, compressibility, and transferability.

Representation Learning • Unsupervised Domain Adaptation

Information Theory-Guided Heuristic Progressive Multi-View Coding

no code implementations • 6 Sep 2021 • Jiangmeng Li, Wenwen Qiang, Hang Gao, Bing Su, Farid Razzak, Jie Hu, Changwen Zheng, Hui Xiong

To this end, we rethink the existing multi-view learning paradigm from the information theoretical perspective and then propose a novel information theoretical framework for generalized multi-view learning.

Contrastive Learning • Multi-View Learning • +1

Log-Polar Space Convolution for Convolutional Neural Networks

1 code implementation • 26 Jul 2021 • Bing Su, Ji-Rong Wen

Convolutional neural networks use regular quadrilateral convolution kernels to extract features.

Unsupervised Embedding Learning from Uncertainty Momentum Modeling

no code implementations • 19 Jul 2021 • Jiahuan Zhou, Yansong Tang, Bing Su, Ying Wu

We show that the performance limitation is caused by vanishing gradients on these sample outliers.

Order-Preserving Wasserstein Discriminant Analysis

no code implementations • ICCV 2019 • Bing Su, Jiahuan Zhou, Ying Wu

Supervised dimensionality reduction for sequence data projects the observations in sequences onto a low-dimensional subspace to better separate different sequence classes.

3D Action Recognition • Supervised dimensionality reduction

Learning Low-Dimensional Temporal Representations

no code implementations • ICML 2018 • Bing Su, Ying Wu

Low-dimensional discriminative representations enhance machine learning methods in both performance and complexity, motivating supervised dimensionality reduction (DR) that transforms high-dimensional data to a discriminative subspace.

Supervised dimensionality reduction

Order-Preserving Wasserstein Distance for Sequence Matching

no code implementations • CVPR 2017 • Bing Su, Gang Hua

We present a new distance measure between sequences that can tackle local temporal distortion and periodic sequences with arbitrary starting points.
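
The core idea, entropic optimal transport between two sequences with a prior that keeps the transport plan near the temporal diagonal, can be sketched as follows. This is a simplified Sinkhorn-based illustration; the paper's priors and normalization differ in detail.

```python
import numpy as np

def opw_distance(X, Y, lam=1.0, sigma=1.0, n_iter=50):
    """Toy order-preserving Wasserstein distance: entropic OT between two
    sequences of frames, with a prior favoring transport near the temporal
    "diagonal" so early frames match early frames (simplified sketch)."""
    n, m = len(X), len(Y)
    cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # pairwise frame cost
    i = (np.arange(n)[:, None] + 1) / n
    j = (np.arange(m)[None, :] + 1) / m
    prior = np.exp(-((i - j) ** 2) / (2 * sigma ** 2))      # order-preserving prior
    K = prior * np.exp(-cost / lam)                          # Gibbs kernel
    u = np.ones(n) / n
    for _ in range(n_iter):                                  # Sinkhorn iterations
        v = (np.ones(m) / m) / (K.T @ u)
        u = (np.ones(n) / n) / (K @ v)
    T = u[:, None] * K * v[None, :]                          # transport plan
    return (T * cost).sum()

# Toy usage: two sequences of 2-D frames with different lengths.
d = opw_distance(np.random.randn(10, 2), np.random.randn(12, 2))
```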

Heteroscedastic Max-Min Distance Analysis

no code implementations • CVPR 2015 • Bing Su, Xiaoqing Ding, Changsong Liu, Ying Wu

Many discriminant analysis methods such as LDA and HLDA actually maximize the average pairwise distances between classes, which often causes the class separation problem.

Dimensionality Reduction
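
The max-min idea can be contrasted with the average-distance criterion in a few lines: the quantity to maximize is the *minimum* pairwise between-class distance in the projected space, so no close class pair is sacrificed. A toy sketch using Euclidean distances between class means; the paper's heteroscedastic criterion uses a Chernoff-type distance instead.

```python
import numpy as np

def min_pairwise_class_distance(means, W):
    """Toy max-min criterion: return the minimum pairwise distance between
    class means after projection by W (the value a max-min method would
    maximize, instead of the average used by LDA-style methods)."""
    proj = means @ W                                   # project class means
    n = len(proj)
    dists = [np.linalg.norm(proj[i] - proj[j])
             for i in range(n) for j in range(i + 1, n)]
    return min(dists)

# Toy usage: 4 class means in 10-D, random orthonormal 3-D projection.
W = np.linalg.qr(np.random.randn(10, 3))[0]
crit = min_pairwise_class_distance(np.random.randn(4, 10), W)
```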
