Search Results for author: Baoyu Jing

Found 18 papers, 7 papers with code

Automated Contrastive Learning Strategy Search for Time Series

no code implementations 19 Mar 2024 Baoyu Jing, Yansen Wang, Guoxin Sui, Jing Hong, Jingrui He, Yuqing Yang, Dongsheng Li, Kan Ren

In recent years, Contrastive Learning (CL) has become a predominant representation learning paradigm for time series.

AutoML Contrastive Learning +3

CASPER: Causality-Aware Spatiotemporal Graph Neural Networks for Spatiotemporal Time Series Imputation

no code implementations 18 Mar 2024 Baoyu Jing, Dawei Zhou, Kan Ren, Carl Yang

Based on the results of the front-door adjustment, we introduce a novel Causality-Aware SPatiotEmpoRal graph neural network (CASPER), which contains a Spatiotemporal Causal Attention (SCA) module and a Prompt Based Decoder (PBD).

Imputation Time Series

Characterizing Long-Tail Categories on Graphs

no code implementations 17 May 2023 Haohui Wang, Baoyu Jing, Kaize Ding, Yada Zhu, Liqing Zhang, Dawei Zhou

However, there is limited literature that provides a theoretical tool to characterize the behaviors of long-tail categories on graphs and understand the generalization performance in real scenarios.

Contrastive Learning Multi-Task Learning

STERLING: Synergistic Representation Learning on Bipartite Graphs

no code implementations 25 Jan 2023 Baoyu Jing, Yuchen Yan, Kaize Ding, Chanyoung Park, Yada Zhu, Huan Liu, Hanghang Tong

Most recent bipartite graph SSL methods are based on contrastive learning which learns embeddings by discriminating positive and negative node pairs.

Contrastive Learning Graph Representation Learning +1
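
The node-pair discrimination described above (the contrastive baseline that recent bipartite graph SSL methods rely on) can be illustrated with a generic InfoNCE-style loss. This is a minimal sketch under assumed toy shapes, not STERLING's own objective:

```python
import torch
import torch.nn.functional as F

def infonce_node_loss(z1, z2, temperature=0.5):
    # z1[i] and z2[i] are two views of node i (a positive pair); every other
    # cross-view node serves as a negative. Hypothetical helper, not STERLING.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature      # (N, N) scaled cosine similarities
    labels = torch.arange(z1.size(0))       # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# toy usage: 8 nodes, 16-dim embeddings from two views of a graph
loss = infonce_node_loss(torch.randn(8, 16), torch.randn(8, 16))
```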

Retrieval Based Time Series Forecasting

no code implementations 27 Sep 2022 Baoyu Jing, Si Zhang, Yada Zhu, Bin Peng, Kaiyu Guan, Andrew Margenot, Hanghang Tong

In this paper, we show both theoretically and empirically that the uncertainty could be effectively reduced by retrieving relevant time series as references.

Imputation Retrieval +2
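
A minimal sketch of the retrieval step described above: find the k historical series closest to the query window and hand them to the forecaster as references. The similarity measure (Euclidean distance on z-normalized windows) and the helper name are assumptions; the paper's actual retrieval and fusion mechanism may differ.

```python
import numpy as np

def retrieve_references(query, pool, k=3):
    """Indices of the k pool series most similar to the query window
    (z-normalized Euclidean distance; an assumed similarity measure)."""
    znorm = lambda x: (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + 1e-8)
    dists = np.linalg.norm(znorm(pool) - znorm(query), axis=-1)
    return np.argsort(dists)[:k]

# toy usage: 3 reference windows for a length-24 query, drawn from a pool of 100 series
pool = np.random.randn(100, 24)
refs = pool[retrieve_references(np.random.randn(24), pool)]   # shape (3, 24)
```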

ARIEL: Adversarial Graph Contrastive Learning

1 code implementation 15 Aug 2022 Shengyu Feng, Baoyu Jing, Yada Zhu, Hanghang Tong

In this work, by introducing an adversarial graph view for data augmentation, we propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL), to extract informative contrastive samples within reasonable constraints.

Contrastive Learning Data Augmentation +1
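
A minimal sketch of the adversarial-view idea: perturb node features one step in the direction that lowers agreement with the anchor embeddings, within an epsilon budget. This is a simplified, feature-only FGSM-style illustration with a hypothetical `encoder`; ARIEL's actual formulation also perturbs the graph structure and adds further constraints.

```python
import torch
import torch.nn.functional as F

def adversarial_feature_view(encoder, x, edge_index, anchor_z, eps=0.01):
    """One ascent step on the disagreement between the perturbed view and the
    anchor embeddings (simplified stand-in for ARIEL's adversarial graph view)."""
    x_adv = x.clone().detach().requires_grad_(True)
    z = F.normalize(encoder(x_adv, edge_index), dim=-1)
    agreement = (z * F.normalize(anchor_z, dim=-1)).sum(dim=-1).mean()
    (-agreement).backward()                      # gradient of the disagreement
    with torch.no_grad():
        x_adv = x + eps * x_adv.grad.sign()      # bounded feature perturbation
    return x_adv.detach()

# toy usage with a linear "encoder" that ignores the graph structure
W = torch.randn(16, 8)
enc = lambda feats, edges: feats @ W
x = torch.randn(10, 16)
x_adv = adversarial_feature_view(enc, x, edge_index=None, anchor_z=enc(x, None))
```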

COIN: Co-Cluster Infomax for Bipartite Graphs

no code implementations 31 May 2022 Baoyu Jing, Yuchen Yan, Yada Zhu, Hanghang Tong

We theoretically prove that COIN effectively increases the mutual information of node embeddings and that COIN is upper-bounded by the prior distributions of nodes.

Drug Discovery Information Retrieval +3

Graph Communal Contrastive Learning

1 code implementation 28 Oct 2021 Bolian Li, Baoyu Jing, Hanghang Tong

We argue that community information should be considered to identify node pairs in the same communities, since the nodes inside a community are semantically similar.

Community Detection Contrastive Learning +1
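
A minimal sketch of using community information when forming positive pairs: in a cross-view contrastive loss, nodes from the same community (including the node itself) are treated as positives. This only illustrates the idea in the snippet; it is not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def communal_contrastive_loss(z1, z2, communities, temperature=0.5):
    """Cross-view loss treating same-community nodes as positives
    (simplified sketch, not the paper's exact formulation)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = torch.exp(z1 @ z2.t() / temperature)                             # (N, N)
    pos_mask = (communities.unsqueeze(0) == communities.unsqueeze(1)).float()
    pos = (sim * pos_mask).sum(dim=1)
    return -torch.log(pos / sim.sum(dim=1)).mean()

# toy usage: 6 nodes split into 2 communities, two augmented views
loss = communal_contrastive_loss(torch.randn(6, 8), torch.randn(6, 8),
                                 torch.tensor([0, 0, 0, 1, 1, 1]))
```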

X-GOAL: Multiplex Heterogeneous Graph Prototypical Contrastive Learning

no code implementations 8 Sep 2021 Baoyu Jing, Shengyu Feng, Yuejia Xiang, Xi Chen, Yu Chen, Hanghang Tong

X-GOAL is comprised of two components: the GOAL framework, which learns node embeddings for each homogeneous graph layer, and an alignment regularization, which jointly models different layers by aligning layer-specific node embeddings.

Contrastive Learning Graph Learning +2
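
A minimal sketch of an alignment regularizer in the spirit described above: pull each layer's embedding of a node toward its cross-layer mean so the layer-specific embeddings stay aligned. The function is a simplified stand-in, not X-GOAL's exact regularization term.

```python
import torch
import torch.nn.functional as F

def layer_alignment_reg(layer_embs):
    """Pull each layer's node embeddings toward their cross-layer mean
    (simplified stand-in for X-GOAL's alignment regularization)."""
    center = torch.stack(layer_embs, dim=0).mean(dim=0)       # (N, d) cross-layer anchor
    return sum(F.mse_loss(z, center) for z in layer_embs) / len(layer_embs)

# toy usage: two relation layers over the same 10 nodes
reg = layer_alignment_reg([torch.randn(10, 16), torch.randn(10, 16)])
```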

Network of Tensor Time Series

1 code implementation 15 Feb 2021 Baoyu Jing, Hanghang Tong, Yada Zhu

We propose a novel model called Network of Tensor Time Series, which comprises two modules: a Tensor Graph Convolutional Network (TGCN) and a Tensor Recurrent Neural Network (TRNN).

Tensor Decomposition Time Series +1
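
A minimal sketch of the general pattern behind the two modules: a graph convolution mixes information over the network at each time step, and a recurrent unit models the temporal dynamics. NeT3's TGCN/TRNN generalize this to tensor-valued series with one network per mode, so the class below is only an illustrative assumption.

```python
import torch
import torch.nn as nn

class GraphRecurrentSketch(nn.Module):
    """Alternate a simple graph convolution (A_hat @ X @ W) with a GRU over time;
    illustrative only, not the actual TGCN/TRNN modules."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w = nn.Linear(in_dim, hid_dim)
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)

    def forward(self, x, a_hat):
        # x: (T, N, in_dim) series on N nodes; a_hat: (N, N) normalized adjacency
        h = torch.relu(torch.einsum('ij,tjf->tif', a_hat, self.w(x)))  # spatial mixing
        out, _ = self.gru(h.transpose(0, 1))                           # (N, T, hid) temporal
        return out[:, -1]                                              # last-step node states

# toy usage: 12 time steps, 5 nodes, 3 features per node
model = GraphRecurrentSketch(3, 8)
z = model(torch.randn(12, 5, 3), torch.eye(5))
```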

HDMI: High-order Deep Multiplex Infomax

1 code implementation 15 Feb 2021 Baoyu Jing, Chanyoung Park, Hanghang Tong

To address the above-mentioned problems, we propose a novel framework, called High-order Deep Multiplex Infomax (HDMI), for learning node embedding on multiplex networks in a self-supervised way.

Node Classification Representation Learning +1

SODA: Detecting Covid-19 in Chest X-rays with Semi-supervised Open Set Domain Adaptation

no code implementations 22 May 2020 Jieli Zhou, Baoyu Jing, Zeya Wang

However, direct transfer across datasets from different domains may lead to poor CNN performance due to two issues: the large domain shift present in biomedical imaging datasets and the extremely small scale of the COVID-19 chest X-ray dataset.

Domain Adaptation Image Classification

Show, Describe and Conclude: On Exploiting the Structure Information of Chest X-Ray Reports

no code implementations ACL 2019 Baoyu Jing, Zeya Wang, Eric Xing

In this work, we propose a novel framework that exploits the structure information between and within report sections for generating CXR imaging reports.

Descriptive

Adversarial Domain Adaptation Being Aware of Class Relationships

no code implementations 28 May 2019 Zeya Wang, Baoyu Jing, Yang Ni, Nanqing Dong, Pengtao Xie, Eric P. Xing

In this paper, we propose a novel relationship-aware adversarial domain adaptation (RADA) algorithm, which first utilizes a single multi-class domain discriminator to enforce the learning of inter-class dependency structure during domain-adversarial training and then aligns this structure with the inter-class dependencies that are characterized from training the label predictor on source domain.

Domain Adaptation Transfer Learning
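
A minimal sketch of a class-aware domain discriminator trained adversarially through a gradient-reversal layer, with one logit per (class, domain) pair so the adversarial signal carries inter-class structure. All names are hypothetical, and RADA's subsequent structure-alignment step is omitted.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, gradient reversal in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class ClassAwareDomainDiscriminator(nn.Module):
    """One domain logit per (class, domain) pair; hypothetical simplification."""
    def __init__(self, feat_dim, num_classes, num_domains=2, lam=1.0):
        super().__init__()
        self.lam = lam
        self.head = nn.Linear(feat_dim, num_classes * num_domains)

    def forward(self, feats):
        return self.head(GradReverse.apply(feats, self.lam))

# toy usage: 4 samples, 32-dim features, 10 classes, 2 domains -> (4, 20) logits
logits = ClassAwareDomainDiscriminator(32, 10)(torch.randn(4, 32))
```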

Cross-Domain Labeled LDA for Cross-Domain Text Classification

1 code implementation 16 Sep 2018 Baoyu Jing, Chenwei Lu, Deqing Wang, Fuzhen Zhuang, Cheng Niu

To this end, we embed the group alignment and a partial supervision into a cross-domain topic model, and propose a Cross-Domain Labeled LDA (CDL-LDA).

Cross-Domain Text Classification General Classification +1

On the Automatic Generation of Medical Imaging Reports

4 code implementations ACL 2018 Baoyu Jing, Pengtao Xie, Eric Xing

To cope with these challenges, we (1) build a multi-task learning framework which jointly performs the prediction of tags and the generation of paragraphs, (2) propose a co-attention mechanism to localize regions containing abnormalities and generate narrations for them, and (3) develop a hierarchical LSTM model to generate long paragraphs.

Medical Report Generation Multi-Task Learning
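
A minimal sketch of the hierarchical-LSTM idea in point (3): a sentence-level LSTM emits one topic vector per sentence and a word-level LSTM generates each sentence from its topic. The co-attention and tag-prediction branches are omitted, and all module names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalDecoderSketch(nn.Module):
    """Sentence-level LSTM produces per-sentence topics; word-level LSTM decodes
    each sentence from its topic. Simplified sketch, not the paper's full model."""
    def __init__(self, ctx_dim, hid_dim, vocab_size, max_sents=6, max_words=20):
        super().__init__()
        self.sent_lstm = nn.LSTMCell(ctx_dim, hid_dim)
        self.word_lstm = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)
        self.max_sents, self.max_words = max_sents, max_words

    def forward(self, ctx):
        # ctx: (B, ctx_dim) visual context, e.g. pooled CNN features of the X-ray
        B, device = ctx.size(0), ctx.device
        h = c = torch.zeros(B, self.sent_lstm.hidden_size, device=device)
        sentences = []
        for _ in range(self.max_sents):
            h, c = self.sent_lstm(ctx, (h, c))                        # topic for this sentence
            topic = h.unsqueeze(1).expand(-1, self.max_words, -1).contiguous()
            words, _ = self.word_lstm(topic)                          # (B, max_words, hid)
            sentences.append(self.out(words))                         # per-word vocab logits
        return torch.stack(sentences, dim=1)                          # (B, max_sents, max_words, V)

# toy usage: batch of 2 images, 32-dim context, 100-word vocabulary
dec = HierarchicalDecoderSketch(ctx_dim=32, hid_dim=64, vocab_size=100)
logits = dec(torch.randn(2, 32))
```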
