no code implementations • 17 May 2023 • Haohui Wang, Baoyu Jing, Kaize Ding, Yada Zhu, Dawei Zhou
However, there is limited literature that provides a theoretical tool to characterize the behaviors of long-tail categories on graphs and understand the generalization performance in real scenarios.
no code implementations • 25 Jan 2023 • Baoyu Jing, Yuchen Yan, Kaize Ding, Chanyoung Park, Yada Zhu, Huan Liu, Hanghang Tong
Self-Supervised Learning (SSL) is a promising paradigm to address this challenge.
no code implementations • 27 Sep 2022 • Baoyu Jing, Si Zhang, Yada Zhu, Bin Peng, Kaiyu Guan, Andrew Margenot, Hanghang Tong
In this paper, we show both theoretically and empirically that the uncertainty could be effectively reduced by retrieving relevant time series as references.
no code implementations • 15 Aug 2022 • Shengyu Feng, Baoyu Jing, Yada Zhu, Hanghang Tong
In this work, by introducing an adversarial graph view for data augmentation, we propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL), to extract informative contrastive samples within reasonable constraints.
no code implementations • 31 May 2022 • Baoyu Jing, Yuchen Yan, Yada Zhu, Hanghang Tong
We theoretically prove that COIN effectively increases the mutual information of node embeddings and that COIN is upper-bounded by the prior distributions of nodes.
no code implementations • 14 Feb 2022 • Shengyu Feng, Baoyu Jing, Yada Zhu, Hanghang Tong
Contrastive learning is an effective unsupervised method in graph representation learning.
1 code implementation • 28 Oct 2021 • Bolian Li, Baoyu Jing, Hanghang Tong
We argue that community information should be considered to identify node pairs in the same communities, where the nodes inside are semantically similar.
no code implementations • 8 Sep 2021 • Baoyu Jing, Shengyu Feng, Yuejia Xiang, Xi Chen, Yu Chen, Hanghang Tong
X-GOAL comprises two components: the GOAL framework, which learns node embeddings for each homogeneous graph layer, and an alignment regularization, which jointly models different layers by aligning layer-specific node embeddings.
no code implementations • EMNLP 2021 • Baoyu Jing, Zeyu You, Tao Yang, Wei Fan, Hanghang Tong
Extractive text summarization aims at extracting the most representative sentences from a given document as its summary.
1 code implementation • 15 Feb 2021 • Baoyu Jing, Chanyoung Park, Hanghang Tong
To address the above-mentioned problems, we propose a novel framework, called High-order Deep Multiplex Infomax (HDMI), for learning node embedding on multiplex networks in a self-supervised way.
1 code implementation • 15 Feb 2021 • Baoyu Jing, Hanghang Tong, Yada Zhu
We propose a novel model called Network of Tensor Time Series, which comprises two modules: a Tensor Graph Convolutional Network (TGCN) and a Tensor Recurrent Neural Network (TRNN).
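The general idea of combining graph convolution over a tensor's modes with recurrence over time can be sketched as below. This is a minimal illustrative toy, not the paper's actual TGCN/TRNN implementation; the function names, the symmetric normalization, and the moving-average "recurrent" unit are all simplifying assumptions.

```python
import numpy as np

def normalized_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops (assumed scheme)."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def tensor_graph_conv(X, adjs):
    """Apply one graph-convolution step along each mode of tensor X.

    X:    tensor of shape (n_1, ..., n_k) for a single time step
    adjs: list of k adjacency matrices, adjs[i] of shape (n_i, n_i)
    """
    out = X
    for mode, A in enumerate(adjs):
        A_hat = normalized_adj(A)
        # Contract A_hat with the given mode, then move the result axis back.
        out = np.moveaxis(np.tensordot(A_hat, out, axes=(1, mode)), 0, mode)
    return out

def recurrent_aggregate(tensors, alpha=0.5):
    """Toy recurrent unit: exponential moving average over time steps."""
    h = np.zeros_like(tensors[0])
    for X in tensors:
        h = alpha * h + (1 - alpha) * X
    return h

# Usage: a 2-mode tensor time series (3 locations x 2 variables, 5 steps),
# with one graph per mode (hypothetical toy graphs).
rng = np.random.default_rng(0)
series = [rng.normal(size=(3, 2)) for _ in range(5)]
A_loc = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # location graph
A_var = np.array([[0, 1], [1, 0]], float)                   # variable graph

smoothed = [tensor_graph_conv(X, [A_loc, A_var]) for X in series]
h = recurrent_aggregate(smoothed)
print(h.shape)  # -> (3, 2)
```

The per-mode contraction is what makes the convolution "tensor-aware": each mode of the data gets smoothed by its own graph before the temporal module aggregates across steps.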
no code implementations • 22 May 2020 • Jieli Zhou, Baoyu Jing, Zeya Wang
However, direct transfer across datasets from different domains may lead to poor performance for CNNs due to two issues: the large domain shift present in biomedical imaging datasets and the extremely small scale of the COVID-19 chest x-ray dataset.
no code implementations • ACL 2019 • Baoyu Jing, Zeya Wang, Eric Xing
In this work, we propose a novel framework that exploits the structure information between and within report sections for generating CXR imaging reports.
no code implementations • 28 May 2019 • Zeya Wang, Baoyu Jing, Yang Ni, Nanqing Dong, Pengtao Xie, Eric P. Xing
In this paper, we propose a novel relationship-aware adversarial domain adaptation (RADA) algorithm, which first utilizes a single multi-class domain discriminator to enforce the learning of inter-class dependency structure during domain-adversarial training and then aligns this structure with the inter-class dependencies that are characterized from training the label predictor on source domain.
1 code implementation • 16 Sep 2018 • Baoyu Jing, Chenwei Lu, Deqing Wang, Fuzhen Zhuang, Cheng Niu
To this end, we embed the group alignment and a partial supervision into a cross-domain topic model, and propose a Cross-Domain Labeled LDA (CDL-LDA).
4 code implementations • ACL 2018 • Baoyu Jing, Pengtao Xie, Eric Xing
To cope with these challenges, we (1) build a multi-task learning framework which jointly performs the prediction of tags and the generation of paragraphs, (2) propose a co-attention mechanism to localize regions containing abnormalities and generate narrations for them, and (3) develop a hierarchical LSTM model to generate long paragraphs.