Search Results for author: Yatao Bian

Found 44 papers, 21 papers with code

Graph Unitary Message Passing

no code implementations 17 Mar 2024 Haiquan Qiu, Yatao Bian, Quanming Yao

Then, the unitary adjacency matrix is obtained with a unitary projection algorithm, which is implemented by utilizing the intrinsic structure of the unitary adjacency matrix and allows GUMP to be permutation-equivariant.
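
The projection mentioned above can be illustrated with a short, hedged sketch: a generic nearest-unitary projection obtained from the polar decomposition via an SVD. This is an assumption-laden illustration only; GUMP's actual projection exploits the intrinsic structure of the unitary adjacency matrix, which the excerpt does not detail, and the function name below is made up for the example.

```python
import numpy as np

def project_to_unitary(A: np.ndarray) -> np.ndarray:
    """Project a square matrix onto the closest unitary matrix in Frobenius
    norm via the polar decomposition: for A = W @ diag(s) @ Vh (SVD),
    the unitary factor is W @ Vh.

    Illustrative only: GUMP's projection additionally exploits the structure
    of the unitary adjacency matrix to stay permutation-equivariant.
    """
    W, _, Vh = np.linalg.svd(A)
    return W @ Vh

# Toy usage on a small adjacency matrix.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
U = project_to_unitary(A)
print(np.allclose(U.conj().T @ U, np.eye(3)))  # True: U is unitary
```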

Graph Learning

Step-On-Feet Tuning: Scaling Self-Alignment of LLMs via Bootstrapping

no code implementations 12 Feb 2024 Haoyu Wang, Guozheng Ma, Ziqiao Meng, Zeyu Qin, Li Shen, Zhong Zhang, Bingzhe Wu, Liu Liu, Yatao Bian, Tingyang Xu, Xueqian Wang, Peilin Zhao

To further exploit the capabilities of bootstrapping, we investigate and adjust the training order of the data, which yields improved model performance.

In-Context Learning

Enhancing Neural Subset Selection: Integrating Background Information into Set Representations

no code implementations 5 Feb 2024 Binghui Xie, Yatao Bian, Kaiwen Zhou, Yongqiang Chen, Peilin Zhao, Bo Han, Wei Meng, James Cheng

Learning neural subset selection tasks, such as compound selection in AI-aided drug discovery, has become increasingly pivotal across diverse applications.

Drug Discovery

Integration of cognitive tasks into artificial general intelligence test for large models

no code implementations 4 Feb 2024 Youzhi Qu, Chen Wei, Penghui Du, Wenxin Che, Chi Zhang, Wanli Ouyang, Yatao Bian, Feiyang Xu, Bin Hu, Kai Du, Haiyan Wu, Jia Liu, Quanying Liu

During the evolution of large models, performance evaluation is necessary to assess their capabilities and ensure safety before practical application.

Rethinking and Simplifying Bootstrapped Graph Latents

1 code implementation 5 Dec 2023 Wangbin Sun, Jintang Li, Liang Chen, Bingzhe Wu, Yatao Bian, Zibin Zheng

Graph contrastive learning (GCL) has emerged as a representative paradigm in graph self-supervised learning, where negative samples are commonly regarded as the key to preventing model collapse and producing distinguishable representations.

Contrastive Learning Self-Supervised Learning

Positional Information Matters for Invariant In-Context Learning: A Case Study of Simple Function Classes

no code implementations 30 Nov 2023 Yongqiang Chen, Binghui Xie, Kaiwen Zhou, Bo Han, Yatao Bian, James Cheng

Surprisingly, DeepSet outperforms transformers across a variety of distribution shifts, implying that preserving permutation invariance with respect to the input demonstrations is crucial for OOD ICL.
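
A minimal NumPy sketch of the permutation-invariance property at play, using a DeepSet-style encoder (per-element map, sum pooling, readout). The layer sizes and names are illustrative assumptions, not the architecture studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy DeepSet: rho(sum_i phi(x_i)). Sum pooling makes the output invariant
# to any reordering of the input demonstrations.
W_phi = rng.normal(size=(4, 8))   # per-element map phi (illustrative sizes)
W_rho = rng.normal(size=(8, 1))   # readout rho

def deepset(X: np.ndarray) -> np.ndarray:
    """X has shape (num_demonstrations, 4); returns one invariant output."""
    phi = np.tanh(X @ W_phi)   # apply phi to each element independently
    pooled = phi.sum(axis=0)   # permutation-invariant sum pooling
    return pooled @ W_rho      # readout rho

X = rng.normal(size=(5, 4))
perm = rng.permutation(5)
print(np.allclose(deepset(X), deepset(X[perm])))  # True: order does not matter
```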

In-Context Learning

ETDock: A Novel Equivariant Transformer for Protein-Ligand Docking

no code implementations 12 Oct 2023 Yiqiang Yi, Xu Wan, Yatao Bian, Le Ou-Yang, Peilin Zhao

Predicting the docking between proteins and ligands is a crucial and challenging task for drug discovery.

Drug Discovery Pose Prediction

Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators

1 code implementation 11 Oct 2023 Liang Chen, Yang Deng, Yatao Bian, Zeyu Qin, Bingzhe Wu, Tat-Seng Chua, Kam-Fai Wong

Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks when prompted to generate world knowledge.

Information Retrieval Informativeness +4

SAILOR: Structural Augmentation Based Tail Node Representation Learning

1 code implementation 13 Aug 2023 Jie Liao, Jintang Li, Liang Chen, Bingzhe Wu, Yatao Bian, Zibin Zheng

To promote the expressiveness of GNNs for tail nodes, we explore how the deficiency of structural information degrades the performance of tail nodes, and propose a general Structural Augmentation based taIL nOde Representation learning framework, dubbed SAILOR, which jointly learns to augment the graph structure and extract more informative representations for tail nodes.

Representation Learning

SyNDock: N Rigid Protein Docking via Learnable Group Synchronization

no code implementations 23 May 2023 Yuanfeng Ji, Yatao Bian, Guoji Fu, Peilin Zhao, Ping Luo

Firstly, SyNDock formulates multimeric protein docking as a problem of learning global transformations to holistically depict the placement of chain units of a complex, enabling a learning-centric solution.

Understanding and Improving Feature Learning for Out-of-Distribution Generalization

1 code implementation NeurIPS 2023 Yongqiang Chen, Wei Huang, Kaiwen Zhou, Yatao Bian, Bo Han, James Cheng

Moreover, when the ERM-learned features are fed to the OOD objectives, the quality of invariant feature learning significantly affects the final OOD performance, as OOD objectives rarely learn new features.

Out-of-Distribution Generalization

Reweighted Mixup for Subpopulation Shift

no code implementations 9 Apr 2023 Zongbo Han, Zhipeng Liang, Fan Yang, Liu Liu, Lanqing Li, Yatao Bian, Peilin Zhao, QinGhua Hu, Bingzhe Wu, Changqing Zhang, Jianhua Yao

Subpopulation shift, which refers to settings where the training and test distributions contain the same subpopulation groups but in different proportions, exists widely in many real-world applications.

Fairness Generalization Bounds

Activity Cliff Prediction: Dataset and Benchmark

1 code implementation 15 Feb 2023 Ziqiao Zhang, Bangyi Zhao, Ailin Xie, Yatao Bian, Shuigeng Zhou

In this paper, we first introduce ACNet, a large-scale dataset for AC prediction.

Drug Discovery

Hierarchical Few-Shot Object Detection: Problem, Benchmark and Method

1 code implementation 8 Oct 2022 Lu Zhang, Yang Wang, Jiaogen Zhou, Chenbo Zhang, Yinglu Zhang, Jihong Guan, Yatao Bian, Shuigeng Zhou

In this paper, we propose and solve a new problem called hierarchical few-shot object detection (Hi-FSOD), which aims to detect objects with hierarchical categories in the FSOD paradigm.

Contrastive Learning Few-Shot Object Detection +2

UMIX: Improving Importance Weighting for Subpopulation Shift via Uncertainty-Aware Mixup

1 code implementation 19 Sep 2022 Zongbo Han, Zhipeng Liang, Fan Yang, Liu Liu, Lanqing Li, Yatao Bian, Peilin Zhao, Bingzhe Wu, Changqing Zhang, Jianhua Yao

Importance reweighting is a common way to handle the subpopulation shift issue by imposing constant or adaptive sampling weights on each sample in the training dataset.
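
A minimal sketch of the importance-reweighting baseline described above (not UMIX itself, whose uncertainty-aware mixup weights are not specified in this excerpt); the names and the inverse-frequency weighting are illustrative assumptions.

```python
import numpy as np

def reweighted_loss(per_sample_loss: np.ndarray, weights: np.ndarray) -> float:
    """Importance-reweighted objective: scale each sample's loss by a
    (constant or adaptive) weight, then average."""
    weights = weights / weights.sum()       # normalize the weights
    return float(np.sum(weights * per_sample_loss))

# Toy example: samples from the minority subpopulation get larger weights.
losses = np.array([0.2, 0.9, 0.4, 1.5])     # per-sample losses
group = np.array([0, 0, 0, 1])              # subpopulation label of each sample
freq = np.bincount(group)[group]            # group frequency of each sample
print(reweighted_loss(losses, weights=1.0 / freq))
```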

Generalization Bounds

Can Pre-trained Models Really Learn Better Molecular Representations for AI-aided Drug Discovery?

no code implementations 21 Aug 2022 Ziqiao Zhang, Yatao Bian, Ailin Xie, Pengju Han, Long-Kai Huang, Shuigeng Zhou

Self-supervised pre-training is gaining increasing popularity in AI-aided drug discovery, leading to more and more pre-trained models with the promise that they can extract better feature representations for molecules.

Drug Discovery

Diversity Boosted Learning for Domain Generalization with Large Number of Domains

no code implementations 28 Jul 2022 Xi Leng, Xiaoying Tang, Yatao Bian

Machine learning algorithms minimizing the average training loss usually suffer from poor generalization performance due to the greedy exploitation of correlations among the training data, which are not stable under distributional shifts.

Domain Generalization Point Processes +1

Hypergraph Convolutional Networks via Equivalency between Hypergraphs and Undirected Graphs

2 code implementations 31 Mar 2022 Jiying Zhang, Fuyang Li, Xi Xiao, Tingyang Xu, Yu Rong, Junzhou Huang, Yatao Bian

As a powerful tool for modeling complex relationships, hypergraphs are gaining popularity in the graph learning community.

Graph Learning

Fine-Tuning Graph Neural Networks via Graph Topology induced Optimal Transport

1 code implementation 20 Mar 2022 Jiying Zhang, Xi Xiao, Long-Kai Huang, Yu Rong, Yatao Bian

In this paper, we present a novel optimal transport-based fine-tuning framework, GTOT-Tuning (Graph Topology induced Optimal Transport fine-Tuning), for GNN-style backbones.

Graph Classification Graph Learning +2

Learning Neural Set Functions Under the Optimal Subset Oracle

1 code implementation 3 Mar 2022 Zijing Ou, Tingyang Xu, Qinliang Su, Yingzhen Li, Peilin Zhao, Yatao Bian

Learning neural set functions is becoming increasingly important in many applications such as product recommendation and compound selection in AI-aided drug discovery.

Anomaly Detection Drug Discovery +2

Transformer for Graphs: An Overview from Architecture Perspective

1 code implementation 17 Feb 2022 Erxue Min, Runfa Chen, Yatao Bian, Tingyang Xu, Kangfei Zhao, Wenbing Huang, Peilin Zhao, Junzhou Huang, Sophia Ananiadou, Yu Rong

In this survey, we provide a comprehensive review of various Graph Transformer models from the architectural design perspective.

Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack

no code implementations 15 Feb 2022 Jintang Li, Bingzhe Wu, Chengbin Hou, Guoji Fu, Yatao Bian, Liang Chen, Junzhou Huang, Zibin Zheng

Despite the progress, applying DGL to real-world applications faces a series of reliability threats including inherent noise, distribution shift, and adversarial attacks.

Adversarial Attack Graph Learning

Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs

3 code implementations 11 Feb 2022 Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, James Cheng

Despite recent success in using the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), studies on graph data are still limited.

Drug Discovery Graph Learning +1

Neighbour Interaction based Click-Through Rate Prediction via Graph-masked Transformer

no code implementations 25 Jan 2022 Erxue Min, Yu Rong, Tingyang Xu, Yatao Bian, Peilin Zhao, Junzhou Huang, Da Luo, Kangyi Lin, Sophia Ananiadou

Although these methods have made great progress, they are often limited by the recommender system's direct exposure and inactive interactions, and thus fail to mine all potential user interests.

Click-Through Rate Prediction Representation Learning

Not All Low-Pass Filters are Robust in Graph Convolutional Networks

1 code implementation NeurIPS 2021 Heng Chang, Yu Rong, Tingyang Xu, Yatao Bian, Shiji Zhou, Xin Wang, Junzhou Huang, Wenwu Zhu

Graph Convolutional Networks (GCNs) are promising deep learning approaches in learning representations for graph-structured data.

Independent SE(3)-Equivariant Models for End-to-End Rigid Protein Docking

1 code implementation ICLR 2022 Octavian-Eugen Ganea, Xinyuan Huang, Charlotte Bunne, Yatao Bian, Regina Barzilay, Tommi Jaakkola, Andreas Krause

Protein complex formation is a central problem in biology, being involved in most of the cell's processes and essential for applications, e.g., drug design or protein engineering.

Graph Matching Translation

$p$-Laplacian Based Graph Neural Networks

2 code implementations 14 Nov 2021 Guoji Fu, Peilin Zhao, Yatao Bian

Graph neural networks (GNNs) have demonstrated superior performance for semi-supervised node classification on graphs, as a result of their ability to exploit node features and topological information simultaneously.

Node Classification

Energy-Based Learning for Cooperative Games, with Applications to Valuation Problems in Machine Learning

no code implementations ICLR 2022 Yatao Bian, Yu Rong, Tingyang Xu, Jiaxiang Wu, Andreas Krause, Junzhou Huang

By running fixed point iteration for multiple steps, we obtain a trajectory of valuations, among which we define the valuation with the best conceivable decoupling error as the Variational Index.
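
The trajectory-then-select pattern described above can be sketched generically as below; the operator `T`, the `error_fn` criterion, and all names are placeholders, since the excerpt does not give the paper's actual fixed-point update or its decoupling error.

```python
def fixed_point_trajectory(T, x0, num_steps, error_fn):
    """Run x_{t+1} = T(x_t) for num_steps steps and return the iterate on the
    trajectory with the smallest error_fn value (a placeholder for the paper's
    decoupling error)."""
    trajectory = [x0]
    for _ in range(num_steps):
        trajectory.append(T(trajectory[-1]))
    return min(trajectory, key=error_fn)

# Toy usage with a contraction mapping and a stand-in error criterion.
T = lambda x: 0.5 * x + 1.0            # fixed point at x = 2
error_fn = lambda x: abs(T(x) - x)     # residual as a stand-in "error"
print(fixed_point_trajectory(T, x0=0.0, num_steps=10, error_fn=error_fn))
```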

Data Valuation Variational Inference

Diversified Multiscale Graph Learning with Graph Self-Correction

no code implementations 17 Mar 2021 Yuzhao Chen, Yatao Bian, Jiying Zhang, Xi Xiao, Tingyang Xu, Yu Rong, Junzhou Huang

Though multiscale graph learning techniques have enabled advanced feature extraction frameworks, the classic ensemble strategy may show inferior performance when the learnt representations are highly homogeneous, which is caused by the nature of existing graph pooling methods.

Ensemble Learning Graph Classification +1

On Self-Distilling Graph Neural Network

no code implementations 4 Nov 2020 Yuzhao Chen, Yatao Bian, Xi Xiao, Yu Rong, Tingyang Xu, Junzhou Huang

Furthermore, the inefficient training process of teacher-student knowledge distillation also impedes its application to GNN models.

Graph Embedding Knowledge Distillation

Graph Information Bottleneck for Subgraph Recognition

1 code implementation ICLR 2021 Junchi Yu, Tingyang Xu, Yu Rong, Yatao Bian, Junzhou Huang, Ran He

In this paper, we propose a Graph Information Bottleneck (GIB) framework for the subgraph recognition problem in deep graph learning.
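
For reference, the generic information-bottleneck objective for subgraph recognition is commonly written as

\[ \max_{G_{\mathrm{sub}}} \; I(G_{\mathrm{sub}}; Y) \;-\; \beta \, I(G_{\mathrm{sub}}; G), \]

where G is the input graph, Y its label, G_sub the recognized subgraph, and beta > 0 trades predictiveness against compression. The trade-off coefficient and the estimators of the two mutual-information terms are not specified in this excerpt, so this is the standard form rather than the paper's exact formulation.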

Denoising Graph Classification +1

Continuous Submodular Function Maximization

no code implementations 24 Jun 2020 Yatao Bian, Joachim M. Buhmann, Andreas Krause

We start with a thorough characterization of the class of continuous submodular functions, and show that continuous submodularity is equivalent to a weak version of the diminishing returns (DR) property.
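
For concreteness, the weak diminishing-returns property referred to above is commonly stated as follows (standard form; the paper's exact phrasing may differ): for F defined on a box X and a coordinate direction e_i,

\[ F(a + k\,e_i) - F(a) \;\ge\; F(b + k\,e_i) - F(b) \quad \text{for all } a \le b \text{ with } a_i = b_i \text{ and all } k \ge 0 \]

such that a + k e_i and b + k e_i stay in X. Continuous submodularity, F(x ∨ y) + F(x ∧ y) ≤ F(x) + F(y), is equivalent to this weak DR condition, while the full DR property (the same inequality without the restriction a_i = b_i) is strictly stronger.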

Self-Supervised Graph Transformer on Large-Scale Molecular Data

3 code implementations NeurIPS 2020 Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying WEI, Wenbing Huang, Junzhou Huang

We pre-train GROVER with 100 million parameters on 10 million unlabelled molecules -- the biggest GNN and the largest training dataset in molecular representation learning.

Molecular Property Prediction molecular representation +2

Multi-View Graph Neural Networks for Molecular Property Prediction

no code implementations 17 May 2020 Hehuan Ma, Yatao Bian, Yu Rong, Wenbing Huang, Tingyang Xu, Weiyang Xie, Geyan Ye, Junzhou Huang

Guided by this observation, we present Multi-View Graph Neural Network (MV-GNN), a multi-view message passing architecture to enable more accurate predictions of molecular properties.

Drug Discovery Molecular Property Prediction +1
