Search Results for author: Haitao Mao

Found 29 papers, 17 papers with code

One Model for One Graph: A New Perspective for Pretraining with Cross-domain Graphs

no code implementations • 30 Nov 2024 • Jingzhe Liu, Haitao Mao, Zhikai Chen, Wenqi Fan, Mingxuan Ju, Tong Zhao, Neil Shah, Jiliang Tang

Graph Neural Networks (GNNs) have emerged as a powerful tool to capture intricate network patterns, achieving success across different domains.

Link Prediction Node Classification

Do Neural Scaling Laws Exist on Graph Self-Supervised Learning?

no code implementations • 20 Aug 2024 • Qian Ma, Haitao Mao, Jingzhe Liu, Zhehua Zhang, Chunlin Feng, Yu Song, Yihan Shao, Yao Ma

This paper examines existing graph SSL techniques for their feasibility in developing GFMs, and opens a new direction for graph SSL design with a new evaluation prototype.

Self-Supervised Learning

Intrinsic Self-correction for Enhanced Morality: An Analysis of Internal Mechanisms and the Superficial Hypothesis

no code implementations • 21 Jul 2024 • Guangliang Liu, Haitao Mao, Jiliang Tang, Kristen Marie Johnson

Through empirical investigation with tasks of language generation and multiple-choice question answering, we conclude: (i) LLMs exhibit good performance across both tasks, and self-correction instructions are particularly beneficial when the correct answer is already top-ranked; (ii) the morality levels in intermediate hidden states are strong indicators as to whether one instruction would be more effective than another; (iii) based on our analysis of intermediate hidden states and task case studies of self-correction behaviors, we are the first to propose the hypothesis that intrinsic moral self-correction is in fact superficial.

Question Answering Text Generation

A Pure Transformer Pretraining Framework on Text-attributed Graphs

1 code implementation • 19 Jun 2024 • Yu Song, Haitao Mao, Jiachen Xiao, Jingzhe Liu, Zhikai Chen, Wei Jin, Carl Yang, Jiliang Tang, Hui Liu

Pretraining plays a pivotal role in acquiring generalized knowledge from large-scale data, achieving remarkable successes as evidenced by large models in CV and NLP.

Link Prediction Node Classification

Text-space Graph Foundation Models: Comprehensive Benchmarks and New Insights

1 code implementation • 15 Jun 2024 • Zhikai Chen, Haitao Mao, Jingzhe Liu, Yu Song, Bingheng Li, Wei Jin, Bahare Fatemi, Anton Tsitsulin, Bryan Perozzi, Hui Liu, Jiliang Tang

First, the absence of a comprehensive benchmark with unified problem settings hinders a clear understanding of the comparative effectiveness and practical value of different text-space GFMs.

On the Intrinsic Self-Correction Capability of LLMs: Uncertainty and Latent Concept

no code implementations • 4 Jun 2024 • Guangliang Liu, Haitao Mao, Bochuan Cao, Zhiyu Xue, Xitong Zhang, Rongrong Wang, Jiliang Tang, Kristen Johnson

Our findings are verified in: (1) the scenario of multi-round question answering, by comprehensively demonstrating that intrinsic self-correction can progressively introduce performance gains through iterative interactions, ultimately converging to stable performance; and (2) the context of intrinsic self-correction for enhanced morality, in which we provide empirical evidence that iteratively applying instructions reduces model uncertainty towards convergence, which then leads to convergence of both the calibration error and self-correction performance, ultimately resulting in a stable state of intrinsic self-correction.

Question Answering Safety Alignment

PDHG-Unrolled Learning-to-Optimize Method for Large-Scale Linear Programming

1 code implementation • 4 Jun 2024 • Bingheng Li, Linxin Yang, Yupeng Chen, Senmiao Wang, Qian Chen, Haitao Mao, Yao Ma, Akang Wang, Tian Ding, Jiliang Tang, Ruoyu Sun

In this work, we propose an FOM-unrolled neural network (NN) called PDHG-Net, together with a two-stage L2O method, to solve large-scale LP problems.
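
To make the unrolling idea concrete, below is a minimal NumPy sketch of the classical PDHG iteration for an equality-form LP (min c^T x s.t. Ax = b, x >= 0), i.e., the kind of update rule a network like PDHG-Net stacks into learned layers. The problem form and step sizes here are textbook assumptions, not details taken from the paper.

```python
# Classical PDHG for an LP in equality form; PDHG-Net unrolls this kind of
# iteration into network layers. Step sizes satisfy the standard condition
# tau * sigma * ||A||^2 < 1. Illustrative sketch only.
import numpy as np

def pdhg_lp(c, A, b, iters=1000):
    m, n = A.shape
    tau = sigma = 0.9 / np.linalg.norm(A, 2)  # spectral norm of A
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # Primal step: gradient step on the Lagrangian, projected onto x >= 0.
        x_new = np.maximum(0.0, x - tau * (c - A.T @ y))
        # Dual step: ascent using the extrapolated primal iterate 2*x_new - x.
        y = y + sigma * (b - A @ (2 * x_new - x))
        x = x_new
    return x, y
```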

Cross-Domain Graph Data Scaling: A Showcase with Diffusion Models

2 code implementations • 4 Jun 2024 • Wenzhuo Tang, Haitao Mao, Danial Dervovic, Ivan Brugere, Saumitra Mishra, Yuying Xie, Jiliang Tang

To achieve effective data scaling, we aim to develop a general model that is able to capture diverse data patterns of graphs and can be utilized to adaptively help the downstream tasks.

Graph Machine Learning in the Era of Large Language Models (LLMs)

no code implementations • 23 Apr 2024 • Wenqi Fan, Shijie Wang, Jiani Huang, Zhikai Chen, Yu Song, Wenzhuo Tang, Haitao Mao, Hui Liu, Xiaorui Liu, Dawei Yin, Qing Li

Meanwhile, graphs, especially knowledge graphs, are rich in reliable factual knowledge, which can be utilized to enhance the reasoning capabilities of LLMs and potentially alleviate their limitations such as hallucinations and the lack of explainability.

Few-Shot Learning Knowledge Graphs +1

Addressing Shortcomings in Fair Graph Learning Datasets: Towards a New Benchmark

1 code implementation • 9 Mar 2024 • Xiaowei Qian, Zhimeng Guo, Jialiang Li, Haitao Mao, Bingheng Li, Suhang Wang, Yao Ma

These datasets are thoughtfully designed to include relevant graph structures and bias information crucial for the fair evaluation of models.

Benchmarking Fairness +1

Universal Link Predictor By In-Context Learning on Graphs

no code implementations • 12 Feb 2024 • Kaiwen Dong, Haitao Mao, Zhichun Guo, Nitesh V. Chawla

In this work, we introduce the Universal Link Predictor (UniLP), a novel model that combines the generalizability of heuristic approaches with the pattern learning capabilities of parametric models.

Hyperparameter Optimization In-Context Learning +1

Towards Neural Scaling Laws on Graphs

1 code implementation • 3 Feb 2024 • Jingzhe Liu, Haitao Mao, Zhikai Chen, Tong Zhao, Neil Shah, Jiliang Tang

Yet, the neural scaling laws on graphs, i.e., how the performance of deep graph models changes with model and dataset sizes, have not been systematically investigated, casting doubt on the feasibility of achieving large graph models.

Graph Classification Link Prediction +1
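
For readers unfamiliar with the term, a neural scaling law is typically a power law relating test loss to model or dataset size, loss(N) ≈ a·N^(-alpha) + c. A hedged sketch of such a fit, on synthetic placeholder numbers rather than measurements from the paper:

```python
# Fit a power-law scaling curve to hypothetical (size, loss) pairs.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, alpha, c):
    return a * n ** (-alpha) + c

sizes = np.array([1e5, 1e6, 1e7, 1e8])      # placeholder model sizes
losses = np.array([0.90, 0.55, 0.38, 0.31])  # placeholder test losses

(a, alpha, c), _ = curve_fit(power_law, sizes, losses, p0=(10.0, 0.3, 0.2), maxfev=10000)
print(f"fitted exponent alpha = {alpha:.3f}, irreducible loss c = {c:.3f}")
```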

Position: Graph Foundation Models are Already Here

1 code implementation • 3 Feb 2024 • Haitao Mao, Zhikai Chen, Wenzhuo Tang, Jianan Zhao, Yao Ma, Tong Zhao, Neil Shah, Mikhail Galkin, Jiliang Tang

Graph Foundation Models (GFMs) are emerging as a significant research topic in the graph domain, aiming to develop graph models trained on extensive and diverse data to enhance their applicability across various tasks and domains.

Position

A Survey to Recent Progress Towards Understanding In-Context Learning

no code implementations • 3 Feb 2024 • Haitao Mao, Guangliang Liu, Yao Ma, Rongrong Wang, Kristen Johnson, Jiliang Tang

In-Context Learning (ICL) empowers Large Language Models (LLMs) with the ability to learn from a few examples provided in the prompt, enabling downstream generalization without the requirement for gradient updates.

In-Context Learning

LPFormer: An Adaptive Graph Transformer for Link Prediction

1 code implementation • 17 Oct 2023 • Harry Shomer, Yao Ma, Haitao Mao, Juanhui Li, Bo Wu, Jiliang Tang

These methods perform predictions by using the output of an MPNN in conjunction with a "pairwise encoding" that captures the relationship between nodes in the candidate link.

Inductive Bias Link Prediction +1
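
The scoring pattern described in the snippet can be sketched in a few lines of PyTorch: embeddings of the two endpoint nodes are concatenated with a pairwise encoding and scored by an MLP. The common-neighbor count used here as the pairwise signal is a stand-in; LPFormer learns its pairwise encoding adaptively.

```python
# Minimal link scorer: MLP over [h_u, h_v, pairwise(u, v)]. Illustrative only.
import torch
import torch.nn as nn

class PairwiseLinkScorer(nn.Module):
    def __init__(self, dim, pairwise_dim=1, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim + pairwise_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, h_u, h_v, pairwise):
        # h_u, h_v: (batch, dim) MPNN embeddings; pairwise: (batch, pairwise_dim)
        return self.mlp(torch.cat([h_u, h_v, pairwise], dim=-1)).squeeze(-1)

scorer = PairwiseLinkScorer(dim=32)
h_u, h_v = torch.randn(8, 32), torch.randn(8, 32)
cn_counts = torch.randint(0, 5, (8, 1)).float()  # placeholder pairwise encoding
logits = scorer(h_u, h_v, cn_counts)
```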

Label-free Node Classification on Graphs with Large Language Models (LLMs)

1 code implementation • 7 Oct 2023 • Zhikai Chen, Haitao Mao, Hongzhi Wen, Haoyu Han, Wei Jin, Haiyang Zhang, Hui Liu, Jiliang Tang

In light of these observations, this work introduces LLM-GNN, a pipeline for label-free node classification on graphs with LLMs.

Node Classification

Revisiting Link Prediction: A Data Perspective

1 code implementation • 1 Oct 2023 • Haitao Mao, Juanhui Li, Harry Shomer, Bingheng Li, Wenqi Fan, Yao Ma, Tong Zhao, Neil Shah, Jiliang Tang

We recognize three fundamental factors critical to link prediction: local structural proximity, global structural proximity, and feature proximity.

Link Prediction Prediction
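
To make the three factors concrete, here is an illustrative computation with textbook proxies: common neighbors for local structural proximity, inverse shortest-path distance for global structural proximity, and cosine similarity for feature proximity. The paper's precise definitions may differ.

```python
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
feats = np.random.rand(G.number_of_nodes(), 16)  # placeholder node features
u, v = 0, 33

local = len(list(nx.common_neighbors(G, u, v)))   # local structural proximity
global_ = 1.0 / nx.shortest_path_length(G, u, v)  # global structural proximity
cosine = feats[u] @ feats[v] / (np.linalg.norm(feats[u]) * np.linalg.norm(feats[v]))

print(f"common neighbors = {local}, inverse distance = {global_:.2f}, cosine = {cosine:.2f}")
```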

Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs

2 code implementations • 7 Jul 2023 • Zhikai Chen, Haitao Mao, Hang Li, Wei Jin, Hongzhi Wen, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Wenqi Fan, Hui Liu, Jiliang Tang

The most popular pipeline for learning on graphs with textual node attributes primarily relies on Graph Neural Networks (GNNs) and utilizes shallow text embeddings as initial node representations, an approach that lacks general knowledge and deep semantic understanding.

General Knowledge Node Classification
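
The "shallow text embedding" baseline described in the snippet can be sketched simply: bag-of-words or TF-IDF vectors of the node texts become the GNN's initial feature matrix. The texts and dimensions below are placeholders.

```python
# TF-IDF node features for a text-attributed graph; X would be handed to a
# GNN as its input feature matrix. Illustrative sketch only.
from sklearn.feature_extraction.text import TfidfVectorizer

node_texts = [
    "graph neural networks survey",
    "link prediction heuristics",
    "text-attributed graph benchmark",
]
X = TfidfVectorizer(max_features=64).fit_transform(node_texts).toarray()
print(X.shape)  # (num_nodes, feature_dim)
```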

Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All?

1 code implementation • NeurIPS 2023 • Haitao Mao, Zhikai Chen, Wei Jin, Haoyu Han, Yao Ma, Tong Zhao, Neil Shah, Jiliang Tang

Recent studies on Graph Neural Networks (GNNs) provide both empirical and theoretical evidence supporting their effectiveness in capturing structural patterns on both homophilic and certain heterophilic graphs.

Node Classification
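
A standard way to quantify the homophilic/heterophilic distinction invoked above is the edge-homophily ratio, the fraction of edges whose endpoints share a class label. A small sketch on a toy graph (the metric is standard; the paper may use a different variant):

```python
import networkx as nx

def edge_homophily(G, labels):
    same = sum(1 for u, v in G.edges() if labels[u] == labels[v])
    return same / G.number_of_edges()

G = nx.karate_club_graph()
labels = {n: G.nodes[n]["club"] for n in G}  # two-faction ground truth
print(f"edge homophily = {edge_homophily(G, labels):.2f}")
```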

Company Competition Graph

no code implementations • 1 Apr 2023 • Yanci Zhang, Yutong Lu, Haitao Mao, Jiawei Huang, Cien Zhang, Xinyi Li, Rui Dai

Based on the output from our system, we construct a knowledge graph with more than 700 nodes and 1200 edges.

Knowledge Graphs

Form 10-K Itemization

no code implementations • 18 Feb 2023 • Yanci Zhang, Mengjia Xia, Mingyang Li, Haitao Mao, Yutong Lu, Yupeng Lan, Jinlin Ye, Rui Dai

With the segmented Item sections, NLP techniques can be directly applied to the Item sections relevant to downstream tasks.

Financial Analysis Form +1

Whole Page Unbiased Learning to Rank

no code implementations • 19 Oct 2022 • Haitao Mao, Lixin Zou, Yujia Zheng, Jiliang Tang, Xiaokai Chu, Jiashu Zhao, Qian Wang, Dawei Yin

To address the above challenges, we propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, which automatically discovers the user behavior model via causal discovery and mitigates the biases induced by multiple SERP features without any bias-specific design.

Causal Discovery Information Retrieval +2

A Large Scale Search Dataset for Unbiased Learning to Rank

1 code implementation • 7 Jul 2022 • Lixin Zou, Haitao Mao, Xiaokai Chu, Jiliang Tang, Wenwen Ye, Shuaiqiang Wang, Dawei Yin

The unbiased learning to rank (ULTR) problem has been greatly advanced by recent deep learning techniques and well-designed debias algorithms.

Causal Discovery Language Modelling +3

Alternately Optimized Graph Neural Networks

no code implementations • 8 Jun 2022 • Haoyu Han, Xiaorui Liu, Haitao Mao, MohamadAli Torkamani, Feng Shi, Victor Lee, Jiliang Tang

Extensive experiments demonstrate that the proposed method can achieve comparable or better performance with state-of-the-art baselines while it has significantly better computation and memory efficiency.

Multi-view Learning Node Classification

Source Free Unsupervised Graph Domain Adaptation

1 code implementation • 2 Dec 2021 • Haitao Mao, Lun Du, Yujia Zheng, Qiang Fu, Zelin Li, Xu Chen, Shi Han, Dongmei Zhang

To address the non-trivial adaptation challenges in this practical scenario, we propose a model-agnostic algorithm called SOGA for domain adaptation to fully exploit the discriminative ability of the source model while preserving the consistency of structural proximity on the target graph.

Domain Adaptation Graph Domain Adaptation +1

Neuron with Steady Response Leads to Better Generalization

no code implementations • 30 Nov 2021 • Qiang Fu, Lun Du, Haitao Mao, Xu Chen, Wei Fang, Shi Han, Dongmei Zhang

Based on the analysis results, we articulate the Neuron Steadiness Hypothesis: the neuron with similar responses to instances of the same class leads to better generalization.

Inductive Bias
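
One illustrative reading of the hypothesis is a regularizer that penalizes the variance of each neuron's response across instances of the same class. The sketch below follows that reading; it is not necessarily the paper's exact objective.

```python
# Intra-class neuron-response variance penalty, to be added to the task loss
# with a small weight. Illustrative sketch only.
import torch

def steadiness_penalty(activations, labels):
    # activations: (batch, num_neurons); labels: (batch,)
    penalty = activations.new_zeros(())
    for cls in labels.unique():
        group = activations[labels == cls]
        if group.shape[0] > 1:
            # Per-neuron variance within the class, summed over neurons;
            # singleton classes contribute nothing.
            penalty = penalty + group.var(dim=0).sum()
    return penalty

acts = torch.randn(32, 10)
labels = torch.randint(0, 3, (32,))
reg = steadiness_penalty(acts, labels)
```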

Neuron Campaign for Initialization Guided by Information Bottleneck Theory

1 code implementation • 14 Aug 2021 • Haitao Mao, Xu Chen, Qiang Fu, Lun Du, Shi Han, Dongmei Zhang

Initialization plays a critical role in the training of deep neural networks (DNNs).
