Search Results for author: Shichang Zhang

Found 15 papers, 7 papers with code

Generalized Group Data Attribution

no code implementations • 13 Oct 2024 • Dan Ley, Suraj Srinivas, Shichang Zhang, Gili Rusak, Himabindu Lakkaraju

Data Attribution (DA) methods quantify the influence of individual training data points on model outputs and have broad applications such as explainability, data selection, and noisy label identification.
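As a deliberately simplified illustration of what "influence of individual training data points on model outputs" means, the sketch below scores each point by leave-one-out retraining of an ordinary least-squares model. This is a textbook toy baseline, not the Generalized Group Data Attribution method of the paper, and all names are illustrative.

```python
import numpy as np

def loo_attribution(X, y, x_test):
    """Leave-one-out data attribution for ordinary least squares:
    the influence of training point i is the change in the test
    prediction when point i is removed and the model is refit."""
    w_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    full_pred = x_test @ w_full
    scores = np.empty(len(X))
    for i in range(len(X)):
        mask = np.arange(len(X)) != i          # drop point i
        w_i, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        scores[i] = full_pred - x_test @ w_i   # positive: point i raised the prediction
    return scores
```

Exact retraining is what makes this approach too expensive at scale; practical DA methods approximate it, which is exactly the efficiency concern these papers address.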

Computational Efficiency

Hierarchical Compression of Text-Rich Graphs via Large Language Models

no code implementations • 13 Jun 2024 • Shichang Zhang, Da Zheng, Jiani Zhang, Qi Zhu, Xiang Song, Soji Adeshina, Christos Faloutsos, George Karypis, Yizhou Sun

Large Language Models (LLMs), noted for their superior text understanding abilities, offer a solution for processing the text in graphs, but they face integration challenges: they are limited in encoding graph structures, and their computational cost grows quickly when handling the extensive text in large neighborhoods of interconnected nodes.

Node Classification

Self-Control of LLM Behaviors by Compressing Suffix Gradient into Prefix Controller

1 code implementation • 4 Jun 2024 • Min Cai, Yuchen Zhang, Shichang Zhang, Fan Yin, Dan Zhang, Difan Zou, Yisong Yue, Ziniu Hu

Given a desired behavior expressed in a natural language suffix string concatenated to the input prompt, SelfControl computes gradients of the LLM's self-evaluation of the suffix with respect to its latent representations.
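A toy numeric sketch of the core operation described above: differentiating a scalar suffix self-evaluation with respect to a latent representation. Here the self-evaluation is stood in for by a sigmoid readout sigmoid(v·h), purely an assumption for illustration; the actual SelfControl method backpropagates through the LLM itself.

```python
import numpy as np

def suffix_gradient(h, v):
    """Toy stand-in for the SelfControl idea: model the LLM's
    self-evaluation of a behavior suffix as a scalar score
    s(h) = sigmoid(v . h) of a latent representation h, and return
    the gradient of s with respect to h. Ascending this gradient
    nudges the representation toward the desired behavior.
    (v plays the role of the model's readout of the suffix.)"""
    s = 1.0 / (1.0 + np.exp(-(v @ h)))
    return s * (1.0 - s) * v  # d sigmoid(v.h) / dh by the chain rule
```

Compressing many such gradient steps into a reusable prefix is what the paper's "Prefix Controller" refers to; this sketch covers only the gradient itself.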

Efficient Ensembles Improve Training Data Attribution

no code implementations • 27 May 2024 • Junwei Deng, Ting-Wei Li, Shichang Zhang, Jiaqi Ma

Training data attribution (TDA) methods aim to quantify the influence of individual training data points on the model predictions, with broad applications in data-centric AI, such as mislabel detection, data selection, and copyright compensation.

Attribute · Computational Efficiency

Parameter-Efficient Tuning Large Language Models for Graph Representation Learning

no code implementations • 28 Apr 2024 • Qi Zhu, Da Zheng, Xiang Song, Shichang Zhang, Bowen Jin, Yizhou Sun, George Karypis

Inspired by this, we introduce Graph-aware Parameter-Efficient Fine-Tuning - GPEFT, a novel approach for efficient graph representation learning with LLMs on text-rich graphs.

Graph Neural Network · Graph Representation Learning +2

Predicting and Interpreting Energy Barriers of Metallic Glasses with Graph Neural Networks

1 code implementation • 8 Dec 2023 • Haoyu Li, Shichang Zhang, Longwen Tang, Mathieu Bauchy, Yizhou Sun

Metallic Glasses (MGs) are widely used materials that are stronger than steel while being shapeable as plastic.

Representation Learning

SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models

1 code implementation • 20 Jul 2023 • Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang

Most of the existing Large Language Model (LLM) benchmarks on scientific problem reasoning focus on problems grounded in high-school subjects and are confined to elementary algebraic operations.

Benchmarking · Language Modelling +2

Linkless Link Prediction via Relational Distillation

no code implementations • 11 Oct 2022 • Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh V. Chawla, Neil Shah, Tong Zhao

In this work, to combine the advantages of GNNs and MLPs, we start by exploring direct knowledge distillation (KD) methods for link prediction, i.e., predicted-logit-based matching and node-representation-based matching.
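The two KD objectives named above can be sketched as simple losses over a teacher GNN and a student MLP. The exact pairing, weighting, and distance functions used in the paper may differ, so treat these as illustrative approximations.

```python
import numpy as np

def logit_matching_loss(student_logits, teacher_logits):
    """Predicted-logit-based matching: the student mimics the
    teacher's link logits directly (mean squared error)."""
    return np.mean((student_logits - teacher_logits) ** 2)

def representation_matching_loss(student_h, teacher_h):
    """Node-representation-based matching: align the student's node
    embeddings with the teacher's via cosine distance, averaged
    over nodes (rows)."""
    s = student_h / np.linalg.norm(student_h, axis=1, keepdims=True)
    t = teacher_h / np.linalg.norm(teacher_h, axis=1, keepdims=True)
    return np.mean(1.0 - np.sum(s * t, axis=1))
```

Note the cosine form makes representation matching scale-invariant: a student whose embeddings are a positive rescaling of the teacher's incurs zero loss.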

Knowledge Distillation · Link Prediction +1

GStarX: Explaining Graph Neural Networks with Structure-Aware Cooperative Games

1 code implementation • 28 Jan 2022 • Shichang Zhang, Yozen Liu, Neil Shah, Yizhou Sun

Explaining machine learning models is an important and increasingly popular area of research interest.

Attribute · Feature Importance +4

Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation

1 code implementation • ICLR 2022 • Shichang Zhang, Yozen Liu, Yizhou Sun, Neil Shah

Conversely, multi-layer perceptrons (MLPs) have no graph dependency and infer much faster than GNNs, even though they are less accurate than GNNs for node classification in general.
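A minimal sketch of the kind of distillation objective that lets a graph-free MLP learn from a GNN teacher: cross-entropy on ground-truth labels mixed with KL divergence to the teacher's soft predictions. The mixing weight `lam` and this exact form are assumptions for illustration, not the paper's precise loss.

```python
import numpy as np

def glnn_style_loss(student_logits, teacher_probs, labels, lam=0.5):
    """Distillation objective sketch: lam * CE(student, labels)
    + (1 - lam) * KL(teacher || student), averaged over nodes.
    At inference the student MLP needs no graph at all."""
    # row-wise softmax of the student logits (max-shifted for stability)
    z = student_logits - student_logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # cross-entropy against hard labels
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    # KL divergence from the teacher's soft predictions
    kl = np.mean(np.sum(
        teacher_probs * (np.log(teacher_probs + 1e-12) - np.log(p + 1e-12)),
        axis=1))
    return lam * ce + (1.0 - lam) * kl
```

The soft-label term is what transfers the teacher's graph-derived knowledge: the MLP never sees edges, only the teacher's per-node predictions.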

Knowledge Distillation · Node Classification +2

Graph Condensation for Graph Neural Networks

2 code implementations • ICLR 2022 • Wei Jin, Lingxiao Zhao, Shichang Zhang, Yozen Liu, Jiliang Tang, Neil Shah

Given the prevalence of large-scale graphs in real-world applications, the storage and time for training neural models have raised increasing concerns.

Motif-Driven Contrastive Learning of Graph Representations

no code implementations • 23 Dec 2020 • Shichang Zhang, Ziniu Hu, Arjun Subramonian, Yizhou Sun

Our framework, MotIf-driven Contrastive leaRning Of Graph representations (MICRO-Graph), can: 1) use GNNs to extract motifs from large graph datasets; and 2) leverage the learned motifs to sample informative subgraphs for contrastive learning of GNNs.
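The contrastive step over sampled subgraphs can be illustrated with a generic InfoNCE objective on embedding pairs: each anchor should score its own positive higher than every other positive in the batch. The actual MICRO-Graph loss and its motif-guided sampling are more involved; the names and temperature below are placeholders.

```python
import numpy as np

def infonce_loss(anchors, positives, temperature=0.2):
    """Generic InfoNCE contrastive objective: row i of `anchors`
    (e.g. a subgraph embedding) is pulled toward row i of
    `positives` (another view of the same subgraph) and pushed
    away from all other rows in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature          # cosine similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # stabilize log-sum-exp
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))        # maximize matched-pair probability
```

Motif-driven sampling matters because InfoNCE is only as good as its views: sampling subgraphs around meaningful motifs yields positives that share real structure rather than arbitrary node sets.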

Clustering · Contrastive Learning +1
