Search Results for author: Shenghua Liu

Found 14 papers, 5 papers with code

Is Factuality Decoding a Free Lunch for LLMs? Evaluation on Knowledge Editing Benchmark

no code implementations · 30 Mar 2024 · Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Xueqi Cheng

The rapid development of large language models (LLMs) enables them to convey factual knowledge in a more human-like fashion.

Knowledge Editing

Graph Descriptive Order Improves Reasoning with Large Language Model

no code implementations · 11 Feb 2024 · Yuyao Ge, Shenghua Liu, Wenjie Feng, Lingrui Mei, Lizhe Chen, Xueqi Cheng

In this work, we reveal that the order in which a graph is described significantly affects LLMs' graph reasoning performance.

Descriptive · Language Modelling +1

LPNL: Scalable Link Prediction with Large Language Models

no code implementations · 24 Jan 2024 · Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Xueqi Cheng

This work focuses on the link prediction task and introduces LPNL (Link Prediction via Natural Language), a framework based on large language models designed for scalable link prediction on large-scale heterogeneous graphs.

Graph Learning · Language Modelling +3

SLANG: New Concept Comprehension of Large Language Models

1 code implementation · 23 Jan 2024 · Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Xueqi Cheng

The dynamic nature of language, particularly evident in the realm of slang and memes on the Internet, poses serious challenges to the adaptability of large language models (LLMs).

Causal Inference

Learning node embeddings via summary graphs: a brief theoretical analysis

no code implementations · 4 Jul 2022 · Houquan Zhou, Shenghua Liu, Danai Koutra, HuaWei Shen, Xueqi Cheng

Recent works try to improve scalability via graph summarization, i.e., they learn embeddings on a smaller summary graph and then restore the node embeddings of the original graph.

Graph Mining · Graph Representation Learning
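The two-step idea in the snippet above (embed a small summary graph, then restore embeddings for the original nodes) can be sketched in a few lines. This is a hedged toy illustration, not the paper's method: the grouping, the spectral embedding, and the copy-based restore step are all simplifying assumptions.

```python
import numpy as np

def summarize(adj: np.ndarray, groups: list) -> np.ndarray:
    """Collapse node groups into supernodes by summing edge weights."""
    k = len(groups)
    summary = np.zeros((k, k))
    for a, ga in enumerate(groups):
        for b, gb in enumerate(groups):
            summary[a, b] = adj[np.ix_(ga, gb)].sum()
    return summary

def embed(summary: np.ndarray, dim: int) -> np.ndarray:
    """Spectral-style embedding: top eigenvectors of the summary graph."""
    _, vecs = np.linalg.eigh(summary)
    return vecs[:, -dim:]          # k x dim supernode embeddings

def restore(sup_emb: np.ndarray, groups, n: int) -> np.ndarray:
    """Crude restore step: each node inherits its supernode's embedding."""
    emb = np.zeros((n, sup_emb.shape[1]))
    for a, ga in enumerate(groups):
        emb[ga] = sup_emb[a]
    return emb

# 4-node graph summarized into two supernodes of 2 nodes each
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
groups = [[0, 1], [2, 3]]
node_emb = restore(embed(summarize(adj, groups), dim=1), groups, n=4)
print(node_emb.shape)  # (4, 1)
```

The payoff is that the eigendecomposition runs on the k x k summary rather than the n x n original; the restore step is linear in n.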

AugSplicing: Synchronized Behavior Detection in Streaming Tensors

1 code implementation · 3 Dec 2020 · Jiabao Zhang, Shenghua Liu, Wenting Hou, Siddharth Bhatia, HuaWei Shen, Wenjian Yu, Xueqi Cheng

Therefore, we propose a fast streaming algorithm, AugSplicing, which detects the top dense blocks by incrementally splicing the previous detection with the incoming tuples, avoiding re-runs over all the historical data at every tracking time step.

Attribute

Summarizing graphs using configuration model

no code implementations · 19 Oct 2020 · Houquan Zhou, Shenghua Liu, Kyuhan Lee, Kijung Shin, HuaWei Shen, Xueqi Cheng

As a solution, graph summarization, which aims to find a compact representation that preserves the important properties of a given graph, has received much attention, and numerous algorithms have been developed for it.

Social and Information Networks

Efficient Model-Based Collaborative Filtering with Fast Adaptive PCA

1 code implementation · 4 Sep 2020 · Xiangyun Ding, Wenjian Yu, Yuyang Xie, Shenghua Liu

The proposed model-based CF approach is able to efficiently process the MovieLens data with 20M ratings and exhibits more than 10X speedup over the regularized matrix factorization based approach [2] and the fast singular value thresholding approach [3] with comparable or better accuracy.

Collaborative Filtering · Matrix Completion +1

HoloScope: Topology-and-Spike Aware Fraud Detection

1 code implementation · 6 May 2017 · Shenghua Liu, Bryan Hooi, Christos Faloutsos

Hence, we propose HoloScope, which uses information from graph topology and temporal spikes to more accurately detect groups of fraudulent users.

Social and Information Networks

Cascade Dynamics Modeling with Attention-based Recurrent Neural Network

1 code implementation · 1 May 2017 · Yongqing Wang, HuaWei Shen, Shenghua Liu, Jinhua Gao, Xueqi Cheng

However, for cascade prediction, each cascade generally corresponds to a diffusion tree, causing cross-dependence within a cascade: one sharing behavior could be triggered by a non-immediate predecessor in the memory chain.

Marketing

Single-Pass PCA of Large High-Dimensional Data

no code implementations · 25 Apr 2017 · Wenjian Yu, Yu Gu, Jian Li, Shenghua Liu, Yaohang Li

Principal component analysis (PCA) is a fundamental dimension reduction tool in statistics and machine learning.

Dimensionality Reduction · Vocal Bursts Intensity Prediction
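A standard way to compute PCA while reading the data matrix only once is the single-pass randomized sketch: accumulate two random projections of the matrix in one pass, then recover the leading components from the sketches alone. The sketch below illustrates that generic technique; it is a hedged approximation, not the specific algorithm of the paper, and the oversampling parameter and sketch sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_pass_pca(A: np.ndarray, rank: int, oversample: int = 10):
    """Single-pass randomized PCA sketch (generic technique)."""
    m, n = A.shape
    l = rank + oversample
    Omega = rng.standard_normal((n, l))   # column-space test matrix
    Psi = rng.standard_normal((m, l))     # row-space test matrix
    # Both sketches come from one pass over A (done in-memory here;
    # in a streaming setting A would be read block by block).
    Y = A @ Omega                          # m x l, captures column space
    Z = A.T @ Psi                          # n x l, captures row space
    Q, _ = np.linalg.qr(Y)
    # Recover B with A ~= Q @ B from the sketches: Psi^T A ~= (Psi^T Q) B
    B = np.linalg.lstsq(Psi.T @ Q, Z.T, rcond=None)[0]
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    return s[:rank], Vt[:rank]             # leading singular values / components

# Exactly rank-8 test matrix: the sketch should recover its spectrum well
A = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 50))
s, Vt = single_pass_pca(A, rank=8)
exact = np.linalg.svd(A, compute_uv=False)[:8]
print(np.allclose(s, exact, rtol=1e-2))
```

Because the data is touched exactly once, this style of algorithm suits matrices too large for memory, which is the setting the title refers to.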

Marked Temporal Dynamics Modeling based on Recurrent Neural Network

no code implementations · 14 Jan 2017 · Yongqing Wang, Shenghua Liu, Hua-Wei Shen, Xue-Qi Cheng

Indeed, in marked temporal dynamics, the time and the mark of the next event are highly dependent on each other, requiring a method that could simultaneously predict both of them.
