Search Results for author: Jiezhong Qiu

Found 19 papers, 11 papers with code

Spatio-Temporal Contrastive Learning Enhanced GNNs for Session-based Recommendation

no code implementations23 Sep 2022 Zhongwei Wan, Benyou Wang, Xin Liu, Jiezhong Qiu, Boyu Li, Ting Guo, Guangyong Chen, Yang Wang

The idea is to supplement the main GNN-based supervised recommendation task with a temporal representation via an auxiliary cross-view contrastive learning mechanism.

Collaborative Filtering Contrastive Learning +1
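The abstract mentions an auxiliary cross-view contrastive mechanism but gives no details; a minimal sketch of a generic InfoNCE-style contrastive loss (the standard building block for such objectives, not the paper's exact formulation) could look like this, with all names and shapes illustrative:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE contrastive loss for one anchor embedding.

    anchor, positive: 1-D vectors (e.g. the two views of the same
    session); negatives: 2-D array of other sessions' embeddings.
    All vectors are L2-normalized before computing similarities.
    """
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    a, p, n = norm(anchor), norm(positive), norm(negatives)
    pos_sim = np.dot(a, p) / temperature
    neg_sim = n @ a / temperature
    logits = np.concatenate([[pos_sim], neg_sim])
    # Cross-entropy with the positive pair as the "correct class".
    return -pos_sim + np.log(np.exp(logits).sum())

rng = np.random.default_rng(0)
anchor = rng.normal(size=16)
# Perfectly aligned views give a much smaller loss than random pairs.
loss_aligned = info_nce_loss(anchor, anchor, rng.normal(size=(8, 16)))
loss_random = info_nce_loss(anchor, rng.normal(size=16),
                            rng.normal(size=(8, 16)))
```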

Mask and Reason: Pre-Training Knowledge Graph Transformers for Complex Logical Queries

1 code implementation16 Aug 2022 Xiao Liu, Shiyu Zhao, Kai Su, Yukuo Cen, Jiezhong Qiu, Mengdi Zhang, Wei Wu, Yuxiao Dong, Jie Tang

In this work, we present the Knowledge Graph Transformer (kgTransformer) with masked pre-training and fine-tuning strategies.

Stable Prediction on Graphs with Agnostic Distribution Shift

no code implementations8 Oct 2021 Shengyu Zhang, Kun Kuang, Jiezhong Qiu, Jin Yu, Zhou Zhao, Hongxia Yang, Zhongfei Zhang, Fei Wu

The results demonstrate that our method outperforms various SOTA GNNs for stable prediction on graphs with agnostic distribution shift, including shift caused by node labels and attributes.

Graph Learning Recommendation Systems

Fast Extraction of Word Embedding from Q-contexts

no code implementations15 Sep 2021 Junsheng Kong, Weizhao Li, Zeyi Liu, Ben Liao, Jiezhong Qiu, Chang-Yu Hsieh, Yi Cai, Shengyu Zhang

In this work, we show that with merely a small fraction of contexts (Q-contexts), which are typical in the whole corpus (and their mutual information with words), one can construct high-quality word embeddings with negligible errors.
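The paper's Q-context selection is not described in this snippet, but the mutual-information-to-embedding route it builds on is the classic PMI-matrix-plus-SVD construction, which can be sketched on a toy word-context count table (all counts below are made up for illustration):

```python
import numpy as np

# Toy co-occurrence counts between words and a handful of contexts
# (in the paper these would be the selected Q-contexts).
words = ["king", "queen", "apple", "pear"]
counts = np.array([
    [8., 1., 0., 1.],   # king
    [7., 2., 0., 1.],   # queen
    [0., 1., 9., 6.],   # apple
    [1., 0., 8., 7.],   # pear
])

def pmi_svd_embeddings(counts, dim=2, eps=1e-12):
    """Embed words by factorizing a positive-PMI matrix with SVD."""
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total
    p_c = counts.sum(axis=0, keepdims=True) / total
    pmi = np.log((counts / total + eps) / (p_w * p_c))
    ppmi = np.maximum(pmi, 0.0)           # clip negative PMI to zero
    u, s, _ = np.linalg.svd(ppmi, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])  # rank-`dim` embedding

emb = pmi_svd_embeddings(counts)

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

Words sharing contexts ("king"/"queen") end up closer in the embedding space than words that do not ("king"/"apple").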

Modeling Protein Using Large-scale Pretrain Language Model

2 code implementations17 Aug 2021 Yijia Xiao, Jiezhong Qiu, Ziang Li, Chang-Yu Hsieh, Jie Tang

The emergence of deep learning models makes it possible to model patterns in large quantities of data.

Drug Discovery Language Modelling

FastMoE: A Fast Mixture-of-Expert Training System

2 code implementations24 Mar 2021 Jiaao He, Jiezhong Qiu, Aohan Zeng, Zhilin Yang, Jidong Zhai, Jie Tang

However, training trillion-scale MoE requires algorithm and system co-design for a well-tuned high performance distributed training system.

Language Modelling
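FastMoE itself is a distributed PyTorch system; as a much smaller illustration of the core idea it accelerates, here is a top-1 gated mixture-of-experts layer sketched in plain numpy (random untrained weights, no load balancing, single worker):

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMoE:
    """Minimal top-1 gated mixture-of-experts layer (numpy sketch).

    Each token is routed to the expert with the highest gate score,
    so only one expert's weights run per token. A real system such
    as FastMoE adds learned gates, load balancing, and distributes
    the experts across GPUs/workers.
    """
    def __init__(self, d_model=8, n_experts=4):
        self.gate = rng.normal(size=(d_model, n_experts))
        self.experts = [rng.normal(size=(d_model, d_model))
                        for _ in range(n_experts)]

    def forward(self, x):                      # x: (tokens, d_model)
        scores = x @ self.gate                 # gate logits per expert
        choice = scores.argmax(axis=1)         # top-1 routing decision
        out = np.empty_like(x)
        for e, w in enumerate(self.experts):   # dispatch token groups
            mask = choice == e
            out[mask] = x[mask] @ w            # only chosen expert runs
        return out, choice

moe = TinyMoE()
x = rng.normal(size=(16, 8))
y, choice = moe.forward(x)
```

Grouping tokens by expert before the matmul is exactly the dispatch/combine pattern that the distributed system has to make fast.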

GLM: General Language Model Pretraining with Autoregressive Blank Infilling

2 code implementations ACL 2022 Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang

On a wide range of tasks across NLU, conditional and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25x parameters of BERT-Large, demonstrating its generalizability to different downstream tasks.

Ranked #2 on Language Modelling on WikiText-103 (using extra training data)

Abstractive Text Summarization Classification +4
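GLM's autoregressive blank infilling corrupts the input by masking spans and trains the model to regenerate them; a toy sketch of that preprocessing (the paper's actual span-length and span-count sampling is more elaborate) is:

```python
import random

def blank_infilling_example(tokens, span_len=2, seed=0):
    """Turn a token sequence into a GLM-style blank-infilling pair.

    Part A is the corrupted input with one span replaced by [MASK];
    Part B is the masked span, to be generated autoregressively,
    prefixed by [START] and terminated by [END].
    """
    rng = random.Random(seed)
    start = rng.randrange(len(tokens) - span_len + 1)
    span = tokens[start:start + span_len]
    part_a = tokens[:start] + ["[MASK]"] + tokens[start + span_len:]
    part_b = ["[START]"] + span + ["[END]"]
    return part_a, part_b

part_a, part_b = blank_infilling_example(
    ["GLM", "unifies", "NLU", "and", "generation", "tasks"])
```

Splicing Part B's span back over the `[MASK]` token reconstructs the original sequence, which is what lets one objective cover both understanding (short blanks) and generation (long blanks).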

Local Clustering Graph Neural Networks

no code implementations1 Jan 2021 Jiezhong Qiu, Yukuo Cen, Qibin Chen, Chang Zhou, Jingren Zhou, Hongxia Yang, Jie Tang

Based on the theoretical analysis, we propose Local Clustering Graph Neural Networks (LCGNN), a GNN learning paradigm that utilizes local clustering to efficiently search for small but compact subgraphs for GNN training and inference.

A Matrix Chernoff Bound for Markov Chains and Its Application to Co-occurrence Matrices

no code implementations NeurIPS 2020 Jiezhong Qiu, Chi Wang, Ben Liao, Richard Peng, Jie Tang

Our result gives the first bound on the convergence rate of the co-occurrence matrix and the first sample complexity analysis in graph representation learning.

Graph Learning Graph Representation Learning
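The object the bound concerns, an empirical co-occurrence matrix built from a single Markov-chain trajectory, is easy to construct; a small numpy sketch (the chain and parameters below are illustrative, not from the paper):

```python
import numpy as np

def cooccurrence_from_walk(P, length, window=2, seed=0):
    """Estimate a co-occurrence matrix from one Markov-chain trajectory.

    P is a row-stochastic transition matrix; pairs of states within
    `window` steps of each other along the trajectory are counted.
    The matrix Chernoff bound controls how fast this empirical
    matrix concentrates around its expectation as `length` grows.
    """
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    walk = [0]
    for _ in range(length - 1):
        walk.append(rng.choice(n, p=P[walk[-1]]))
    C = np.zeros((n, n))
    for i, u in enumerate(walk):
        for v in walk[i + 1:i + 1 + window]:
            C[u, v] += 1          # count both orders so C is symmetric
            C[v, u] += 1
    return C / C.sum()            # normalize to a distribution

# Two-state chain; longer walks give increasingly stable estimates.
P = np.array([[0.3, 0.7],
              [0.6, 0.4]])
C = cooccurrence_from_walk(P, length=5000)
```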

NetSMF: Large-Scale Network Embedding as Sparse Matrix Factorization

1 code implementation26 Jun 2019 Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Chi Wang, Kuansan Wang, Jie Tang

Previous research shows that 1) popular network embedding benchmarks, such as DeepWalk, are in essence implicitly factorizing a matrix with a closed form, and 2) the explicit factorization of such matrix generates more powerful embeddings than existing methods.

Network Embedding
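The closed-form matrix referenced above comes from the companion NetMF paper ("Network Embedding as Matrix Factorization", listed below): DeepWalk with window size T and b negative samples implicitly factorizes log(vol(G)/(bT) * (sum_{r=1..T} (D^-1 A)^r) D^-1). A dense toy-scale sketch (NetSMF's contribution is making this sparse and scalable, which this sketch does not attempt):

```python
import numpy as np

def deepwalk_matrix(A, T=3, b=1.0):
    """Closed-form matrix that DeepWalk implicitly factorizes:
        M = log( vol(G)/(b*T) * (sum_{r=1..T} (D^-1 A)^r) D^-1 )
    with T the window size and b the number of negative samples.
    """
    vol = A.sum()
    d = A.sum(axis=1)
    P = np.diag(1.0 / d) @ A           # random-walk transition matrix
    S = sum(np.linalg.matrix_power(P, r) for r in range(1, T + 1))
    M = (vol / (b * T)) * S @ np.diag(1.0 / d)
    return np.log(np.maximum(M, 1.0))  # truncated log, NetMF-style

def embed(M, dim):
    u, s, _ = np.linalg.svd(M)
    return u[:, :dim] * np.sqrt(s[:dim])

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by edge (2,3).
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0

M = deepwalk_matrix(A)
emb = embed(M, dim=2)
```

On this toy graph the truncated log zeroes out the weak cross-triangle entries, so the factorized embedding separates the two communities.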

Alchemy: A Quantum Chemistry Dataset for Benchmarking AI Models

1 code implementation22 Jun 2019 Guangyong Chen, Pengfei Chen, Chang-Yu Hsieh, Chee-Kong Lee, Benben Liao, Renjie Liao, Weiwen Liu, Jiezhong Qiu, Qiming Sun, Jie Tang, Richard Zemel, Shengyu Zhang

We introduce a new molecular dataset, named Alchemy, for developing machine learning models useful in chemistry and material science.

BIG-bench Machine Learning

DeepInf: Social Influence Prediction with Deep Learning

1 code implementation15 Jul 2018 Jiezhong Qiu, Jian Tang, Hao Ma, Yuxiao Dong, Kuansan Wang, Jie Tang

Inspired by the recent success of deep neural networks in a wide range of computing applications, we design an end-to-end framework, DeepInf, to learn users' latent feature representation for predicting social influence.

Feature Engineering Representation Learning

Revisiting Knowledge Base Embedding as Tensor Decomposition

no code implementations ICLR 2018 Jiezhong Qiu, Hao Ma, Yuxiao Dong, Kuansan Wang, Jie Tang

We study the problem of knowledge base (KB) embedding, which is usually addressed through two frameworks: neural KB embedding and tensor decomposition.

Link Prediction Tensor Decomposition

Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec

4 code implementations9 Oct 2017 Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, Jie Tang

This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.

Network Embedding
