Search Results for author: Guolin Ke

Found 24 papers, 13 papers with code

Benchmarking Graphormer on Large-Scale Molecular Modeling Datasets

1 code implementation · 9 Mar 2022 · Yu Shi, Shuxin Zheng, Guolin Ke, Yifei Shen, Jiacheng You, Jiyan He, Shengjie Luo, Chang Liu, Di He, Tie-Yan Liu

This technical note describes the recent updates to Graphormer, including architecture design modifications and the adaptation to 3D molecular dynamics simulation.

An Empirical Study of Graphormer on Large-Scale Molecular Modeling Datasets

no code implementations · 28 Feb 2022 · Yu Shi, Shuxin Zheng, Guolin Ke, Yifei Shen, Jiacheng You, Jiyan He, Shengjie Luo, Chang Liu, Di He, Tie-Yan Liu

This technical note describes the recent updates to Graphormer, including architecture design modifications and the adaptation to 3D molecular dynamics simulation.

Do Transformers Really Perform Badly for Graph Representation?

no code implementations · NeurIPS 2021 · Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu

Our key insight for utilizing the Transformer on graphs is the necessity of effectively encoding the structural information of a graph into the model.

Graph Representation Learning
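
A minimal sketch of the structural-encoding idea above, in the spirit of Graphormer's spatial encoding: a learnable bias indexed by shortest-path distance is added to the attention logits so that plain self-attention becomes structure-aware. The module name, shapes, and single-head simplification are illustrative assumptions, not the authors' implementation (which also adds centrality and edge encodings).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuralSelfAttention(nn.Module):
    """Toy single-head self-attention with a Graphormer-style spatial bias."""

    def __init__(self, dim, max_dist=32):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # One learnable scalar bias per shortest-path-distance bucket.
        self.spatial_bias = nn.Embedding(max_dist + 1, 1)

    def forward(self, x, spd):
        # x:   [n_nodes, dim]      node features
        # spd: [n_nodes, n_nodes]  shortest-path distances, clamped to max_dist (LongTensor)
        scores = self.q(x) @ self.k(x).t() / x.size(-1) ** 0.5
        scores = scores + self.spatial_bias(spd).squeeze(-1)  # inject graph structure
        return F.softmax(scores, dim=-1) @ self.v(x)
```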

Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding

no code implementations · NeurIPS 2021 · Shengjie Luo, Shanda Li, Tianle Cai, Di He, Dinglan Peng, Shuxin Zheng, Guolin Ke, LiWei Wang, Tie-Yan Liu

Since relative positional encoding (RPE) is used by default in many state-of-the-art models, designing efficient Transformers that can incorporate RPE is appealing.

Deep Subdomain Adaptation Network for Image Classification

1 code implementation · 17 Jun 2021 · Yongchun Zhu, Fuzhen Zhuang, Jindong Wang, Guolin Ke, Jingwu Chen, Jiang Bian, Hui Xiong, Qing He

The adaptation can be achieved easily with most feed-forward network models by extending them with LMMD loss, which can be trained efficiently via back-propagation.

Classification · Domain Adaptation +4
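
A hedged sketch of the extension mentioned in this entry: a feed-forward classifier trained with its usual cross-entropy loss plus a feature-discrepancy term between source and target batches. For brevity, an unweighted RBF-kernel MMD stands in for the paper's LMMD, which additionally weights the discrepancy by subdomain (class) membership; the `model` interface and the loss weight below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def rbf_mmd(xs, xt, sigma=1.0):
    """Unweighted RBF-kernel MMD between two feature batches (simplified stand-in for LMMD)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(xs, xs).mean() + k(xt, xt).mean() - 2 * k(xs, xt).mean()

def adaptation_loss(model, x_src, y_src, x_tgt, lam=0.5):
    """Cross-entropy on labelled source data plus feature alignment to unlabelled target data."""
    feat_src, logits_src = model(x_src)   # assumed interface: model returns (features, logits)
    feat_tgt, _ = model(x_tgt)
    return F.cross_entropy(logits_src, y_src) + lam * rbf_mmd(feat_src, feat_tgt)
```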

First Place Solution of KDD Cup 2021 & OGB Large-Scale Challenge Graph Prediction Track

4 code implementations · 15 Jun 2021 · Chengxuan Ying, Mingqi Yang, Shuxin Zheng, Guolin Ke, Shengjie Luo, Tianle Cai, Chenglin Wu, Yuxin Wang, Yanming Shen, Di He

In this technical report, we present our solution to the PCQM4M-LSC track of the KDD Cup 2021 OGB Large-Scale Challenge.

Do Transformers Really Perform Bad for Graph Representation?

4 code implementations · 9 Jun 2021 · Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu

Our key insight for utilizing the Transformer on graphs is the necessity of effectively encoding the structural information of a graph into the model.

Graph Classification · Graph Regression +1

How could Neural Networks understand Programs?

1 code implementation · 10 May 2021 · Dinglan Peng, Shuxin Zheng, Yatao Li, Guolin Ke, Di He, Tie-Yan Liu

Inspired by this, we propose a novel program semantics learning paradigm in which the model learns from information composed of (1) representations that align well with the fundamental operations in operational semantics, and (2) information about environment transitions, which is indispensable for program understanding.

Transformers with Competitive Ensembles of Independent Mechanisms

no code implementations · 27 Feb 2021 · Alex Lamb, Di He, Anirudh Goyal, Guolin Ke, Chien-Feng Liao, Mirco Ravanelli, Yoshua Bengio

In this work we explore a way in which the Transformer architecture is deficient: it represents each position with a large monolithic hidden representation and a single set of parameters which are applied over the entire hidden representation.

Speech Enhancement
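
A toy sketch of the alternative this entry argues for: instead of one monolithic hidden vector processed by a single parameter set, the hidden state is split into several independent mechanisms, each with its own parameters and a learned competition weight. This is only an illustration of the idea; the paper's actual architecture (per-mechanism attention, communication between mechanisms, and so on) is more involved, and every name below is made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompetitiveMechanismsFFN(nn.Module):
    """Splits the hidden state into n_mech chunks, each with its own FFN, gated by softmax competition."""

    def __init__(self, dim, n_mech=4):
        super().__init__()
        assert dim % n_mech == 0
        self.chunk = dim // n_mech
        self.ffns = nn.ModuleList(nn.Linear(self.chunk, self.chunk) for _ in range(n_mech))
        self.score = nn.ModuleList(nn.Linear(self.chunk, 1) for _ in range(n_mech))

    def forward(self, x):                        # x: [batch, dim]
        chunks = x.split(self.chunk, dim=-1)
        outs = [F.relu(f(c)) for f, c in zip(self.ffns, chunks)]
        gate = F.softmax(torch.cat([s(c) for s, c in zip(self.score, chunks)], dim=-1), dim=-1)
        # Each mechanism's output is scaled by its competition weight, then re-concatenated.
        return torch.cat([g.unsqueeze(-1) * o for g, o in zip(gate.unbind(-1), outs)], dim=-1)
```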

LazyFormer: Self Attention with Lazy Update

no code implementations · 25 Feb 2021 · Chengxuan Ying, Guolin Ke, Di He, Tie-Yan Liu

In each lazy block, the self-attention distribution is computed only once, in the first layer, and is then reused in all upper layers.
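
A minimal sketch of the lazy-update scheme described above: the first layer of a lazy block computes the attention distribution and passes it down, and upper layers reuse it instead of recomputing query-key scores. Single-head attention and the function names are illustrative simplifications, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LazyAttentionLayer(nn.Module):
    """Self-attention layer that reuses a cached attention distribution when given one."""

    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v, self.out = (nn.Linear(dim, dim) for _ in range(4))

    def forward(self, x, attn=None):
        if attn is None:                                   # first layer of a lazy block
            scores = self.q(x) @ self.k(x).transpose(-2, -1) / x.size(-1) ** 0.5
            attn = F.softmax(scores, dim=-1)
        return self.out(attn @ self.v(x)), attn            # upper layers reuse attn as-is

def lazy_block(layers, x):
    """Runs a stack of layers; only the first one computes the attention distribution."""
    attn = None
    for layer in layers:
        x, attn = layer(x, attn)
    return x
```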

Revisiting Language Encoding in Learning Multilingual Representations

1 code implementation · 16 Feb 2021 · Shengjie Luo, Kaiyuan Gao, Shuxin Zheng, Guolin Ke, Di He, LiWei Wang, Tie-Yan Liu

The language embedding can be either added to the word embedding or attached at the beginning of the sentence.

Word Embeddings
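
A small illustration of the two options quoted above, with made-up sizes: (a) add a per-language embedding to every word embedding, or (b) attach the language embedding as an extra position at the beginning of the sentence.

```python
import torch
import torch.nn as nn

vocab_size, n_langs, dim = 30000, 100, 768
word_emb = nn.Embedding(vocab_size, dim)
lang_emb = nn.Embedding(n_langs, dim)

tokens = torch.randint(0, vocab_size, (1, 12))     # [batch, seq_len]
lang_id = torch.tensor([3])                        # language index of this sentence

# (a) added to the word embedding of every token
x_add = word_emb(tokens) + lang_emb(lang_id)[:, None, :]

# (b) attached at the beginning of the sentence as an extra token position
x_prepend = torch.cat([lang_emb(lang_id)[:, None, :], word_emb(tokens)], dim=1)
```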

Taking Notes on the Fly Helps Language Pre-Training

no code implementations · ICLR 2021 · Qiyu Wu, Chen Xing, Yatao Li, Guolin Ke, Di He, Tie-Yan Liu

In this paper, we focus on improving the efficiency of language pre-training methods through providing better data utilization.

Taking Notes on the Fly Helps BERT Pre-training

no code implementations · 4 Aug 2020 · Qiyu Wu, Chen Xing, Yatao Li, Guolin Ke, Di He, Tie-Yan Liu

In this paper, we focus on improving the efficiency of language pre-training methods through providing better data utilization.

Rethinking Positional Encoding in Language Pre-training

2 code implementations · ICLR 2021 · Guolin Ke, Di He, Tie-Yan Liu

In this work, we investigate the positional encoding methods used in language pre-training (e.g., BERT) and identify several problems in the existing formulations.

Natural Language Understanding · Word Embeddings

MC-BERT: Efficient Language Pre-Training via a Meta Controller

1 code implementation · 10 Jun 2020 · Zhenhui Xu, Linyuan Gong, Guolin Ke, Di He, Shuxin Zheng, Li-Wei Wang, Jiang Bian, Tie-Yan Liu

Pre-trained contextual representations (e.g., BERT) have become the foundation for achieving state-of-the-art results on many NLP tasks.

Cloze Test · Language Modelling +3

Invertible Image Rescaling

3 code implementations · ECCV 2020 · Mingqing Xiao, Shuxin Zheng, Chang Liu, Yaolong Wang, Di He, Guolin Ke, Jiang Bian, Zhouchen Lin, Tie-Yan Liu

High-resolution digital images are usually downscaled to fit various display screens or to save the cost of storage and bandwidth, while post-upscaling is adopted to recover the original resolution or the details in zoomed-in images.

Image Super-Resolution

LightMC: A Dynamic and Efficient Multiclass Decomposition Algorithm

no code implementations · 25 Aug 2019 · Ziyu Liu, Guolin Ke, Jiang Bian, Tie-Yan Liu

Instead of using a fixed coding matrix and decoding strategy, LightMC uses a differentiable decoding strategy, which enables it to dynamically optimize the coding matrix and decoding strategy toward higher overall multiclass accuracy, via back-propagation jointly with the training of the base learners in an iterative way.

Classification · General Classification
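
A hedged sketch of what a differentiable decoding step can look like: base-learner scores for each codeword bit are matched against a learnable, real-valued coding matrix, and a softmax over the resulting class scores lets gradients from the classification loss update the coding matrix. The outer loop that iteratively re-trains the base learners, as LightMC does, is omitted, and all names below are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableDecoder(nn.Module):
    """Decodes per-bit base-learner scores into class logits via a learnable coding matrix."""

    def __init__(self, n_classes, code_len):
        super().__init__()
        # Relaxed (real-valued) coding matrix: one codeword row per class.
        self.coding_matrix = nn.Parameter(torch.randn(n_classes, code_len))

    def forward(self, bit_scores):                  # bit_scores: [batch, code_len]
        return bit_scores @ self.coding_matrix.t()  # similarity to every class codeword

decoder = DifferentiableDecoder(n_classes=10, code_len=15)
bit_scores = torch.randn(32, 15)                    # stand-in for base-learner outputs
loss = F.cross_entropy(decoder(bit_scores), torch.randint(0, 10, (32,)))
loss.backward()                                     # gradients flow into the coding matrix
```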

Light Multi-segment Activation for Model Compression

2 code implementations · 16 Jul 2019 · Zhenhui Xu, Guolin Ke, Jia Zhang, Jiang Bian, Tie-Yan Liu

Inspired by the nature of expressiveness in neural networks, we propose to use a multi-segment activation in the compact student model, which can significantly improve expressiveness at very little cost.

Knowledge Distillation · Model Compression +1
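
One way to realize a multi-segment (piecewise-linear) activation as a drop-in module, given purely as an illustration rather than the paper's exact formulation: learnable breakpoints and slopes give the compact student model a much more expressive nonlinearity for only a handful of extra parameters.

```python
import torch
import torch.nn as nn

class MultiSegmentActivation(nn.Module):
    """Learnable continuous piecewise-linear activation with n_segments linear pieces."""

    def __init__(self, n_segments=8, lo=-3.0, hi=3.0):
        super().__init__()
        self.breakpoints = nn.Parameter(torch.linspace(lo, hi, n_segments - 1))
        self.slopes = nn.Parameter(torch.ones(n_segments))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # Sum of hinge functions: each breakpoint contributes a change of slope.
        out = self.bias + self.slopes[0] * x
        for i in range(len(self.breakpoints)):
            out = out + (self.slopes[i + 1] - self.slopes[i]) * torch.relu(x - self.breakpoints[i])
        return out
```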

TabNN: A Universal Neural Network Solution for Tabular Data

no code implementations · ICLR 2019 · Guolin Ke, Jia Zhang, Zhenhui Xu, Jiang Bian, Tie-Yan Liu

Since there are no shared patterns among these diverse tabular data, it is hard to design specific structures to fit them all.

LightGBM: A Highly Efficient Gradient Boosting Decision Tree

1 code implementation · NeurIPS 2017 · Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, Tie-Yan Liu

We prove that, since the data instances with larger gradients play a more important role in the computation of information gain, GOSS can obtain a quite accurate estimate of the information gain with a much smaller data size.
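
A compact sketch of Gradient-based One-Side Sampling (GOSS) as summarized above, assuming per-instance gradients are already computed for the current boosting iteration: keep the top a fraction of instances by absolute gradient, randomly sample a b fraction of the rest, and up-weight the sampled small-gradient instances by (1 - a) / b so the information-gain estimate stays approximately unbiased. The function name and default values are illustrative.

```python
import numpy as np

def goss_sample(gradients, a=0.2, b=0.1, seed=0):
    """Return (selected indices, per-instance weights) for one boosting iteration."""
    rng = np.random.default_rng(seed)
    n = len(gradients)
    order = np.argsort(-np.abs(gradients))          # sort by |gradient|, descending
    top_k, rand_k = int(a * n), int(b * n)

    top_idx = order[:top_k]                         # always keep large-gradient instances
    rand_idx = rng.choice(order[top_k:], size=rand_k, replace=False)

    idx = np.concatenate([top_idx, rand_idx])
    weights = np.ones(len(idx))
    weights[top_k:] = (1 - a) / b                   # compensate the down-sampled small gradients
    return idx, weights

idx, w = goss_sample(np.random.randn(100_000))      # use (idx, w) when building histograms / computing gain
```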

A Communication-Efficient Parallel Algorithm for Decision Tree

no code implementations · NeurIPS 2016 · Qi Meng, Guolin Ke, Taifeng Wang, Wei Chen, Qiwei Ye, Zhi-Ming Ma, Tie-Yan Liu

After partitioning the training data onto a number of (e.g., $M$) machines, this algorithm performs both local voting and global voting in each iteration.
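
A schematic, data-parallel sketch of the voting step described above, with illustrative names: each of the $M$ machines votes for its top-k split features by locally estimated information gain, the votes are aggregated to select 2k global candidates, and only those candidates need full-grained histogram communication.

```python
import numpy as np
from collections import Counter

def vote_on_features(local_gains_per_machine, k=2):
    """local_gains_per_machine: list of M arrays, one information-gain value per feature."""
    votes = Counter()
    for gains in local_gains_per_machine:            # local voting on each machine
        for f in np.argsort(-gains)[:k]:
            votes[int(f)] += 1
    return [f for f, _ in votes.most_common(2 * k)]  # global voting: top-2k by vote count

# Toy example: 4 machines, 10 features, each machine holding a partition of the training data.
rng = np.random.default_rng(0)
local_gains = [rng.random(10) for _ in range(4)]
candidates = vote_on_features(local_gains, k=2)      # only these get full-grained histograms
```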
