Search Results for author: Shikun Feng

Found 40 papers, 20 papers with code

Pre-training with Fractional Denoising to Enhance Molecular Property Prediction

no code implementations · 14 Jul 2024 · Yuyan Ni, Shikun Feng, Xin Hong, Yuancheng Sun, Wei-Ying Ma, Zhi-Ming Ma, Qiwei Ye, Yanyan Lan

Deep learning methods have been considered promising for accelerating molecular screening in drug discovery and material design.

Denoising · Drug Discovery +2

MoleculeCLA: Rethinking Molecular Benchmark via Computational Ligand-Target Binding Analysis

1 code implementation · 13 Jun 2024 · Shikun Feng, Jiaxin Zheng, Yinjun Jia, Yanwen Huang, Fengfeng Zhou, Wei-Ying Ma, Yanyan Lan

We believe this dataset will serve as a more accurate and reliable benchmark for molecular representation learning, thereby expediting progress in the field of artificial intelligence-driven drug discovery.

Drug Discovery · Molecular Property Prediction +3

Contextual Molecule Representation Learning from Chemical Reaction Knowledge

1 code implementation · 21 Feb 2024 · Han Tang, Shikun Feng, Bicheng Lin, Yuyan Ni, Jingjing Liu, Wei-Ying Ma, Yanyan Lan

REMO offers a novel solution to MRL by exploiting the underlying shared patterns in chemical reactions as context for pre-training, which effectively infers meaningful representations of common chemistry knowledge.

molecular representation · Representation Learning +1

Named Entity Driven Zero-Shot Image Manipulation

1 code implementation · CVPR 2024 · Zhida Feng, Li Chen, Jing Tian, Jiaxiang Liu, Shikun Feng

We introduced StyleEntity, a zero-shot image manipulation model that utilizes named entities as proxies during its training phase.

Image Manipulation

Sliced Denoising: A Physics-Informed Molecular Pre-Training Method

no code implementations · 3 Nov 2023 · Yuyan Ni, Shikun Feng, Wei-Ying Ma, Zhi-Ming Ma, Yanyan Lan

By aligning with physical principles, SliDe shows a 42% improvement in the accuracy of estimated force fields compared to current state-of-the-art denoising methods, and thus outperforms traditional baselines on various molecular property prediction tasks.

Denoising · Drug Discovery +2

DocPrompt: Large-scale continue pretrain for zero-shot and few-shot document question answering

no code implementations · 21 Aug 2023 · Sijin Wu, Dan Zhang, Teng Hu, Shikun Feng

In this paper, we propose DocPrompt for document question answering tasks with powerful zero-shot and few-shot performance.

Question Answering

Fractional Denoising for 3D Molecular Pre-training

1 code implementation · 20 Jul 2023 · Shikun Feng, Yuyan Ni, Yanyan Lan, Zhi-Ming Ma, Wei-Ying Ma

Theoretically, the objective is equivalent to learning the force field, which has been shown to be helpful for downstream tasks.

Denoising · Drug Discovery +1
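
A hedged sketch of the equivalence claimed above, following the standard argument from the molecular pre-training literature (the notation below is ours, not the paper's): if equilibrium conformations follow a Boltzmann distribution and noisy conformations are Gaussian perturbations, the optimal denoiser recovers the score of the perturbed density, which for small noise is proportional to the force.

    % Assume p(x) \propto e^{-E(x)/\tau} and x' = x + \sigma\epsilon,
    % \epsilon \sim \mathcal{N}(0, I); \tau, \sigma, E, F are our symbols.
    \begin{align}
      \mathcal{L}_{\mathrm{denoise}}
        &= \mathbb{E}_{x,\epsilon}\bigl\|\epsilon_\theta(x+\sigma\epsilon)-\epsilon\bigr\|^{2}, \\
      \epsilon_\theta^{*}(x')
        &= -\sigma\,\nabla_{x'}\log p_\sigma(x')
         \;\approx\; -\sigma\,\nabla_{x'}\log p(x')
         \;=\; \frac{\sigma}{\tau}\,\nabla_{x'}E(x')
         \;=\; -\frac{\sigma}{\tau}\,F(x'),
    \end{align}
    % so, up to a constant, the trained denoiser is a force-field estimator.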

Multimodal Molecular Pretraining via Modality Blending

no code implementations · 12 Jul 2023 · Qiying Yu, Yudi Zhang, Yuyan Ni, Shikun Feng, Yanyan Lan, Hao Zhou, Jingjing Liu

Self-supervised learning has recently gained growing interest in molecular modeling for scientific tasks such as AI-assisted drug discovery.

Drug Discovery · molecular representation +3

Efficient Text-Guided 3D-Aware Portrait Generation with Score Distillation Sampling on Distribution

no code implementations · 3 Jun 2023 · Yiji Cheng, Fei Yin, Xiaoke Huang, Xintong Yu, Jiaxiang Liu, Shikun Feng, Yujiu Yang, Yansong Tang

These elaborated designs enable our model to generate portraits with robust multi-view semantic consistency, eliminating the need for optimization-based methods.

Text to 3D

Spectral Heterogeneous Graph Convolutions via Positive Noncommutative Polynomials

2 code implementations · 31 May 2023 · Mingguo He, Zhewei Wei, Shikun Feng, Zhengjie Huang, Weibin Li, Yu Sun, Dianhai Yu

Furthermore, these methods cannot learn arbitrary valid heterogeneous graph filters within the spectral domain, which limits their expressiveness.

Graph Learning · Node Classification +2
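
A minimal illustration of the spectral-filter framing in this entry: a polynomial graph filter y = Σ_k w_k L^k x over the normalized Laplacian. This is a generic sketch, not the paper's parameterization; its contribution is constraining the polynomial to be positive (e.g. via a sum-of-squares form), which is omitted here.

    import numpy as np

    def polynomial_graph_filter(adj, x, coeffs):
        """Apply y = sum_k coeffs[k] * L^k x, with L the symmetrically
        normalized Laplacian. Generic spectral-filter sketch only."""
        deg = adj.sum(axis=1)
        d_inv_sqrt = np.where(deg > 0, deg, np.inf) ** -0.5  # 0 for isolated nodes
        lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
        out, term = np.zeros_like(x, dtype=float), x.astype(float)
        for w in coeffs:
            out += w * term      # accumulate w_k * L^k x
            term = lap @ term
        return out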

Layout-aware Webpage Quality Assessment

no code implementations · 28 Jan 2023 · Anfeng Cheng, Yiding Liu, Weibin Li, Qian Dong, Shuaiqiang Wang, Zhengjie Huang, Shikun Feng, Zhicong Cheng, Dawei Yin

To assess webpage quality from complex DOM tree data, we propose a graph neural network (GNN) based method that extracts rich layout-aware information that implies webpage quality in an end-to-end manner.

Graph Neural Network
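
A rough sketch of the pipeline this entry describes: turn a DOM tree into a graph and run message passing over it. The per-node features, parent-array input format, and single mean-aggregation layer are our illustrative assumptions, not the paper's architecture.

    import numpy as np

    def dom_to_graph(node_feats, parent):
        """node_feats: list of per-node feature vectors (tag/position/size, etc.);
        parent[i]: index of node i's parent, -1 for the root.
        Returns (feature matrix, adjacency list). Illustrative only."""
        feats = np.stack(node_feats).astype(float)
        adj = [[] for _ in node_feats]
        for i, p in enumerate(parent):
            if p >= 0:
                adj[i].append(p)
                adj[p].append(i)
        return feats, adj

    def mean_gnn_layer(feats, adj, weight):
        """One layer: h_i' = relu(W @ mean of {h_i} union neighbors)."""
        out = np.zeros((feats.shape[0], weight.shape[0]))
        for i, nbrs in enumerate(adj):
            agg = feats[[i] + nbrs].mean(axis=0)
            out[i] = np.maximum(weight @ agg, 0.0)
        return out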

ERNIE 3.0 Tiny: Frustratingly Simple Method to Improve Task-Agnostic Distillation Generalization

1 code implementation · 9 Jan 2023 · Weixin Liu, Xuyi Chen, Jiaxiang Liu, Shikun Feng, Yu Sun, Hao Tian, Hua Wu

Experimental results demonstrate that our method yields a student with much better generalization that significantly outperforms existing baselines and establishes new state-of-the-art results on in-domain, out-of-domain, and low-resource datasets in the task-agnostic distillation setting.

Knowledge Distillation · Language Modelling +1

ERNIE-mmLayout: Multi-grained MultiModal Transformer for Document Understanding

no code implementations · 18 Sep 2022 · Wenjin Wang, Zhengjie Huang, Bin Luo, Qianglong Chen, Qiming Peng, Yinxu Pan, Weichong Yin, Shikun Feng, Yu Sun, Dianhai Yu, Yin Zhang

First, a document graph is proposed to model complex relationships among multi-grained multimodal elements, in which salient visual regions are detected by a cluster-based method.

Common Sense Reasoning · document understanding +1

Simple and Effective Relation-based Embedding Propagation for Knowledge Representation Learning

1 code implementation · 13 May 2022 · Huijuan Wang, Siming Dai, Weiyue Su, Hui Zhong, Zeyang Fang, Zhengjie Huang, Shikun Feng, Zeyu Chen, Yu Sun, Dianhai Yu

Notably, it brings an average relative improvement of about 10% to triplet-based embedding methods on OGBL-WikiKG2 and takes only 5%-83% of the time to achieve results comparable to the state-of-the-art GC-OTE.

Knowledge Graphs · Relation +2

ERNIE-SPARSE: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention

no code implementations · 23 Mar 2022 · Yang Liu, Jiaxiang Liu, Li Chen, Yuxiang Lu, Shikun Feng, Zhida Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

We argue that two factors, information bottleneck sensitivity and inconsistency between different attention topologies, could affect the performance of the Sparse Transformer.

Sparse Learning · text-classification +1

ERNIE-GeoL: A Geography-and-Language Pre-trained Model and its Applications in Baidu Maps

no code implementations · 17 Mar 2022 · Jizhou Huang, Haifeng Wang, Yibo Sun, Yunsheng Shi, Zhengjie Huang, An Zhuo, Shikun Feng

One of the main reasons for this plateau is the lack of readily available geographic knowledge in generic PTMs.

Graph4Rec: A Universal Toolkit with Graph Neural Networks for Recommender Systems

1 code implementation · 2 Dec 2021 · Weibin Li, Mingkai He, Zhengjie Huang, Xianming Wang, Shikun Feng, Weiyue Su, Yu Sun

In recent years, owing to their outstanding performance in graph representation learning, graph neural network (GNN) techniques have gained considerable interest in many real-world scenarios, such as recommender systems and social networks.

graph construction · Graph Neural Network +1

ERNIE-SPARSE: Robust Efficient Transformer Through Hierarchically Unifying Isolated Information

no code implementations · 29 Sep 2021 · Yang Liu, Jiaxiang Liu, Yuxiang Lu, Shikun Feng, Yu Sun, Zhida Feng, Li Chen, Hao Tian, Hua Wu, Haifeng Wang

The first factor is information bottleneck sensitivity, which is caused by the key feature of the Sparse Transformer: only a small number of global tokens can attend to all other tokens.

text-classification · Text Classification
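
The "global tokens" bottleneck described above is easiest to see in the attention mask itself. Below is a toy global-plus-sliding-window pattern (Longformer-style); the exact topology ERNIE-SPARSE uses is not specified here, so treat this as an assumption-laden illustration.

    import numpy as np

    def sparse_attention_mask(seq_len, num_global, window):
        """mask[i, j] == True means token i may attend to token j.
        The first `num_global` tokens attend to and are attended by everyone;
        all other tokens only see a local window, so cross-window information
        must flow through the few global tokens (the bottleneck)."""
        mask = np.zeros((seq_len, seq_len), dtype=bool)
        for i in range(seq_len):
            lo, hi = max(0, i - window), min(seq_len, i + window + 1)
            mask[i, lo:hi] = True        # local sliding window (includes self)
        mask[:num_global, :] = True      # global tokens attend everywhere
        mask[:, :num_global] = True      # everyone attends to global tokens
        return mask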

Alpha at SemEval-2021 Task 6: Transformer Based Propaganda Classification

no code implementations · SemEval 2021 · Zhida Feng, Jiji Tang, Jiaxiang Liu, Weichong Yin, Shikun Feng, Yu Sun, Li Chen

This paper describes our system for Task 6 of SemEval-2021: the task focuses on multimodal propaganda technique classification and aims to classify a given image and text into 22 classes.

Classification

NOTE: Solution for KDD-CUP 2021 WikiKG90M-LSC

no code implementations · 5 Jul 2021 · Weiyue Su, Zeyang Fang, Hui Zhong, Huijuan Wang, Siming Dai, Zhengjie Huang, Yunsheng Shi, Shikun Feng, Zeyu Chen

In addition to the representations, we also use various statistical probabilities among the head entities, the relations and the tail entities for the final prediction.

Feature Engineering · Question Answering +2
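
A minimal example of the kind of co-occurrence statistics the entry alludes to (our own toy version; the competition solution's actual feature set is not reproduced here): estimate P(tail | relation) from training triples and use it to rerank candidate tails alongside embedding scores.

    from collections import Counter, defaultdict

    def tail_given_relation(triples):
        """Estimate P(tail | relation) from (head, relation, tail) triples.
        Toy statistical feature for candidate reranking; illustrative only."""
        rel_count = Counter()
        rel_tail = defaultdict(Counter)
        for _, r, t in triples:
            rel_count[r] += 1
            rel_tail[r][t] += 1
        return {r: {t: c / rel_count[r] for t, c in tails.items()}
                for r, tails in rel_tail.items()}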

ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression

1 code implementation · 4 Jun 2021 · Weiyue Su, Xuyi Chen, Shikun Feng, Jiaxiang Liu, Weixin Liu, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

Specifically, the first stage, General Distillation, performs distillation with guidance from the pretrained teacher, general data, and a latent distillation loss.

Knowledge Distillation
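
The General Distillation stage described above combines teacher guidance on logits with a latent (hidden-state) term. A common way to write such a two-term loss is sketched below; the exact layers, projection, and weighting ERNIE-Tiny uses are assumptions here, not the paper's specification.

    import torch.nn.functional as F

    def general_distillation_loss(student_logits, teacher_logits,
                                  student_hidden, teacher_hidden,
                                  proj, temperature=2.0, alpha=0.5):
        """Soft-logit KD plus a latent hidden-state matching term.
        `proj` (e.g. a torch.nn.Linear) maps the student hidden size to
        the teacher's. Hedged sketch of a two-term distillation objective."""
        t = temperature
        kd = F.kl_div(F.log_softmax(student_logits / t, dim=-1),
                      F.softmax(teacher_logits / t, dim=-1),
                      reduction="batchmean") * (t * t)
        latent = F.mse_loss(proj(student_hidden), teacher_hidden)
        return alpha * kd + (1.0 - alpha) * latent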

Molecular Representation Learning by Leveraging Chemical Information

1 code implementation · NA 2021 · Weibin Li, Shanzhuo Zhang, Lihang Liu, Zhengjie Huang, Jieqiong Lei, Xiaomin Fang, Shikun Feng, Fan Wang

As graph neural networks have achieved great success in many domains, some studies apply them to molecular property prediction, regarding each molecule as a graph.

Graph Property Prediction · Molecular Property Prediction +3
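
The "each molecule as a graph" framing maps directly onto a few lines of RDKit. A minimal sketch (the paper's actual atom and bond features are richer than the atomic numbers and bond orders used here):

    from rdkit import Chem

    def mol_to_graph(smiles):
        """Atoms become nodes labeled by atomic number; bonds become
        undirected edges labeled by bond order. Minimal feature set."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            raise ValueError(f"unparseable SMILES: {smiles}")
        nodes = [atom.GetAtomicNum() for atom in mol.GetAtoms()]
        edges = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx(), b.GetBondTypeAsDouble())
                 for b in mol.GetBonds()]
        return nodes, edges

    # e.g. mol_to_graph("CCO") -> ([6, 6, 8], [(0, 1, 1.0), (1, 2, 1.0)])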

ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by Pre-trained Language Model

no code implementations · SemEval 2020 · Zhengjie Huang, Shikun Feng, Weiyue Su, Xuyi Chen, Shuohuan Wang, Jiaxiang Liu, Xuan Ouyang, Yu Sun

This paper describes the system designed by ERNIE Team which achieved the first place in SemEval-2020 Task 10: Emphasis Selection For Written Text in Visual Media.

Data Augmentation · Feature Engineering +3

Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification

3 code implementations · 8 Sep 2020 · Yunsheng Shi, Zhengjie Huang, Shikun Feng, Hui Zhong, Wenjin Wang, Yu Sun

Graph neural networks (GNNs) and the label propagation algorithm (LPA) are both message-passing algorithms, and both have achieved superior performance in semi-supervised classification.

General Classification · Graph Neural Network +2
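
At the core of this unified model is masked label prediction: expose a random subset of training labels as input features and train the network to predict the held-out ones. A simplified sketch of that input construction (UniMP's actual model is a graph transformer, omitted here):

    import numpy as np

    def masked_label_inputs(labels, train_idx, num_classes, mask_rate=0.5, rng=None):
        """labels: int array over all nodes; train_idx: indices with known labels.
        Returns (one-hot label features with a random fraction hidden,
        the indices whose labels were hidden and should be predicted)."""
        rng = rng or np.random.default_rng()
        label_feat = np.zeros((len(labels), num_classes))
        shuffled = rng.permutation(train_idx)
        n_masked = int(len(shuffled) * mask_rate)
        masked, visible = shuffled[:n_masked], shuffled[n_masked:]
        label_feat[visible, labels[visible]] = 1.0   # visible labels become inputs
        return label_feat, masked                    # supervise predictions here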

ERNIE 2.0: A Continual Pre-training Framework for Language Understanding

3 code implementations · 29 Jul 2019 · Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, Haifeng Wang

Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.

Chinese Named Entity Recognition · Chinese Reading Comprehension +8

Functional Hashing for Compressing Neural Networks

no code implementations · 20 May 2016 · Lei Shi, Shikun Feng, Zhifan Zhu

As the complexity of deep neural networks (DNNs) grows to absorb ever-increasing data sizes, memory and energy consumption have been receiving more and more attention in industrial applications, especially on mobile devices.
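
This entry's compression idea builds on hashing-based weight sharing. A minimal HashedNets-style sketch of the underlying trick (FunHashNN itself composes multiple hash functions through a small reconstruction network, which is omitted here):

    import numpy as np
    import zlib

    def hashed_weight(rows, cols, params, seed=0):
        """Materialize a virtual rows x cols weight matrix whose entries are
        slots of the small vector `params`, chosen by hashing the index.
        Storage is len(params) floats regardless of rows * cols."""
        w = np.empty((rows, cols))
        for i in range(rows):
            for j in range(cols):
                slot = zlib.crc32(f"{seed}:{i},{j}".encode()) % len(params)
                w[i, j] = params[slot]
        return w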
