2 code implementations • COLING (TextGraphs) 2020 • Weibin Li, Yuxiang Lu, Zhengjie Huang, Weiyue Su, Jiaxiang Liu, Shikun Feng, Yu Sun
To address this problem, we use a pre-trained language model to recall the top-K relevant explanations for each question.
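A minimal sketch of this kind of top-K recall step (illustrative only: the encoder below is a hashed bag-of-words stand-in for the actual pre-trained language model, and `recall_top_k` is a hypothetical helper, not the paper's code):

```python
import numpy as np

def encode(texts, dim=256):
    # Stand-in for a pre-trained language model encoder: hashed bag-of-words
    # vectors so the sketch runs end to end. Swap in real sentence embeddings.
    vecs = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            vecs[i, hash(tok) % dim] += 1.0
    return vecs

def recall_top_k(question, explanations, k=10):
    """Rank candidate explanations by cosine similarity to the question."""
    q = encode([question])[0]
    e = encode(explanations)
    q = q / (np.linalg.norm(q) + 1e-12)
    e = e / (np.linalg.norm(e, axis=1, keepdims=True) + 1e-12)
    scores = e @ q                      # cosine similarity per explanation
    top = np.argsort(-scores)[:k]
    return [(explanations[i], float(scores[i])) for i in top]
```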
no code implementations • 14 Jul 2024 • Yuyan Ni, Shikun Feng, Xin Hong, Yuancheng Sun, Wei-Ying Ma, Zhi-Ming Ma, Qiwei Ye, Yanyan Lan
Deep learning methods have been considered promising for accelerating molecular screening in drug discovery and material design.
1 code implementation • 13 Jun 2024 • Shikun Feng, Jiaxin Zheng, Yinjun Jia, Yanwen Huang, Fengfeng Zhou, Wei-Ying Ma, Yanyan Lan
We believe this dataset will serve as a more accurate and reliable benchmark for molecular representation learning, thereby expediting progress in the field of artificial intelligence-driven drug discovery.
no code implementations • 15 May 2024 • Shikun Feng, Yuyan Ni, Minghao Li, Yanwen Huang, Zhi-Ming Ma, Wei-Ying Ma, Yanyan Lan
Recently, a noticeable trend has emerged in developing pre-trained foundation models in the domains of CV and NLP.
1 code implementation • 21 Feb 2024 • Han Tang, Shikun Feng, Bicheng Lin, Yuyan Ni, Jingjing Liu, Wei-Ying Ma, Yanyan Lan
REMO offers a novel solution to MRL by exploiting the underlying shared patterns in chemical reactions as context for pre-training, which effectively infers meaningful representations of common chemistry knowledge.
1 code implementation • CVPR 2024 • Zhida Feng, Li Chen, Jing Tian, Jiaxiang Liu, Shikun Feng
We introduce StyleEntity, a zero-shot image manipulation model that utilizes named entities as proxies during its training phase.
no code implementations • 9 Nov 2023 • Shikun Feng, Minghao Li, Yinjun Jia, WeiYing Ma, Yanyan Lan
The binding between proteins and ligands plays a crucial role in the realm of drug discovery.
no code implementations • 3 Nov 2023 • Yuyan Ni, Shikun Feng, Wei-Ying Ma, Zhi-Ming Ma, Yanyan Lan
By aligning with physical principles, SliDe shows a 42% improvement in the accuracy of estimated force fields compared to current state-of-the-art denoising methods, and thus outperforms traditional baselines on various molecular property prediction tasks.
1 code implementation • 22 Oct 2023 • Shikun Feng, Lixin Yang, WeiYing Ma, Yanyan Lan
Molecular representation learning is fundamental for many drug-related applications.
no code implementations • 21 Aug 2023 • Sijin Wu, Dan Zhang, Teng Hu, Shikun Feng
In this paper, we propose Docprompt for document question answering tasks with powerful zero-shot and few-shot performance.
1 code implementation • 20 Jul 2023 • Shikun Feng, Yuyan Ni, Yanyan Lan, Zhi-Ming Ma, Wei-Ying Ma
Theoretically, the objective is equivalent to learning the force field, which has been shown to be helpful for downstream tasks.
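The equivalence can be sketched with a standard score-matching argument (a hedged reconstruction, not necessarily the paper's exact derivation): assuming conformations follow an approximate Boltzmann distribution, the coordinate-denoising target coincides with the force field up to a constant factor.

```latex
% Assumption: conformations x follow p(x) \propto \exp(-E(x)/(k_B T)),
% so the score is proportional to the force field:
%   \nabla_x \log p(x) = -\nabla_x E(x)/(k_B T) = F(x)/(k_B T).
% Denoising Gaussian-perturbed coordinates is, up to constants, score matching:
\mathcal{L}(\theta)
  = \mathbb{E}_{x,\;\tilde{x}\sim\mathcal{N}(x,\sigma^{2}I)}
    \left\| s_\theta(\tilde{x}) - \frac{x-\tilde{x}}{\sigma^{2}} \right\|^{2}
  \;\approx\;
    \mathbb{E}_{\tilde{x}}
    \left\| s_\theta(\tilde{x}) - \nabla_{\tilde{x}}\log p(\tilde{x}) \right\|^{2}
  + \text{const}.
```

Under this assumption, a network trained to denoise coordinates implicitly approximates the force field, which is the intuition behind using such objectives for molecular pre-training.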
no code implementations • 12 Jul 2023 • Qiying Yu, Yudi Zhang, Yuyan Ni, Shikun Feng, Yanyan Lan, Hao Zhou, Jingjing Liu
Self-supervised learning has recently gained growing interest in molecular modeling for scientific tasks such as AI-assisted drug discovery.
no code implementations • 5 Jun 2023 • Wenwen Yu, Chengquan Zhang, Haoyu Cao, Wei Hua, Bohan Li, Huang Chen, MingYu Liu, Mingrui Chen, Jianfeng Kuang, Mengjun Cheng, Yuning Du, Shikun Feng, Xiaoguang Hu, Pengyuan Lyu, Kun Yao, Yuechen Yu, Yuliang Liu, Wanxiang Che, Errui Ding, Cheng-Lin Liu, Jiebo Luo, Shuicheng Yan, Min Zhang, Dimosthenis Karatzas, Xing Sun, Jingdong Wang, Xiang Bai
It is hoped that this competition will attract many researchers in the fields of CV and NLP, and bring new ideas to the field of Document AI.
no code implementations • 3 Jun 2023 • Yiji Cheng, Fei Yin, Xiaoke Huang, Xintong Yu, Jiaxiang Liu, Shikun Feng, Yujiu Yang, Yansong Tang
These elaborated designs enable our model to generate portraits with robust multi-view semantic consistency, eliminating the need for optimization-based methods.
2 code implementations • 31 May 2023 • Mingguo He, Zhewei Wei, Shikun Feng, Zhengjie Huang, Weibin Li, Yu Sun, dianhai yu
Furthermore, these methods cannot learn arbitrary valid heterogeneous graph filters within the spectral domain, which limits their expressiveness.
Ranked #6 on Node Property Prediction on ogbn-mag
1 code implementation • 21 Feb 2023 • Yuchen Wang, Jinghui Zhang, Zhengjie Huang, Weibin Li, Shikun Feng, Ziheng Ma, Yu Sun, dianhai yu, Fang Dong, Jiahui Jin, Beilun Wang, Junzhou Luo
Then, we combine the group aggregation and the learnable encodings into a Transformer encoder to capture the semantic information.
no code implementations • 28 Jan 2023 • Anfeng Cheng, Yiding Liu, Weibin Li, Qian Dong, Shuaiqiang Wang, Zhengjie Huang, Shikun Feng, Zhicong Cheng, Dawei Yin
To assess webpage quality from complex DOM tree data, we propose a graph neural network (GNN) based method that extracts rich layout-aware information that implies webpage quality in an end-to-end manner.
1 code implementation • 9 Jan 2023 • Weixin Liu, Xuyi Chen, Jiaxiang Liu, Shikun Feng, Yu Sun, Hao Tian, Hua Wu
Experimental results demonstrate that our method yields a student with much better generalization, significantly outperforming existing baselines and establishing a new state-of-the-art result on in-domain, out-of-domain, and low-resource datasets in the setting of task-agnostic distillation.
2 code implementations • CVPR 2023 • Zhida Feng, Zhenyu Zhang, Xintong Yu, Yewei Fang, Lanxin Li, Xuyi Chen, Yuxiang Lu, Jiaxiang Liu, Weichong Yin, Shikun Feng, Yu Sun, Li Chen, Hao Tian, Hua Wu, Haifeng Wang
Recent progress in diffusion models has revolutionized the popular technology of text-to-image generation.
Ranked #13 on Text-to-Image Generation on MS COCO
2 code implementations • 12 Oct 2022 • Qiming Peng, Yinxu Pan, Wenjin Wang, Bin Luo, Zhenyu Zhang, Zhengjie Huang, Teng Hu, Weichong Yin, Yongfeng Chen, Yin Zhang, Shikun Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
Recent years have witnessed the rise and success of pre-training techniques in visually-rich document understanding.
Ranked #2 on Semantic entity labeling on FUNSD
no code implementations • 18 Sep 2022 • Wenjin Wang, Zhengjie Huang, Bin Luo, Qianglong Chen, Qiming Peng, Yinxu Pan, Weichong Yin, Shikun Feng, Yu Sun, dianhai yu, Yin Zhang
First, a document graph is proposed to model complex relationships among multi-grained multimodal elements, in which salient visual regions are detected by a cluster-based method.
no code implementations • 15 Aug 2022 • Jizhou Huang, Zhengjie Huang, Xiaomin Fang, Shikun Feng, Xuyi Chen, Jiaxiang Liu, Haitao Yuan, Haifeng Wang
In this work, we focus on modeling traffic congestion propagation patterns to improve ETA performance.
1 code implementation • 13 May 2022 • Huijuan Wang, Siming Dai, Weiyue Su, Hui Zhong, Zeyang Fang, Zhengjie Huang, Shikun Feng, Zeyu Chen, Yu Sun, dianhai yu
Notably, it brings an average relative improvement of about 10% to triplet-based embedding methods on OGBL-WikiKG2 and takes only 5%-83% of the time to achieve results comparable to the state-of-the-art GC-OTE.
no code implementations • 23 Mar 2022 • Yang Liu, Jiaxiang Liu, Li Chen, Yuxiang Lu, Shikun Feng, Zhida Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
We argue that two factors, information bottleneck sensitivity and inconsistency between different attention topologies, could affect the performance of the Sparse Transformer.
no code implementations • 17 Mar 2022 • Jizhou Huang, Haifeng Wang, Yibo Sun, Yunsheng Shi, Zhengjie Huang, An Zhuo, Shikun Feng
One of the main reasons for this plateau is the lack of readily available geographic knowledge in generic PTMs.
3 code implementations • 23 Dec 2021 • Shuohuan Wang, Yu Sun, Yang Xiang, Zhihua Wu, Siyu Ding, Weibao Gong, Shikun Feng, Junyuan Shang, Yanbin Zhao, Chao Pang, Jiaxiang Liu, Xuyi Chen, Yuxiang Lu, Weixin Liu, Xi Wang, Yangfan Bai, Qiuliang Chen, Li Zhao, Shiyong Li, Peng Sun, dianhai yu, Yanjun Ma, Hao Tian, Hua Wu, Tian Wu, Wei Zeng, Ge Li, Wen Gao, Haifeng Wang
A unified framework named ERNIE 3.0 was recently proposed for pre-training large-scale knowledge-enhanced models and was used to train a model with 10 billion parameters.
1 code implementation • 2 Dec 2021 • Weibin Li, Mingkai He, Zhengjie Huang, Xianming Wang, Shikun Feng, Weiyue Su, Yu Sun
In recent years, owing to their outstanding performance in graph representation learning, graph neural network (GNN) techniques have gained considerable interest in many real-world scenarios, such as recommender systems and social networks.
no code implementations • 29 Sep 2021 • Yang Liu, Jiaxiang Liu, Yuxiang Lu, Shikun Feng, Yu Sun, Zhida Feng, Li Chen, Hao Tian, Hua Wu, Haifeng Wang
The first factor is information bottleneck sensitivity, which is caused by the key feature of the Sparse Transformer: only a small number of global tokens can attend to all other tokens.
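A minimal sketch of the attention pattern being described, assuming a simple global-plus-sliding-window topology (the exact configuration studied in the paper may differ):

```python
import numpy as np

def sparse_attention_mask(seq_len, num_global=2, window=2):
    """Boolean mask: entry (i, j) is True if position i may attend to position j."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    # Local sliding window: each token attends to its nearby tokens.
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True
    # Global tokens attend to everything and everything attends to them.
    mask[:num_global, :] = True
    mask[:, :num_global] = True
    return mask

print(sparse_attention_mask(8).astype(int))
```

Because every long-range interaction has to be routed through the few global rows and columns, those tokens form the information bottleneck referred to above.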
no code implementations • SEMEVAL 2021 • Chao Pang, Xiaoran Fan, Weiyue Su, Xuyi Chen, Shuohuan Wang, Jiaxiang Liu, Xuan Ouyang, Shikun Feng, Yu Sun
This paper describes our system, which participated in Task 7 of SemEval-2021: Detecting and Rating Humor and Offense.
no code implementations • SEMEVAL 2021 • Zhida Feng, Jiji Tang, Jiaxiang Liu, Weichong Yin, Shikun Feng, Yu Sun, Li Chen
This paper describes our system, which participated in Task 6 of SemEval-2021. The task focuses on multimodal propaganda technique classification and aims to classify a given image-text pair into 22 classes.
2 code implementations • 5 Jul 2021 • Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, Weixin Liu, Zhihua Wu, Weibao Gong, Jianzhong Liang, Zhizhou Shang, Peng Sun, Wei Liu, Xuan Ouyang, dianhai yu, Hao Tian, Hua Wu, Haifeng Wang
We trained the model with 10 billion parameters on a 4TB corpus consisting of plain texts and a large-scale knowledge graph.
no code implementations • 5 Jul 2021 • Weiyue Su, Zeyang Fang, Hui Zhong, Huijuan Wang, Siming Dai, Zhengjie Huang, Yunsheng Shi, Shikun Feng, Zeyu Chen
In addition to the representations, we also use various statistical probabilities among the head entities, the relations and the tail entities for the final prediction.
1 code implementation • 4 Jun 2021 • Weiyue Su, Xuyi Chen, Shikun Feng, Jiaxiang Liu, Weixin Liu, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
Specifically, the first stage, General Distillation, performs distillation with guidance from the pretrained teacher, general data, and a latent distillation loss.
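One common way such a combination of teacher guidance and a latent (hidden-state) loss is implemented, shown as a hedged PyTorch sketch rather than the paper's exact objective (a projection layer is typically needed when student and teacher hidden sizes differ):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,
                      temperature=2.0, alpha=0.5):
    """Combine soft-label distillation on logits with a latent hidden-state loss."""
    # Soft-label loss: KL divergence between temperature-scaled distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Latent loss: match intermediate representations of student and teacher.
    latent_loss = F.mse_loss(student_hidden, teacher_hidden)
    return alpha * soft_loss + (1 - alpha) * latent_loss
```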
1 code implementation • NA 2021 • Weibin Li, Shanzhuo Zhang, Lihang Liu, Zhengjie Huang, Jieqiong Lei, Xiaomin Fang, Shikun Feng, Fan Wang
As graph neural networks have achieved great success in many domains, some studies apply graph neural networks to molecular property prediction and regard each molecule as a graph.
Ranked #6 on Graph Property Prediction on ogbg-molhiv
no code implementations • SEMEVAL 2020 • Zhengjie Huang, Shikun Feng, Weiyue Su, Xuyi Chen, Shuohuan Wang, Jiaxiang Liu, Xuan Ouyang, Yu Sun
This paper describes the system designed by ERNIE Team which achieved the first place in SemEval-2020 Task 10: Emphasis Selection For Written Text in Visual Media.
no code implementations • SEMEVAL 2020 • Jiaxiang Liu, Xuyi Chen, Shikun Feng, Shuohuan Wang, Xuan Ouyang, Yu Sun, Zhengjie Huang, Weiyue Su
Code switching is a linguistic phenomenon that may occur within a multilingual setting where speakers share more than one language.
3 code implementations • 8 Sep 2020 • Yunsheng Shi, Zhengjie Huang, Shikun Feng, Hui Zhong, Wenjin Wang, Yu Sun
Graph neural network (GNN) and label propagation algorithm (LPA) are both message passing algorithms, which have achieved superior performance in semi-supervised classification.
Ranked #1 on Node Property Prediction on ogbn-proteins
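As an illustration of the shared message-passing view (a plain label propagation sketch in NumPy, not the unified model proposed in the paper):

```python
import numpy as np

def label_propagation(adj, labels, train_mask, num_iters=50, alpha=0.9):
    """Propagate one-hot training labels over a graph by repeated neighbor averaging.

    adj:        (n, n) adjacency matrix
    labels:     (n, c) one-hot labels (rows for unlabeled nodes may be zero)
    train_mask: (n,) boolean mask of labeled nodes
    """
    # Symmetrically normalized adjacency, as commonly used by both GNNs and LPA.
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.clip(deg, 1e-12, None))
    norm_adj = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    y = np.where(train_mask[:, None], labels, 0.0)
    out = y.copy()
    for _ in range(num_iters):
        out = alpha * norm_adj @ out + (1 - alpha) * y   # message passing step
        out[train_mask] = labels[train_mask]             # clamp known labels
    return out
```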
3 code implementations • 29 Jul 2019 • Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, Haifeng Wang
Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.
Ranked #1 on Chinese Sentence Pair Classification on LCQMC Dev
Chinese Named Entity Recognition • Chinese Reading Comprehension • +8
17 code implementations • 19 Apr 2019 • Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu
We present a novel language representation model enhanced by knowledge called ERNIE (Enhanced Representation through kNowledge IntEgration).
Ranked #3 on Natural Language Inference on XNLI Chinese Dev
Chinese Named Entity Recognition • Chinese Sentence Pair Classification • +8
no code implementations • 20 May 2016 • Lei Shi, Shikun Feng, Zhifan Zhu
As the complexity of deep neural networks (DNNs) continues to grow to absorb increasing amounts of data, memory and energy consumption have been receiving more and more attention for industrial applications, especially on mobile devices.