no code implementations • 20 Feb 2024 • Kexin Chen, Hanqun Cao, Junyou Li, Yuyang Du, Menghao Guo, Xin Zeng, Lanqing Li, Jiezhong Qiu, Pheng Ann Heng, Guangyong Chen
The proposed approach marks a significant advancement in automating chemical literature extraction and demonstrates the potential for AI to revolutionize data management and utilization in chemistry.
no code implementations • 6 Dec 2023 • Fei Yang, Shuang Peng, Ning Sun, Fangyu Wang, Yuanyuan Wang, Fu Wu, Jiezhong Qiu, Aimin Pan
Large language models (LLMs) such as GPT-3, OPT, and LLaMA have demonstrated remarkable accuracy in a wide range of tasks.
no code implementations • 16 Nov 2023 • Kexin Chen, Junyou Li, Kunyi Wang, Yuyang Du, Jiahui Yu, Jiamin Lu, Lanqing Li, Jiezhong Qiu, Jianzhang Pan, Yi Huang, Qun Fang, Pheng Ann Heng, Guangyong Chen
Recent AI research points to a promising future for automated chemical reactions within the chemistry community.
1 code implementation • 8 Jun 2023 • Jiaxian Yan, Zhaofeng Ye, ZiYi Yang, Chengqiang Lu, Shengyu Zhang, Qi Liu, Jiezhong Qiu
By introducing multi-task pre-training, which treats the prediction of different affinity labels as different tasks and classifies relative rankings between samples from the same bioassay, MBP learns robust and transferable structural knowledge from our new ChEMBL-Dock dataset with varied and noisy labels.
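A minimal PyTorch sketch of this multi-task idea (module names and shapes here are illustrative assumptions, not the authors' MBP code): each affinity label type gets its own regression head, and a pairwise head classifies which of two same-bioassay samples ranks higher.

```python
import torch
import torch.nn as nn

class MultiTaskAffinityHead(nn.Module):
    """Illustrative MBP-style head: per-label-type regression plus
    pairwise ranking between samples from the same bioassay."""
    def __init__(self, dim, num_label_types):
        super().__init__()
        self.regressors = nn.ModuleList(nn.Linear(dim, 1) for _ in range(num_label_types))
        self.ranker = nn.Linear(2 * dim, 2)  # does sample A rank above sample B?

    def forward(self, h_a, h_b, label_type):
        affinity_a = self.regressors[label_type](h_a)          # regression task
        rank_logits = self.ranker(torch.cat([h_a, h_b], -1))   # ranking task
        return affinity_a, rank_logits

head = MultiTaskAffinityHead(dim=128, num_label_types=3)
h_a, h_b = torch.randn(4, 128), torch.randn(4, 128)  # toy complex embeddings
affinity, rank_logits = head(h_a, h_b, label_type=0)
rank_loss = nn.CrossEntropyLoss()(rank_logits, torch.randint(0, 2, (4,)))
```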
no code implementations • 12 Jan 2023 • Jonathan P. Mailoa, Xin Li, Jiezhong Qiu, Shengyu Zhang
Recently, machine learning methods have been used to propose molecules with desired properties, which is especially useful for exploring large chemical spaces efficiently.
1 code implementation • 3 Jan 2023 • Jonathan P. Mailoa, Zhaofeng Ye, Jiezhong Qiu, Chang-Yu Hsieh, Shengyu Zhang
The generation of small molecule candidate (ligand) binding poses in its target protein pocket is important for computer-aided drug discovery.
1 code implementation • 23 Sep 2022 • Zhongwei Wan, Xin Liu, Benyou Wang, Jiezhong Qiu, Boyu Li, Ting Guo, Guangyong Chen, Yang Wang
The idea is to supplement the GNN-based main supervised recommendation task with the temporal representation via an auxiliary cross-view contrastive learning mechanism.
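A hedged sketch of such an auxiliary cross-view contrastive objective (generic InfoNCE between two views of the same users, not the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def cross_view_infonce(z_main, z_temporal, temperature=0.2):
    """InfoNCE between the GNN-view and temporal-view embeddings of the
    same batch of users: row i of each view is a positive pair, every
    other row serves as a negative."""
    z1 = F.normalize(z_main, dim=-1)
    z2 = F.normalize(z_temporal, dim=-1)
    logits = z1 @ z2.t() / temperature     # (N, N) cosine-similarity matrix
    targets = torch.arange(z1.size(0))     # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

loss = cross_view_infonce(torch.randn(8, 64), torch.randn(8, 64))
```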
1 code implementation • 16 Aug 2022 • Xiao Liu, Shiyu Zhao, Kai Su, Yukuo Cen, Jiezhong Qiu, Mengdi Zhang, Wei Wu, Yuxiao Dong, Jie Tang
In this work, we present the Knowledge Graph Transformer (kgTransformer) with masked pre-training and fine-tuning strategies.
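For intuition, masked pre-training over a knowledge graph can be sketched as BERT-style masking applied to a sampled sequence of entity/relation ids (a simplified assumption about the setup, not kgTransformer's actual pipeline):

```python
import random
import torch

def mask_kg_sequence(token_ids, mask_id, mask_prob=0.15):
    """Replace ~15% of tokens in a sampled KG path (e1, r1, e2, ...) with
    a mask id; the model is trained to recover them. Labels are -100
    where no prediction is required (PyTorch's default ignore_index)."""
    input_ids, labels = [], []
    for tok in token_ids:
        if random.random() < mask_prob:
            input_ids.append(mask_id)
            labels.append(tok)
        else:
            input_ids.append(tok)
            labels.append(-100)
    return torch.tensor(input_ids), torch.tensor(labels)
```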
no code implementations • 2 Dec 2021 • Junsheng Kong, Weizhao Li, Ben Liao, Jiezhong Qiu, Chang-Yu Hsieh, Yi Cai, Jinhui Zhu, Shengyu Zhang
Then, NES efficiently computes the network embedding from this representative subgraph.
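One plausible reading of this two-step recipe, sketched with networkx (the sampling scheme below is an assumption for illustration, not NES's actual procedure): collect nodes via short random walks, induce the representative subgraph, then run any embedding method on it.

```python
import random
import networkx as nx

def representative_subgraph(g, num_seeds=10, walk_len=20):
    """Gather nodes visited by short random walks from random seeds and
    return the induced subgraph as a small, representative sample."""
    visited = set()
    for _ in range(num_seeds):
        v = random.choice(list(g.nodes))
        for _ in range(walk_len):
            visited.add(v)
            nbrs = list(g.neighbors(v))
            if not nbrs:
                break
            v = random.choice(nbrs)
    return g.subgraph(visited).copy()
```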
no code implementations • 8 Oct 2021 • Shengyu Zhang, Kun Kuang, Jiezhong Qiu, Jin Yu, Zhou Zhao, Hongxia Yang, Zhongfei Zhang, Fei Wu
The results demonstrate that our method outperforms various SOTA GNNs for stable prediction on graphs with agnostic distribution shift, including shift caused by node labels and attributes.
1 code implementation • Chemical Engineering Journal 2021 • Xiaorui Wang, Yuquan Li, Jiezhong Qiu, Guangyong Chen, Huanxiang Liu, Benben Liao, Chang-Yu Hsieh, Xiaojun Yao
RetroPrime achieves Top-1 accuracy of 64.8% and 51.4% on the USPTO-50K dataset when the reaction type is known and unknown, respectively.
Ranked #27 on Single-step retrosynthesis on USPTO-50k
no code implementations • 15 Sep 2021 • Junsheng Kong, Weizhao Li, Zeyi Liu, Ben Liao, Jiezhong Qiu, Chang-Yu Hsieh, Yi Cai, Shengyu Zhang
In this work, we show that with merely a small fraction of contexts (Q-contexts) that are typical in the whole corpus (and their mutual information with words), one can construct high-quality word embeddings with negligible errors.
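The construction can be pictured as factorizing a words-by-Q-contexts mutual-information matrix (a toy numpy sketch under that assumption; the paper's actual estimator and error analysis are more involved):

```python
import numpy as np

def embed_from_q_contexts(mi_words_by_q, dim=32):
    """Truncated SVD of a (words x Q-contexts) mutual-information matrix;
    rows of the result are word embeddings."""
    u, s, _ = np.linalg.svd(mi_words_by_q, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])

toy_mi = np.random.rand(1000, 50)  # stand-in for real MI values
vectors = embed_from_q_contexts(toy_mi)
```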
1 code implementation • 17 Aug 2021 • Yijia Xiao, Jiezhong Qiu, Ziang Li, Chang-Yu Hsieh, Jie Tang
The emergence of deep learning models makes it possible to model patterns in large quantities of data.
no code implementations • 14 Jun 2021 • Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved great success and become a milestone in the field of artificial intelligence (AI).
3 code implementations • 24 Mar 2021 • Jiaao He, Jiezhong Qiu, Aohan Zeng, Zhilin Yang, Jidong Zhai, Jie Tang
However, training trillion-scale MoE models requires algorithm and system co-design for a well-tuned, high-performance distributed training system.
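For context, the compute pattern being distributed is a top-k gated mixture-of-experts layer; below is a minimal single-device sketch of that layer (generic MoE, not FastMoE's API, which handles the expert-parallel dispatch across devices):

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Minimal top-k gated MoE: a gate scores experts per token, the top-k
    experts process the token, and outputs are mixed by softmax weights."""
    def __init__(self, dim, num_experts=4, k=2):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.k = k

    def forward(self, x):                        # x: (tokens, dim)
        weights, idx = self.gate(x).topk(self.k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e         # tokens routed to expert e
                if mask.any():
                    w = weights[mask, slot].unsqueeze(-1)
                    out[mask] += w * expert(x[mask])
        return out

y = TopKMoE(dim=16)(torch.randn(8, 16))
```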
8 code implementations • ACL 2022 • Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang
On a wide range of tasks across NLU, conditional, and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25x the parameters of BERT-Large, demonstrating its generalizability to different downstream tasks (see the blank-infilling sketch below).
Ranked #4 on Language Modelling on WikiText-103 (using extra training data)
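A toy sketch of GLM's blank-infilling objective referenced above (single span, illustrative special tokens; the actual implementation samples multiple spans and uses 2D positional encodings):

```python
import random

def blank_infill_example(tokens, span_len=3,
                         mask="[MASK]", start="[S]", end="[E]"):
    """Replace one span with a mask in Part A; Part B autoregressively
    generates the masked span after the corrupted input."""
    i = random.randrange(len(tokens) - span_len)
    span = tokens[i:i + span_len]
    part_a = tokens[:i] + [mask] + tokens[i + span_len:]
    part_b = [start] + span + [end]
    return part_a + part_b, span

seq, target = blank_infill_example("the quick brown fox jumps over".split())
```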
no code implementations • 1 Jan 2021 • Jiezhong Qiu, Yukuo Cen, Qibin Chen, Chang Zhou, Jingren Zhou, Hongxia Yang, Jie Tang
Based on the theoretical analysis, we propose Local Clustering Graph Neural Networks (LCGNN), a GNN learning paradigm that utilizes local clustering to efficiently search for small but compact subgraphs for GNN training and inference.
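A hedged sketch of the local-clustering step (personalized PageRank is one standard local-clustering tool; the authors' exact algorithm may differ):

```python
import networkx as nx

def local_cluster(g, seed, alpha=0.85, size=50):
    """Rank nodes by personalized PageRank from a seed node and keep the
    top scorers as a small, compact subgraph for GNN training/inference."""
    ppr = nx.pagerank(g, alpha=alpha, personalization={seed: 1.0})
    top = sorted(ppr, key=ppr.get, reverse=True)[:size]
    return g.subgraph(top).copy()

sub = local_cluster(nx.karate_club_graph(), seed=0, size=10)
```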
no code implementations • NeurIPS 2020 • Jiezhong Qiu, Chi Wang, Ben Liao, Richard Peng, Jie Tang
Our result gives the first bound on the convergence rate of the co-occurrence matrix and the first sample complexity analysis in graph representation learning.
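The object being analyzed is the co-occurrence matrix estimated from random-walk trajectories; a toy numpy version makes it concrete (this only illustrates the estimator, not the paper's convergence bound):

```python
import numpy as np

def cooccurrence_from_walk(walk, num_nodes, window=2):
    """Count node pairs appearing within `window` steps of each other
    along one trajectory, then normalize to a frequency matrix."""
    C = np.zeros((num_nodes, num_nodes))
    for i, u in enumerate(walk):
        for j in range(i + 1, min(i + window + 1, len(walk))):
            C[u, walk[j]] += 1
            C[walk[j], u] += 1
    return C / C.sum()

C = cooccurrence_from_walk([0, 1, 2, 1, 0, 3, 2], num_nodes=4)
```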
4 code implementations • 17 Jun 2020 • Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, Jie Tang
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Jiezhong Qiu, Hao Ma, Omer Levy, Scott Wen-tau Yih, Sinong Wang, Jie Tang
We present BlockBERT, a lightweight and efficient BERT model for better modeling long-distance dependencies.
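The core trick is restricting attention to blocks of the sequence; a simplified mask sketch follows (BlockBERT additionally permutes which blocks attend to which across heads):

```python
import torch

def block_attention_mask(seq_len, block_size):
    """Block-diagonal attention mask: each token attends only within its
    own block, shrinking the n x n attention footprint by the number
    of blocks."""
    block_id = torch.arange(seq_len) // block_size
    return block_id[:, None] == block_id[None, :]  # (seq_len, seq_len) bool

mask = block_attention_mask(seq_len=8, block_size=4)
```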
1 code implementation • 26 Jun 2019 • Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Chi Wang, Kuansan Wang, Jie Tang
Previous research shows that 1) popular network embedding benchmarks, such as DeepWalk, are in essence implicitly factorizing a matrix with a closed form, and 2) the explicit factorization of such a matrix generates more powerful embeddings than existing methods.
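A dense NetMF-style sketch of that explicit factorization (fine for small graphs; the paper's contribution is making this scale sparsely): build the closed-form matrix, take a truncated element-wise log, and factorize with SVD.

```python
import numpy as np

def netmf_embedding(A, dim=32, window=10, neg=1):
    """A: dense adjacency matrix. Builds M = (vol(G)/(neg*window)) *
    (sum_{r=1..window} P^r) D^{-1} with P = D^{-1} A, applies the
    truncated log, and embeds via SVD."""
    n = A.shape[0]
    d = A.sum(axis=1).astype(float)
    P = A / d[:, None]                  # random-walk transition matrix
    S, Pr = np.zeros((n, n)), np.eye(n)
    for _ in range(window):
        Pr = Pr @ P
        S += Pr
    M = (d.sum() / (neg * window)) * S / d[None, :]
    M = np.log(np.maximum(M, 1.0))      # element-wise truncated log
    u, s, _ = np.linalg.svd(M, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])
```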
1 code implementation • 22 Jun 2019 • Guangyong Chen, Pengfei Chen, Chang-Yu Hsieh, Chee-Kong Lee, Benben Liao, Renjie Liao, Weiwen Liu, Jiezhong Qiu, Qiming Sun, Jie Tang, Richard Zemel, Shengyu Zhang
We introduce a new molecular dataset, named Alchemy, for developing machine learning models useful in chemistry and materials science.
1 code implementation • 15 Jul 2018 • Jiezhong Qiu, Jian Tang, Hao Ma, Yuxiao Dong, Kuansan Wang, Jie Tang
Inspired by the recent success of deep neural networks in a wide range of computing applications, we design an end-to-end framework, DeepInf, to learn users' latent feature representations for predicting social influence.
no code implementations • ICLR 2018 • Jiezhong Qiu, Hao Ma, Yuxiao Dong, Kuansan Wang, Jie Tang
We study the problem of knowledge base (KB) embedding, which is usually addressed through two frameworks: neural KB embedding and tensor decomposition.
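Representative scoring functions for the two frameworks (standard examples for illustration, not necessarily the specific models studied in the paper), for a triple (h, r, t):

```latex
% Neural (translation-style, as in TransE):
f(h, r, t) = -\lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \rVert_2
% Tensor decomposition (bilinear, as in RESCAL):
f(h, r, t) = \mathbf{h}^{\top} \mathbf{W}_r \, \mathbf{t}
```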
4 code implementations • 9 Oct 2017 • Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, Jie Tang
This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.
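The paper's central closed form: DeepWalk with window size T and b negative samples implicitly factorizes

```latex
M = \log\!\left( \frac{\operatorname{vol}(G)}{bT}
    \left( \sum_{r=1}^{T} \left(D^{-1} A\right)^{r} \right) D^{-1} \right),
```

where A is the adjacency matrix, D the degree matrix, and vol(G) the sum of all edge weights.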