Search Results for author: Xiaozhuan Liang

Found 11 papers, 10 papers with code

BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning

1 code implementation • 27 Feb 2024 • Qizhi Pei, Lijun Wu, Kaiyuan Gao, Xiaozhuan Liang, Yin Fang, Jinhua Zhu, Shufang Xie, Tao Qin, Rui Yan

However, previous efforts like BioT5 faced challenges in generalizing across diverse tasks and lacked a nuanced understanding of molecular structures, particularly in their textual representations (e.g., IUPAC).

Forward reaction prediction • Molecule Captioning +3

InstructEdit: Instruction-based Knowledge Editing for Large Language Models

1 code implementation • 25 Feb 2024 • Bozhong Tian, Siyuan Cheng, Xiaozhuan Liang, Ningyu Zhang, Yi Hu, Kouying Xue, Yanjie Gou, Xi Chen, Huajun Chen

Knowledge editing for large language models can offer an efficient solution to alter a model's behavior without negatively impacting the overall performance.

knowledge editing

Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models

1 code implementation • 13 Jun 2023 • Yin Fang, Xiaozhuan Liang, Ningyu Zhang, Kangwei Liu, Rui Huang, Zhuo Chen, Xiaohui Fan, Huajun Chen

Large Language Models (LLMs), with their remarkable task-handling capabilities and innovative outputs, have catalyzed significant advancements across a spectrum of fields.

Catalytic activity prediction • Chemical-Disease Interaction Extraction +14

Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning

2 code implementations • 29 May 2022 • Xiang Chen, Lei Li, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen

Specifically, vanilla prompt learning may struggle to utilize atypical instances by rote memorization during fully-supervised training, or overfit shallow patterns with low-shot data.

Few-Shot Text Classification • Memorization +5

Multi-modal Protein Knowledge Graph Construction and Applications

no code implementations • 27 May 2022 • Siyuan Cheng, Xiaozhuan Liang, Zhen Bi, Huajun Chen, Ningyu Zhang

Existing data-centric methods for protein science generally cannot sufficiently capture and leverage biology knowledge, which may be crucial for many protein tasks.

graph construction

Contrastive Demonstration Tuning for Pre-trained Language Models

1 code implementation • 9 Apr 2022 • Xiaozhuan Liang, Ningyu Zhang, Siyuan Cheng, Zhenru Zhang, Chuanqi Tan, Huajun Chen

Pretrained language models can be effectively stimulated by textual prompts or demonstrations, especially in low-data scenarios.

OntoProtein: Protein Pretraining With Gene Ontology Embedding

1 code implementation • ICLR 2022 • Ningyu Zhang, Zhen Bi, Xiaozhuan Liang, Siyuan Cheng, Haosen Hong, Shumin Deng, Jiazhang Lian, Qiang Zhang, Huajun Chen

We construct a novel large-scale knowledge graph that consists of GO and its related proteins, in which all nodes are described by gene annotation texts or protein sequences.

Contrastive Learning • Knowledge Graphs +2
