Search Results for author: Tong Zhu

Found 22 papers, 19 papers with code

On the design space between molecular mechanics and machine learning force fields

no code implementations3 Sep 2024 Yuanqing Wang, Kenichiro Takaba, Michael S. Chen, Marcus Wieder, Yuzhi Xu, Tong Zhu, John Z. H. Zhang, Arnav Nagle, Kuang Yu, Xinyan Wang, Daniel J. Cole, Joshua A. Rackers, Kyunghyun Cho, Joe G. Greener, Peter Eastman, Stefano Martiniani, Mark E. Tuckerman

A force field as accurate as quantum mechanics (QM) and as fast as molecular mechanics (MM), with which one can simulate a biomolecular system efficiently enough and meaningfully enough to get quantitative insights, is among the most ardent dreams of biophysicists -- a dream, nevertheless, not to be fulfilled any time soon.

ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM

1 code implementation22 Aug 2024 Zhaochen Su, Jun Zhang, Xiaoye Qu, Tong Zhu, Yanshu Li, Jiashuo Sun, Juntao Li, Min Zhang, Yu Cheng

Only a few studies have explored the conflicts between the inherent knowledge of LLMs and the retrieved contextual knowledge.

Misinformation

Learning to Refuse: Towards Mitigating Privacy Risks in LLMs

1 code implementation14 Jul 2024 Zhenhua Liu, Tong Zhu, Chuanyuan Tan, Wenliang Chen

Large language models (LLMs) exhibit remarkable capabilities in understanding and generating natural language.

Machine Unlearning

LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training

1 code implementation24 Jun 2024 Tong Zhu, Xiaoye Qu, Daize Dong, Jiacheng Ruan, Jingqi Tong, Conghui He, Yu Cheng

Motivated by this limit, we investigate building MoE models from existing dense large language models.

Timo: Towards Better Temporal Reasoning for Language Models

1 code implementation20 Jun 2024 Zhaochen Su, Jun Zhang, Tong Zhu, Xiaoye Qu, Juntao Li, Min Zhang, Yu Cheng

Therefore, we propose a crucial question: Can we build a universal framework to handle a variety of temporal reasoning tasks?

Question Answering

Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts

1 code implementation17 Jun 2024 Tong Zhu, Daize Dong, Xiaoye Qu, Jiacheng Ruan, Wenliang Chen, Yu Cheng

Mixture-of-Experts (MoE) models have shown remarkable capability in instruction tuning, especially when the number of tasks scales.

Probing Language Models for Pre-training Data Detection

1 code implementation3 Jun 2024 Zhenhua Liu, Tong Zhu, Chuanyuan Tan, Haonan Lu, Bing Liu, Wenliang Chen

Large Language Models (LLMs) have shown impressive capabilities, while also raising concerns about data contamination due to privacy issues and leakage of benchmark datasets in the pre-training phase.

Probing Language Models

Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmark

1 code implementation14 May 2024 Mengsong Wu, Tong Zhu, Han Han, Chuanyuan Tan, Xiang Zhang, Wenliang Chen

Therefore, Seal-Tools can serve as a new benchmark to evaluate the tool-calling ability of LLMs.

MoPE: Mixture of Prefix Experts for Zero-Shot Dialogue State Tracking

1 code implementation12 Apr 2024 Tianwen Tang, Tong Zhu, Haodong Liu, Yin Bai, Jia Cheng, Wenliang Chen

Zero-shot dialogue state tracking (DST) transfers knowledge to unseen domains, reducing the cost of annotating new datasets.

Dialogue State Tracking

Controllable and Diverse Data Augmentation with Large Language Model for Low-Resource Open-Domain Dialogue Generation

no code implementations30 Mar 2024 Zhenhua Liu, Tong Zhu, Jianxiang Xiang, Wenliang Chen

To evaluate the efficacy of data augmentation methods for open-domain dialogue, we designed a clustering-based metric to characterize the semantic diversity of the augmented dialogue data.

Data Augmentation · Dialogue Generation +3

Mirror: A Universal Framework for Various Information Extraction Tasks

1 code implementation9 Nov 2023 Tong Zhu, Junfei Ren, Zijian Yu, Mengsong Wu, Guoliang Zhang, Xiaoye Qu, Wenliang Chen, Zhefeng Wang, Baoxing Huai, Min Zhang

Sharing knowledge between information extraction tasks has always been a challenge due to the diverse data formats and task variations.

Machine Reading Comprehension

CED: Catalog Extraction from Documents

1 code implementation28 Apr 2023 Tong Zhu, Guoliang Zhang, Zechang Li, Zijian Yu, Junfei Ren, Mengsong Wu, Zhefeng Wang, Baoxing Huai, Pingfu Chao, Wenliang Chen

To address this problem, we build a large manually annotated corpus, which is the first dataset for the Catalog Extraction from Documents (CED) task.

Catalog Extraction · Sentence

Closed-loop Error Correction Learning Accelerates Experimental Discovery of Thermoelectric Materials

1 code implementation26 Feb 2023 Hitarth Choubisa, Md Azimul Haque, Tong Zhu, Lewei Zeng, Maral Vafaie, Derya Baran, Edward H Sargent

The exploration of thermoelectric materials is challenging considering the large materials space, combined with added exponential degrees of freedom coming from doping and the diversity of synthetic pathways.

Diversity

Efficient Document-level Event Extraction via Pseudo-Trigger-aware Pruned Complete Graph

1 code implementation11 Dec 2021 Tong Zhu, Xiaoye Qu, Wenliang Chen, Zhefeng Wang, Baoxing Huai, Nicholas Jing Yuan, Min Zhang

Most previous studies of document-level event extraction mainly focus on building argument chains in an autoregressive way, which achieves a certain success but is inefficient in both training and inference.

Document-level Event Extraction · Event Extraction

Improving Relation Extraction with Relational Paraphrase Sentences

1 code implementation COLING 2020 Junjie Yu, Tong Zhu, Wenliang Chen, Wei zhang, Min Zhang

In this paper, we propose an alternative approach to improve RE systems via enriching diverse expressions by relational paraphrase sentences.

Relation · Relation Extraction

Complex reaction processes in combustion unraveled by neural network-based molecular dynamics simulation

1 code implementation11 Nov 2020 Jinzhe Zeng, Liqun Cao, Mingyuan Xu, Tong Zhu, John Z. H. Zhang

Combustion is a complex chemical system that involves thousands of chemical reactions and generates hundreds of molecular species and radicals during the process.

Towards Accurate and Consistent Evaluation: A Dataset for Distantly-Supervised Relation Extraction

1 code implementation COLING 2020 Tong Zhu, Haitao Wang, Junjie Yu, Xiabing Zhou, Wenliang Chen, Wei zhang, Min Zhang

The experimental results show that the ranking lists of the comparison systems on the DS-labelled test data and human-annotated test data are different.

Relation · Relation Extraction

Neural Network Based in Silico Simulation of Combustion Reactions

1 code implementation27 Nov 2019 Jinzhe Zeng, Liqun Cao, Mingyuan Xu, Tong Zhu, John ZH Zhang

Through further development, the algorithms in this study can be used to explore and discover reaction mechanisms of many complex reaction systems, such as combustion, synthesis, and heterogeneous catalysis, without any predefined reaction coordinates or elementary reaction steps.

Atomic Forces

ReacNetGenerator: an Automatic Reaction Network Generator for Reactive Molecular Dynamic Simulations

1 code implementation Physical Chemistry Chemical Physics 2019 Jinzhe Zeng, Liqun Cao, Chih-Hao Chin, Haisheng Ren, John Z.H. Zhang, Tong Zhu

However, the analysis of the MD trajectories which contain thousands of species and reaction pathways has become a major obstacle to the application of reactive MD simulation in large-scale systems.

CCKS 2019 Shared Task on Inter-Personal Relationship Extraction

1 code implementation29 Aug 2019 Haitao Wang, Zhengqiu He, Tong Zhu, Hao Shao, Wenliang Chen, Min Zhang

In this paper, we present the task definition, the description of data and the evaluation methodology used during this shared task.

Sentence
