Search Results for author: Zhen Wan

Found 9 papers, 4 papers with code

Reformulating Domain Adaptation of Large Language Models as Adapt-Retrieve-Revise

no code implementations • 5 Oct 2023 • Zhen Wan, Yating Zhang, Yexiang Wang, Fei Cheng, Sadao Kurohashi

In the zero-shot setting of four Chinese legal tasks, our method improves accuracy by 33.3% compared to direct generation by GPT-4.

Domain Adaptation

Pushing the Limits of ChatGPT on NLP Tasks

no code implementations • 16 Jun 2023 • Xiaofei Sun, Linfeng Dong, Xiaoya Li, Zhen Wan, Shuhe Wang, Tianwei Zhang, Jiwei Li, Fei Cheng, Lingjuan Lyu, Fei Wu, Guoyin Wang

In this work, we propose a collection of general modules to address these issues, in an attempt to push the limits of ChatGPT on NLP tasks.

Dependency Parsing • Event Extraction • +9

GPT-RE: In-context Learning for Relation Extraction using Large Language Models

1 code implementation • 3 May 2023 • Zhen Wan, Fei Cheng, Zhuoyuan Mao, Qianying Liu, Haiyue Song, Jiwei Li, Sadao Kurohashi

In spite of the potential for ground-breaking achievements offered by large language models (LLMs) (e.g., GPT-3), they still lag significantly behind fully-supervised baselines (e.g., fine-tuned BERT) in relation extraction (RE).

In-Context Learning • Relation • +2

Seeking Diverse Reasoning Logic: Controlled Equation Expression Generation for Solving Math Word Problems

1 code implementation • 21 Sep 2022 • Yibin Shen, Qianying Liu, Zhuoyuan Mao, Zhen Wan, Fei Cheng, Sadao Kurohashi

To solve Math Word Problems, human students leverage diverse reasoning logic that reaches different possible equation solutions.

Math

Relation Extraction with Weighted Contrastive Pre-training on Distant Supervision

no code implementations • 18 May 2022 • Zhen Wan, Fei Cheng, Qianying Liu, Zhuoyuan Mao, Haiyue Song, Sadao Kurohashi

Contrastive pre-training on distant supervision has shown remarkable effectiveness in improving supervised relation extraction tasks.

Contrastive Learning • Relation • +1

When do Contrastive Word Alignments Improve Many-to-many Neural Machine Translation?

no code implementations • Findings (NAACL) 2022 • Zhuoyuan Mao, Chenhui Chu, Raj Dabre, Haiyue Song, Zhen Wan, Sadao Kurohashi

Meanwhile, the contrastive objective can implicitly utilize automatically learned word alignment, which has not been explored in many-to-many NMT.

Machine Translation • NMT • +4

The dynamics of the globular cluster NGC3201 out to the Jacobi radius

no code implementations • 2 Feb 2021 • Zhen Wan, William Oliver, Holger Baumgardt, Geraint Lewis, Mark Gieles, Vincent Hénault-Brunet, Thomas de Boer, Eduardo Balbinot, Gary Da Costa, Dougal Mackey

We also estimate the effect on the velocity dispersion of different amounts of stellar-mass black holes and unbound stars from the tidal tails with varying escape rates and find that these effects cannot explain the difference between the LOS dispersion and the N-body model.

Astrophysics of Galaxies
