1 code implementation • 17 Mar 2024 • Yuzhao Heng, Chunyuan Deng, Yitong Li, Yue Yu, Yinghao Li, Rongzhi Zhang, Chao Zhang
Although Large Language Models (LLMs) exhibit remarkable adaptability across domains, these models often fall short in structured knowledge extraction tasks such as named entity recognition (NER).
1 code implementation • 20 Feb 2024 • Yinghao Li, Rampi Ramprasad, Chao Zhang
It breaks generation into a two-step pipeline: first, LLMs generate answers in natural language as intermediate responses.
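Such a two-step pipeline can be sketched as follows. This is a hypothetical illustration, not the paper's actual implementation: the model call is stubbed with a canned response, and the second step uses a simple regex parser to turn the free-form answer into a structured record.

```python
import re

def answer_step(question: str) -> str:
    # Step 1: an LLM would produce a natural-language intermediate answer.
    # A canned response stands in for the model call here (assumption).
    return "The melting point of the polymer is 210 degrees Celsius."

def parse_step(answer: str) -> dict:
    # Step 2: a deterministic parser extracts the structured field
    # from the intermediate natural-language response.
    match = re.search(r"(\d+(?:\.\d+)?)\s*degrees Celsius", answer)
    return {"melting_point_c": float(match.group(1)) if match else None}

record = parse_step(answer_step("What is the polymer's melting point?"))
print(record)  # {'melting_point_c': 210.0}
```

Separating free-form answering from deterministic parsing lets the LLM stay in its natural output distribution while the downstream consumer still receives well-formed structured data.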
no code implementations • 24 Jan 2024 • Haorui Wang, Rongzhi Zhang, Yinghao Li, Lingkai Kong, Yuchen Zhuang, Xiusi Chen, Chao Zhang
The teacher LLM generates problem-solving instructions and corrective principles based on the student LLM's errors.
no code implementations • 14 Nov 2023 • Huashan Sun, Yixiao Wu, Yinghao Li, Jiawei Li, Yizhe Yang, Yang Gao
In summary, we present the TSST task, a new benchmark for style transfer that emphasizes human-oriented evaluation, exploring and advancing the capabilities of current LLMs.
1 code implementation • 13 Nov 2023 • Jerry Junyang Cheung, Yuchen Zhuang, Yinghao Li, Pranav Shetty, Wantian Zhao, Sanjeev Grampurohit, Rampi Ramprasad, Chao Zhang
Scientific information extraction (SciIE), which aims to automatically extract information from scientific literature, is becoming more important than ever.
1 code implementation • 13 Nov 2023 • Yinghao Li, Haorui Wang, Chao Zhang
Large Language Models (LLMs) have shown remarkable proficiency in language understanding and have been successfully applied to a variety of real-world tasks through task-specific fine-tuning or prompt engineering.
no code implementations • 24 Oct 2023 • Yizhe Yang, Huashan Sun, Jiawei Li, Runheng Liu, Yinghao Li, Yuhang Liu, Heyan Huang, Yang Gao
Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence.
2 code implementations • 14 Jun 2023 • Yinghao Li, Lingkai Kong, Yuanqi Du, Yue Yu, Yuchen Zhuang, Wenhao Mu, Chao Zhang
While some studies have included UQ to improve molecular pre-trained models, the process of selecting suitable backbone and UQ methods for reliable molecular uncertainty estimation remains underexplored.
no code implementations • 23 May 2023 • Yinghao Li, Colin Lockard, Prashant Shiralkar, Chao Zhang
To establish such connections, we propose to extract PTs from the Web pages containing hand-crafted PT recommendations for SIs.
1 code implementation • 26 Oct 2022 • Yuchen Zhuang, Yinghao Li, Jerry Junyang Cheung, Yue Yu, Yingjun Mou, Xiang Chen, Le Song, Chao Zhang
We study the problem of extracting N-ary relation tuples from scientific articles.
1 code implementation • 7 Aug 2022 • Mengyang Liu, Haozheng Luo, Leonard Thong, Yinghao Li, Chao Zhang, Le Song
Unlike commonly used text annotation tools, ours supports the development of weak labels in addition to providing a manual annotation experience.
1 code implementation • 27 May 2022 • Yinghao Li, Le Song, Chao Zhang
Weakly supervised named entity recognition methods train label models to aggregate the token annotations of multiple noisy labeling functions (LFs) without seeing any manually annotated labels.
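The aggregation setting can be illustrated with a simple majority-vote baseline (this is not the paper's label model; the label set and the abstain convention below are assumptions for illustration):

```python
from collections import Counter

ABSTAIN = "O"  # assume labeling functions abstain by emitting "O"

def majority_vote(lf_votes):
    """Aggregate per-token annotations from multiple labeling functions (LFs).

    lf_votes: list of per-LF annotations, each one label per token.
    Returns one label per token; ties and all-abstain fall back to ABSTAIN.
    """
    n_tokens = len(lf_votes[0])
    aggregated = []
    for t in range(n_tokens):
        votes = [lf[t] for lf in lf_votes if lf[t] != ABSTAIN]
        aggregated.append(Counter(votes).most_common(1)[0][0] if votes else ABSTAIN)
    return aggregated

# Three noisy LFs annotating the same four-token sentence:
lfs = [
    ["B-PER", "O", "O",     "B-LOC"],
    ["B-PER", "O", "B-LOC", "B-LOC"],
    ["O",     "O", "O",     "O"],
]
print(majority_vote(lfs))  # ['B-PER', 'O', 'B-LOC', 'B-LOC']
```

A learned label model improves on this baseline by estimating each LF's reliability and the label transition structure instead of counting every vote equally.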
1 code implementation • 23 Sep 2021 • Jieyu Zhang, Yue Yu, Yinghao Li, Yujing Wang, Yaming Yang, Mao Yang, Alexander Ratner
To address these problems, we introduce a benchmark platform, WRENCH, for thorough and standardized evaluation of WS approaches.
2 code implementations • ACL 2021 • Yinghao Li, Pranav Shetty, Lucas Liu, Chao Zhang, Le Song
To address this challenge, we propose a conditional hidden Markov model (CHMM), which can effectively infer true labels from multi-source noisy labels in an unsupervised way.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Wendi Ren, Yinghao Li, Hanting Su, David Kartchner, Cassie Mitchell, Chao Zhang
We study the problem of learning neural text classifiers without using any labeled data, but only easy-to-provide rules as multiple weak supervision sources.
1 code implementation • 5 Oct 2020 • Yinghao Li, Rui Feng, Isaac Rehg, Chao Zhang
We study the problem of using (partial) constituency parse trees as syntactic guidance for controlled text generation.
1 code implementation • 18 Jun 2020 • Yue Yu, Yinghao Li, Jiaming Shen, Hao Feng, Jimeng Sun, Chao Zhang
We propose a self-supervised taxonomy expansion model named STEAM, which leverages natural supervision in the existing taxonomy for expansion.