no code implementations • EMNLP 2020 • Qinzhuo Wu, Qi Zhang, Jinlan Fu, Xuanjing Huang
With the advancements in natural language processing tasks, math word problem solving has received increasing attention.
no code implementations • 24 Jul 2024 • Anhao Zhao, Fanghua Ye, Jinlan Fu, Xiaoyu Shen
Large language models (LLMs) exhibit remarkable in-context learning (ICL) capabilities.
1 code implementation • 21 Jun 2024 • Siyin Wang, Xingsong Ye, Qinyuan Cheng, Junwen Duan, ShiMin Li, Jinlan Fu, Xipeng Qiu, Xuanjing Huang
As Artificial General Intelligence (AGI) becomes increasingly integrated into various facets of human life, ensuring the safety and ethical alignment of such systems is paramount.
1 code implementation • 17 Jun 2024 • Yongting Zhang, Lu Chen, Guodong Zheng, Yifeng Gao, Rui Zheng, Jinlan Fu, Zhenfei Yin, Senjie Jin, Yu Qiao, Xuanjing Huang, Feng Zhao, Tao Gui, Jing Shao
To address these limitations, we propose a Safety Preference Alignment dataset for Vision Language Models named SPA-VL.
no code implementations • 7 Mar 2024 • Lin Xu, Ningxin Peng, Daquan Zhou, See-Kiong Ng, Jinlan Fu
Dialogue state tracking (DST) aims to record user queries and goals during a conversational interaction by maintaining a predefined set of slots and their corresponding values.
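The slot-value bookkeeping that DST performs can be sketched in a few lines; the slot names and overwrite-on-update rule below are illustrative assumptions, not details from the paper:

```python
# Minimal DST sketch: the dialogue state is a predefined set of slots
# whose values are filled in and revised turn by turn.
from typing import Dict, Optional

SLOTS = ["restaurant-food", "restaurant-area", "restaurant-pricerange"]

def update_state(state: Dict[str, Optional[str]],
                 turn_slots: Dict[str, str]) -> Dict[str, Optional[str]]:
    """Merge slot values extracted from the current turn into the state."""
    new_state = dict(state)
    for slot, value in turn_slots.items():
        if slot in new_state:
            new_state[slot] = value  # a later turn overwrites an earlier value
    return new_state

state = {slot: None for slot in SLOTS}          # empty state at dialogue start
state = update_state(state, {"restaurant-food": "italian"})
state = update_state(state, {"restaurant-area": "centre"})
```

A real tracker replaces the hand-written `turn_slots` dictionaries with a model that extracts them from each user utterance; the state structure stays the same.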
no code implementations • 4 Mar 2024 • Lin Xu, Qixian Zhou, Jinlan Fu, See-Kiong Ng
Knowledge-grounded dialogue systems aim to generate coherent and engaging responses based on the dialogue contexts and selected external knowledge.
1 code implementation • 22 Feb 2024 • Jinlan Fu, Shenzhen Huangfu, Hang Yan, See-Kiong Ng, Xipeng Qiu
Large Language Models (LLMs) have recently showcased remarkable generalizability in various domains.
no code implementations • 17 Feb 2024 • Siyin Wang, ShiMin Li, Tianxiang Sun, Jinlan Fu, Qinyuan Cheng, Jiasheng Ye, Junjie Ye, Xipeng Qiu, Xuanjing Huang
HAG extends the current paradigm of text generation, highlighting the feasibility of endowing LLMs with self-regulated decoding strategies.
no code implementations • 26 Jan 2024 • Chaochao Lu, Chen Qian, Guodong Zheng, Hongxing Fan, Hongzhi Gao, Jie Zhang, Jing Shao, Jingyi Deng, Jinlan Fu, Kexin Huang, Kunchang Li, Lijun Li, LiMin Wang, Lu Sheng, Meiqi Chen, Ming Zhang, Qibing Ren, Sirui Chen, Tao Gui, Wanli Ouyang, Yali Wang, Yan Teng, Yaru Wang, Yi Wang, Yinan He, Yingchun Wang, Yixu Wang, Yongting Zhang, Yu Qiao, Yujiong Shen, Yurong Mou, Yuxi Chen, Zaibin Zhang, Zhelun Shi, Zhenfei Yin, Zhipin Wang
Multi-modal Large Language Models (MLLMs) have shown impressive abilities in generating reasonable responses to multi-modal content.
1 code implementation • 28 Dec 2023 • Yang Xiao, Yi Cheng, Jinlan Fu, Jiashuo Wang, Wenjie Li, PengFei Liu
In recent years, AI has demonstrated remarkable capabilities in simulating human behaviors, particularly in systems built on large language models (LLMs).
no code implementations • 28 Dec 2023 • Mingtao Yang, See-Kiong Ng, Jinlan Fu
Furthermore, to glean a nuanced understanding of OmniDialog's strengths and potential pitfalls, we designed a fine-grained analysis framework for dialogue-centric tasks.
3 code implementations • 8 Feb 2023 • Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, PengFei Liu
Generative Artificial Intelligence (AI) has enabled the development of sophisticated models capable of producing high-caliber text, images, and other outputs by leveraging large pre-trained models.
no code implementations • COLING 2022 • Lin Xu, Qixian Zhou, Jinlan Fu, Min-Yen Kan, See-Kiong Ng
Knowledge-grounded dialog systems need to incorporate smooth transitions among knowledge selected for generating responses, to ensure that dialog flows naturally.
no code implementations • NAACL 2022 • Yang Xiao, Jinlan Fu, See-Kiong Ng, PengFei Liu
In this paper, we ask the research question of whether all the datasets in the benchmark are necessary.
1 code implementation • 29 Apr 2022 • Jinlan Fu, See-Kiong Ng, PengFei Liu
This paper aims for a potential architectural improvement for multilingual learning and asks: Can different tasks from different languages be modeled in a monolithic framework, i.e., without any task/language-specific module?
no code implementations • ACL 2022 • Yang Xiao, Jinlan Fu, Weizhe Yuan, Vijay Viswanathan, Zhoumianze Liu, Yixin Liu, Graham Neubig, PengFei Liu
Despite data's crucial role in machine learning, most existing tools and research tend to focus on systems on top of existing data rather than how to interpret and manipulate data.
1 code implementation • EMNLP 2021 • Zhiheng Yan, Chong Zhang, Jinlan Fu, Qi Zhang, Zhongyu Wei
In our encoder, we leverage two gates, an entity gate and a relation gate, to segment neurons into two task partitions and one shared partition.
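The gating idea can be sketched as follows: per-neuron gate values assign each unit of a shared encoding to the entity partition, the relation partition, or a shared partition. The dimensions, thresholds, and random gates here are illustrative assumptions, not the paper's learned mechanism:

```python
# Sketch of partitioning encoder neurons with an entity gate and a
# relation gate (stand-in random values; a real model learns these).
import random

random.seed(0)
H = 8  # hidden size of the shared encoder (illustrative)
hidden = [random.gauss(0, 1) for _ in range(H)]
entity_gate = [random.random() for _ in range(H)]    # one value per neuron
relation_gate = [random.random() for _ in range(H)]

def partition(i: int) -> str:
    """Assign neuron i to the entity, relation, or shared partition."""
    if entity_gate[i] > 0.5 and relation_gate[i] <= 0.5:
        return "entity"
    if relation_gate[i] > 0.5 and entity_gate[i] <= 0.5:
        return "relation"
    return "shared"

parts = [partition(i) for i in range(H)]
# Each task sees only its own partition (plus, in the paper, the shared one).
entity_repr = [h if p == "entity" else 0.0 for h, p in zip(hidden, parts)]
relation_repr = [h if p == "relation" else 0.0 for h, p in zip(hidden, parts)]
```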
Ranked #1 on Relation Extraction on ADE Corpus
1 code implementation • 28 Jul 2021 • PengFei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig
This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning".
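The prompting workflow the survey describes can be sketched as: wrap the input in a template with a mask slot, have a masked language model fill the slot, and map the predicted word back to a label via a verbalizer. The template and verbalizer below are illustrative assumptions, not taken from the survey:

```python
# Minimal sketch of prompt-based learning for sentiment classification.
TEMPLATE = "{text} Overall, it was a [MASK] movie."
VERBALIZER = {"great": "positive", "terrible": "negative"}

def make_prompt(text: str) -> str:
    """Slot the input into the cloze template."""
    return TEMPLATE.format(text=text)

def predict_label(filled_word: str) -> str:
    # In practice one compares the LM's probabilities over the verbalizer
    # words at the [MASK] position; here we simply look the word up.
    return VERBALIZER.get(filled_word, "unknown")

prompt = make_prompt("A gripping, beautifully shot film.")
```

The point of the paradigm is that the pre-trained LM is queried directly through the template, rather than fine-tuned with a new task-specific head.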
1 code implementation • ACL 2021 • Jinlan Fu, Xuanjing Huang, PengFei Liu
Recent years have seen the paradigm shift of Named Entity Recognition (NER) systems from sequence labeling to span prediction.
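The contrast between the two paradigms can be sketched as follows: span prediction enumerates candidate spans and classifies each one, instead of tagging tokens left to right. The toy scorer below is a hypothetical stand-in for the learned span classifier:

```python
# Minimal sketch of span-prediction NER.
from typing import List, Tuple

def enumerate_spans(tokens: List[str], max_len: int = 3) -> List[Tuple[int, int]]:
    """All (start, end) spans up to max_len tokens, end exclusive."""
    spans = []
    for start in range(len(tokens)):
        for end in range(start + 1, min(start + max_len, len(tokens)) + 1):
            spans.append((start, end))
    return spans

def classify_span(tokens: List[str], span: Tuple[int, int]) -> str:
    # Hypothetical lookup-based scorer standing in for a neural classifier.
    text = " ".join(tokens[span[0]:span[1]])
    return "PER" if text == "Jinlan Fu" else "O"

tokens = ["Jinlan", "Fu", "studies", "NER"]
entities = [(s, classify_span(tokens, s)) for s in enumerate_spans(tokens)
            if classify_span(tokens, s) != "O"]
```

Because whole spans are scored directly, nested or overlapping entities are natural to represent, which sequence labeling handles awkwardly.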
1 code implementation • EMNLP 2021 • Sebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, PengFei Liu, Junjie Hu, Dan Garrette, Graham Neubig, Melvin Johnson
While a sizeable gap to human-level performance remains, improvements have been easier to achieve in some tasks than in others.
1 code implementation • ACL 2021 • PengFei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaicheng Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, Zi-Yi Dou, Graham Neubig
In this paper, we present a new conceptualization and implementation of NLP evaluation: the ExplainaBoard, which, in addition to inheriting the functionality of the standard leaderboard, allows researchers to (i) diagnose strengths and weaknesses of a single system (e.g., what is the best-performing system bad at?)
no code implementations • NAACL 2021 • Jinlan Fu, Liangjing Feng, Qi Zhang, Xuanjing Huang, PengFei Liu
The development of neural networks and pretraining techniques has spawned many sentence-level tagging systems that achieved superior performance on typical benchmarks.
1 code implementation • ACL 2021 • Tao Gui, Xiao Wang, Qi Zhang, Qin Liu, Yicheng Zou, Xin Zhou, Rui Zheng, Chong Zhang, Qinzhuo Wu, Jiacheng Ye, Zexiong Pang, Yongxin Zhang, Zhengyan Li, Ruotian Ma, Zichu Fei, Ruijian Cai, Jun Zhao, Xingwu Hu, Zhiheng Yan, Yiding Tan, Yuan Hu, Qiyuan Bian, Zhihua Liu, Bolin Zhu, Shan Qin, Xiaoyu Xing, Jinlan Fu, Yue Zhang, Minlong Peng, Xiaoqing Zheng, Yaqian Zhou, Zhongyu Wei, Xipeng Qiu, Xuanjing Huang
To guarantee user acceptability, all the text transformations are linguistically based, and we provide a human evaluation for each one.
1 code implementation • EACL 2021 • Zihuiwen Ye, PengFei Liu, Jinlan Fu, Graham Neubig
We perform an analysis of four types of NLP tasks, demonstrating both the feasibility of fine-grained performance prediction and the necessity of performing reliability analysis for performance prediction methods in the future.
2 code implementations • EMNLP 2020 • Jinlan Fu, PengFei Liu, Graham Neubig
With the proliferation of models for natural language processing tasks, it has become increasingly difficult to understand the differences between models and their relative merits.
1 code implementation • EMNLP 2020 • Jinlan Fu, PengFei Liu, Qi Zhang, Xuanjing Huang
The performance of Chinese Word Segmentation (CWS) systems has gradually reached a plateau with the rapid development of deep neural networks, especially the successful use of large pre-trained models.
1 code implementation • 12 Jan 2020 • Jinlan Fu, PengFei Liu, Qi Zhang, Xuanjing Huang
While neural network-based models have achieved impressive performance on a large body of NLP tasks, the generalization behavior of different models remains poorly understood: Does this excellent performance imply a perfect generalization model, or are there still some limitations?
no code implementations • IJCNLP 2019 • Tao Gui, Yicheng Zou, Qi Zhang, Minlong Peng, Jinlan Fu, Zhongyu Wei, Xuanjing Huang
Recurrent neural networks (RNNs) that sequentially track character and word information have achieved great success in Chinese named entity recognition (NER).
Ranked #13 on Chinese Named Entity Recognition on OntoNotes 4
no code implementations • 25 Sep 2019 • Jinlan Fu, PengFei Liu, Xuanjing Huang
With the proliferation of models for natural language processing (NLP) tasks, it has become increasingly difficult to understand the differences between models and their relative merits.
1 code implementation • ACL 2019 • Minlong Peng, Xiaoyu Xing, Qi Zhang, Jinlan Fu, Xuanjing Huang
In this work, we explore the way to perform named entity recognition (NER) using only unlabeled data and named entity dictionaries.
1 code implementation • 29 May 2019 • Minlong Peng, Qi Zhang, Xiaoyu Xing, Tao Gui, Jinlan Fu, Xuanjing Huang
However, representations of unseen or rare words trained on the end task are usually too poor to yield appreciable performance.