1 code implementation • COLING 2022 • Baoxin Wang, Xingyi Duan, Dayong Wu, Wanxiang Che, Zhigang Chen, Guoping Hu
Chinese text correction (CTC) focuses on detecting and correcting spelling and grammatical errors in Chinese text.
no code implementations • 21 Sep 2024 • Yuqing Huang, Rongyang Zhang, Xuesong He, Xuyang Zhi, Hao Wang, Xin Li, Feiyang Xu, Deguang Liu, Huadong Liang, Yi Li, Jian Cui, Zimu Liu, Shijin Wang, Guoping Hu, Guiquan Liu, Qi Liu, Defu Lian, Enhong Chen
To this end, we propose ChemEval, which provides a comprehensive assessment of the capabilities of LLMs across a wide range of chemical domain tasks.
no code implementations • 13 Aug 2024 • Dayong Wu, Jiaqi Li, Baoxin Wang, Honghong Zhao, Siyuan Xue, Yanjie Yang, Zhijun Chang, Rui Zhang, Li Qian, Bo Wang, Shijin Wang, Zhixiong Zhang, Guoping Hu
Large language models (LLMs) have shown remarkable achievements across various language tasks. To enhance the performance of LLMs in scientific literature services, we developed the scientific literature LLM (SciLit-LLM) through pre-training and supervised fine-tuning on scientific literature, building upon the iFLYTEK Spark LLM.
no code implementations • 19 Jun 2023 • Wayne Xin Zhao, Kun Zhou, Beichen Zhang, Zheng Gong, Zhipeng Chen, Yuanhang Zhou, Ji-Rong Wen, Jing Sha, Shijin Wang, Cong Liu, Guoping Hu
Specifically, we construct a Mixture-of-Experts (MoE) architecture for modeling mathematical text, so as to capture the common mathematical knowledge across tasks.
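As a rough illustration of the general idea (not the paper's implementation), an MoE layer scores a set of expert networks with a learned gate and routes each token representation to its top-scoring experts; a minimal numpy sketch with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts = 8, 4          # hidden size and number of experts (illustrative)

# Each "expert" is a simple linear map; the gate scores experts per token.
experts = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_experts)]
gate_w = rng.standard_normal((d, n_experts)) * 0.1

def moe_layer(x, top_k=2):
    """Route each row of x to its top-k experts, weighted by a softmax
    over the gate scores of those k experts."""
    logits = x @ gate_w                          # (tokens, n_experts)
    out = np.zeros_like(x)
    for i, row in enumerate(x):
        scores = logits[i]
        top = np.argsort(scores)[-top_k:]        # indices of the k best experts
        w = np.exp(scores[top] - scores[top].max())
        w /= w.sum()                             # renormalised softmax over top-k
        for weight, e in zip(w, top):
            out[i] += weight * (row @ experts[e])
    return out

tokens = rng.standard_normal((5, d))
y = moe_layer(tokens)
print(y.shape)   # (5, 8)
```

In a real MoE model each expert would be a feed-forward block and routing would be batched, but the gate-then-combine structure is the same.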
no code implementations • 21 May 2023 • Jun-Yu Ma, Jia-Chen Gu, Zhen-Hua Ling, Quan Liu, Cong Liu, Guoping Hu
The proposed encoder is capable of interactively capturing complementary information between features and contextual information, to derive language-agnostic representations for various IE tasks.
1 code implementation • 16 May 2023 • Jia-Chen Gu, Zhen-Hua Ling, Quan Liu, Cong Liu, Guoping Hu
Addressing the issue of who says what to whom in multi-party conversations (MPCs) has recently attracted substantial research attention.
no code implementations • 9 Mar 2023 • Caiyuan Chu, Ya Li, Yifan Liu, Jia-Chen Gu, Quan Liu, Yongxin Ge, Guoping Hu
The key to automatic intention induction is that, for any given set of new data, the sentence representations produced by the model can be clearly distinguished across different labels.
2 code implementations • 12 Oct 2021 • Jiaan Wang, Zhixu Li, Qiang Yang, Jianfeng Qu, Zhigang Chen, Qingsheng Liu, Guoping Hu
Sports game summarization aims to generate news articles from live text commentaries.
no code implementations • Joint Conference on Lexical and Computational Semantics 2021 • Ziqing Yang, Yiming Cui, Chenglei Si, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
Adversarial training (AT) as a regularization method has proved its effectiveness on various tasks.
no code implementations • 7 Feb 2021 • Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
To deal with this challenge, most of the existing works consider paragraphs as nodes in a graph and propose graph-based methods to retrieve them.
no code implementations • 13 Nov 2020 • Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
With the blooming of various Pre-trained Language Models (PLMs), Machine Reading Comprehension (MRC) has seen significant improvements on various benchmarks and has even surpassed human performance.
1 code implementation • COLING 2020 • Wentao Ma, Yiming Cui, Chenglei Si, Ting Liu, Shijin Wang, Guoping Hu
Most pre-trained language models (PLMs) construct word representations at the subword level with Byte-Pair Encoding (BPE) or its variations, by which OOV (out-of-vocabulary) words are almost entirely avoided.
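To illustrate why BPE sidesteps OOV words: the vocabulary is built by repeatedly merging the most frequent adjacent symbol pair, so any word can always fall back to smaller known pieces. A toy sketch of one training-time merge step (not any particular library's implementation):

```python
from collections import Counter

def bpe_merge_step(words):
    """One BPE training step: count adjacent symbol pairs across the corpus
    and merge the most frequent pair everywhere it occurs.
    `words` maps a symbol tuple (one entry per word type) to its frequency."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    if not pairs:
        return words, None
    best = max(pairs, key=pairs.get)             # most frequent adjacent pair
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                out.append(symbols[i] + symbols[i + 1]); i += 2
            else:
                out.append(symbols[i]); i += 1
        merged[tuple(out)] = freq
    return merged, best

# Tiny corpus: "low" x5, "lower" x2, with an end-of-word marker.
vocab = {("l", "o", "w", "</w>"): 5, ("l", "o", "w", "e", "r", "</w>"): 2}
vocab, merge = bpe_merge_step(vocab)
print(merge)   # ('l', 'o')
```

Repeating this step a fixed number of times yields the merge table that a BPE tokenizer later applies to segment unseen words into known subwords.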
no code implementations • 1 Oct 2020 • Shaolei Wang, Baoxin Wang, Jiefu Gong, Zhongyuan Wang, Xiao Hu, Xingyi Duan, Zizhuo Shen, Gang Yue, Ruiji Fu, Dayong Wu, Wanxiang Che, Shijin Wang, Guoping Hu, Ting Liu
Grammatical error diagnosis is an important task in natural language processing.
6 code implementations • Findings of the Association for Computational Linguistics 2020 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and consecutive variants have been proposed to further improve the performance of the pre-trained language models.
Ranked #13 on Stock Market Prediction on Astock
1 code implementation • ACL 2020 • Wentao Ma, Yiming Cui, Ting Liu, Dong Wang, Shijin Wang, Guoping Hu
Human conversations contain many types of information, e.g., knowledge, common sense, and language habits.
no code implementations • EMNLP 2020 • Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
We construct a strong baseline model to establish that, with the proper use of pre-trained models, graph structure may not be necessary for multi-hop question answering.
1 code implementation • COLING 2020 • Yiming Cui, Ting Liu, Ziqing Yang, Zhipeng Chen, Wentao Ma, Wanxiang Che, Shijin Wang, Guoping Hu
To add diversity to this area, we propose a new task called Sentence Cloze-style Machine Reading Comprehension (SC-MRC).
1 code implementation • ACL 2020 • Ziqing Yang, Yiming Cui, Zhipeng Chen, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
In this paper, we introduce TextBrewer, an open-source knowledge distillation toolkit designed for natural language processing.
no code implementations • 19 Dec 2019 • Yiming Cui, Wanxiang Che, Wei-Nan Zhang, Ting Liu, Shijin Wang, Guoping Hu
Story Ending Prediction is the task of selecting an appropriate ending for a given story, which requires the machine to understand the story and sometimes calls for commonsense knowledge.
no code implementations • 14 Nov 2019 • Yiming Cui, Wei-Nan Zhang, Wanxiang Che, Ting Liu, Zhipeng Chen, Shijin Wang, Guoping Hu
Recurrent Neural Networks (RNNs) are powerful models for handling sequential data and are widely utilized in various natural language processing tasks.
no code implementations • 9 Nov 2019 • Ziqing Yang, Yiming Cui, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
With virtual adversarial training (VAT), we explore the possibility of improving the RC models with semi-supervised learning and prove that examples from a different task are also beneficial.
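In general terms (this is not the paper's code), VAT finds the small input perturbation that most changes the model's predictive distribution, then trains the model to be robust to it; the direction is usually estimated by power iteration on the gradient of the KL divergence. A numpy sketch on a toy linear classifier, using finite differences in place of backpropagation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy linear classifier standing in for the reading-comprehension model.
rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3)) * 0.5

def vat_direction(x, eps=1e-3, n_iter=3):
    """Power-iteration estimate of the unit input direction that most
    increases KL(p(x) || p(x + r)), with finite-difference gradients."""
    p = softmax(x @ W)
    d = rng.standard_normal(x.shape)
    d /= np.linalg.norm(d)
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for j in range(x.size):
            e_j = np.zeros_like(x); e_j[j] = eps
            grad[j] = (kl(p, softmax((x + eps * d + e_j) @ W))
                       - kl(p, softmax((x + eps * d - e_j) @ W))) / (2 * eps)
        d = grad / (np.linalg.norm(grad) + 1e-12)
    return d

x = rng.standard_normal(4)
r_adv = vat_direction(x)
print(r_adv.shape)   # (4,)
```

The VAT loss then penalizes KL divergence between predictions at `x` and at `x + epsilon * r_adv`, which needs no labels, hence its use for semi-supervised learning.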
no code implementations • IJCNLP 2019 • Ziyue Wang, Baoxin Wang, Xingyi Duan, Dayong Wu, Shijin Wang, Guoping Hu, Ting Liu
To our knowledge, IFlyLegal is the first Chinese legal system that employs up-to-date NLP techniques and caters to the needs of different user groups, such as lawyers, judges, procurators, and clients.
no code implementations • CoNLL 2019 • Wentao Ma, Yiming Cui, Nan Shao, Su He, Wei-Nan Zhang, Ting Liu, Shijin Wang, Guoping Hu
The heart of TripleNet is a novel attention mechanism named triple attention to model the relationships within the triple at four levels.
2 code implementations • IJCNLP 2019 • Xiyuan Yang, Xiaotao Gu, Sheng Lin, Siliang Tang, Yueting Zhuang, Fei Wu, Zhigang Chen, Guoping Hu, Xiang Ren
Despite the recent success of collective entity linking (EL) methods, these "global" inference methods may yield sub-optimal results when the "all-mention coherence" assumption breaks, and they often suffer from high computational cost at the inference stage due to the complex search space.
Ranked #5 on Entity Disambiguation on AIDA-CoNLL
1 code implementation • IJCNLP 2019 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
In this paper, we propose the Cross-Lingual Machine Reading Comprehension (CLMRC) task for languages other than English.
1 code implementation • ACL 2019 • Sheng Lin, Luye Zheng, Bo Chen, Siliang Tang, Yueting Zhuang, Fei Wu, Zhigang Chen, Guoping Hu, Xiang Ren
Fine-grained Entity Typing is a challenging task that suffers from noisy samples extracted via distant supervision.
1 code implementation • 7 Jun 2019 • Qi Liu, Zhenya Huang, Yu Yin, Enhong Chen, Hui Xiong, Yu Su, Guoping Hu
In EERNN, we simply summarize each student's state into an integrated vector and trace it with a recurrent neural network, where we design a bidirectional LSTM to learn the encoding of each exercise's content.
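The bidirectional encoding idea can be illustrated with a simplified stand-in: run one recurrent pass forward and one backward over the exercise's word embeddings and concatenate the final states (the paper uses LSTM cells; a plain tanh recurrence is used here for brevity, with made-up dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_h = 6, 4                                 # embedding and hidden sizes (illustrative)

Wf = rng.standard_normal((d_in + d_h, d_h)) * 0.1   # forward cell weights
Wb = rng.standard_normal((d_in + d_h, d_h)) * 0.1   # backward cell weights

def rnn_pass(xs, W):
    """Run a simple tanh recurrence over the sequence, returning the final state."""
    h = np.zeros(d_h)
    for x in xs:
        h = np.tanh(np.concatenate([x, h]) @ W)
    return h

def bidirectional_encode(xs):
    """Concatenate the final states of a forward and a backward pass,
    standing in for a bidirectional LSTM exercise encoder."""
    return np.concatenate([rnn_pass(xs, Wf), rnn_pass(xs[::-1], Wb)])

words = rng.standard_normal((7, d_in))   # toy word embeddings for one exercise
enc = bidirectional_encode(words)
print(enc.shape)   # (8,)
```

The resulting fixed-length vector is the kind of exercise representation that a student-state RNN can then consume at each step.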
no code implementations • 27 May 2019 • Yu Yin, Zhenya Huang, Enhong Chen, Qi Liu, Fuzheng Zhang, Xing Xie, Guoping Hu
Then, we decide "what-to-write" by developing a GRU based network with the spotlight areas for transcribing the content accordingly.
no code implementations • NAACL 2019 • Bo Chen, Xiaotao Gu, Yu-Feng Hu, Siliang Tang, Guoping Hu, Yueting Zhuang, Xiang Ren
Recently, distant supervision has gained great success on Fine-grained Entity Typing (FET).
no code implementations • 21 Nov 2018 • Zhipeng Chen, Yiming Cui, Wentao Ma, Shijin Wang, Guoping Hu
Machine Reading Comprehension (MRC) with multiple-choice questions requires the machine to read a given passage and select the correct answer among several candidates.
1 code implementation • IJCNLP 2019 • Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu
Machine Reading Comprehension (MRC) has become enormously popular recently and has attracted a lot of attention.
no code implementations • EMNLP 2018 • Lizhen Liu, Xiao Hu, Wei Song, Ruiji Fu, Ting Liu, Guoping Hu
Simile is a special type of metaphor, in which comparators such as "like" and "as" are used to compare two objects.
no code implementations • WS 2018 • Ruiji Fu, Zhengqi Pei, Jiefu Gong, Wei Song, Dechuan Teng, Wanxiang Che, Shijin Wang, Guoping Hu, Ting Liu
This paper describes our system at NLPTEA-2018 Task #1: Chinese Grammatical Error Diagnosis.
no code implementations • 15 Mar 2018 • Zhipeng Chen, Yiming Cui, Wentao Ma, Shijin Wang, Ting Liu, Guoping Hu
This paper describes the system that achieved state-of-the-art results at SemEval-2018 Task 11: Machine Comprehension using Commonsense Knowledge.
2 code implementations • 29 Sep 2017 • Wei-Nan Zhang, Zhigang Chen, Wanxiang Che, Guoping Hu, Ting Liu
In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology.
1 code implementation • LREC 2018 • Yiming Cui, Ting Liu, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu
Machine Reading Comprehension (MRC) has become enormously popular recently and has attracted a lot of attention.
no code implementations • ACL 2017 • Wei Song, Dong Wang, Ruiji Fu, Lizhen Liu, Ting Liu, Guoping Hu
Evaluation results show that discourse modes can be identified automatically with an average F1-score of 0.7.
2 code implementations • ACL 2017 • Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, Guoping Hu
Cloze-style queries are representative problems in reading comprehension.
Ranked #3 on Question Answering on Children's Book Test
no code implementations • COLING 2016 • Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, Guoping Hu
Reading comprehension has seen a boom in recent NLP research.
no code implementations • ACL 2017 • Ting Liu, Yiming Cui, Qingyu Yin, Wei-Nan Zhang, Shijin Wang, Guoping Hu
Most existing approaches for zero pronoun resolution rely heavily on annotated data, which is often released by shared task organizers.