Search Results for author: Guoping Hu

Found 41 papers, 17 papers with code

SparkRA: A Retrieval-Augmented Knowledge Service System Based on Spark Large Language Model

no code implementations 13 Aug 2024 Dayong Wu, Jiaqi Li, Baoxin Wang, Honghong Zhao, Siyuan Xue, Yanjie Yang, Zhijun Chang, Rui Zhang, Li Qian, Bo Wang, Shijin Wang, Zhixiong Zhang, Guoping Hu

Large language models (LLMs) have shown remarkable achievements across various language tasks. To enhance the performance of LLMs in scientific literature services, we developed the scientific literature LLM (SciLit-LLM) through pre-training and supervised fine-tuning on scientific literature, building upon the iFLYTEK Spark LLM.

Language Modelling Large Language Model +1
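
Since the snippet names retrieval augmentation without detailing the pipeline, here is a minimal, generic retrieve-then-generate sketch in Python. The bag-of-words cosine scoring, the toy corpus, and the prompt template are illustrative stand-ins for SparkRA's learned retriever and the Spark LLM, not the paper's actual system.

```python
# Generic retrieve-then-generate loop; a toy sketch, not SparkRA's pipeline.
# Bag-of-words cosine similarity stands in for a learned dense retriever.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    q = Counter(query.lower().split())
    ranked = sorted(corpus, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    # In a real system this prompt would be sent to the LLM for generation.
    return f"Answer using the passages below.\n{context}\nQuestion: {query}"

corpus = [
    "Spark is a large language model developed by iFLYTEK.",
    "BERT constructs word representations at the subword level.",
    "Multi-hop QA requires retrieving several supporting paragraphs.",
]
print(build_prompt("What is Spark?", retrieve("What is Spark?", corpus, k=2)))
```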

SHINE: Syntax-augmented Hierarchical Interactive Encoder for Zero-shot Cross-lingual Information Extraction

no code implementations 21 May 2023 Jun-Yu Ma, Jia-Chen Gu, Zhen-Hua Ling, Quan Liu, Cong Liu, Guoping Hu

The proposed encoder interactively captures complementary information between syntactic features and contextual information, deriving language-agnostic representations for various IE tasks.

GIFT: Graph-Induced Fine-Tuning for Multi-Party Conversation Understanding

1 code implementation 16 May 2023 Jia-Chen Gu, Zhen-Hua Ling, Quan Liu, Cong Liu, Guoping Hu

Addressing the issue of who says what to whom in multi-party conversations (MPCs) has recently attracted considerable research attention.

Speaker Identification

Multi-Stage Coarse-to-Fine Contrastive Learning for Conversation Intent Induction

no code implementations 9 Mar 2023 Caiyuan Chu, Ya Li, Yifan Liu, Jia-Chen Gu, Quan Liu, Yongxin Ge, Guoping Hu

The key to automatic intent induction is that, for any given set of new data, the sentence representations obtained by the model can be well separated across different labels.

Clustering Contrastive Learning +3
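
The separation property described in this snippet is what a contrastive objective optimizes. Below is a minimal InfoNCE-style loss over sentence embeddings with in-batch negatives; this is a generic sketch, not the paper's multi-stage coarse-to-fine procedure, and the dimensions and temperature are illustrative.

```python
# Minimal InfoNCE contrastive loss with in-batch negatives; a generic sketch,
# not the paper's multi-stage coarse-to-fine training.
import torch
import torch.nn.functional as F

def info_nce(anchors: torch.Tensor, positives: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    # anchors, positives: (batch, dim); row i of positives pairs with anchor i.
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / temperature                      # pairwise similarities
    targets = torch.arange(a.size(0), device=a.device)  # diagonal = positive pair
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```

Pulling each anchor toward its positive while pushing it away from the other in-batch examples is what makes representations of differently labeled sentences separable downstream.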

Memory Augmented Sequential Paragraph Retrieval for Multi-hop Question Answering

no code implementations 7 Feb 2021 Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu

To deal with this challenge, most existing works treat paragraphs as nodes in a graph and propose graph-based methods to retrieve them.

Information Retrieval Multi-hop Question Answering +2

Unsupervised Explanation Generation for Machine Reading Comprehension

no code implementations 13 Nov 2020 Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu

With the blooming of various Pre-trained Language Models (PLMs), Machine Reading Comprehension (MRC) has seen significant improvements on various benchmarks and has even surpassed human performance.

Explanation Generation Machine Reading Comprehension +1

CharBERT: Character-aware Pre-trained Language Model

1 code implementation COLING 2020 Wentao Ma, Yiming Cui, Chenglei Si, Ting Liu, Shijin Wang, Guoping Hu

Most pre-trained language models (PLMs) construct word representations at the subword level with Byte-Pair Encoding (BPE) or its variations, by which OOV (out-of-vocabulary) words are almost entirely avoided.

Language Modelling Question Answering +3
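
To see why subword vocabularies almost eliminate OOV words, consider a toy greedy longest-match segmentation (closer in spirit to WordPiece than to merge-order BPE): any lowercase word decomposes into known subwords because single characters are always in the vocabulary. The vocabulary below is invented for illustration.

```python
# Toy greedy subword segmentation; the single-character entries guarantee
# every lowercase word can be segmented, which is why OOV is almost avoided.
vocab = {"un", "afford", "able"} | set("abcdefghijklmnopqrstuvwxyz")

def segment(word: str) -> list[str]:
    pieces, i = [], 0
    while i < len(word):
        # greedily take the longest known subword starting at position i
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
    return pieces

print(segment("unaffordable"))  # ['un', 'afford', 'able']
```

CharBERT's concern, as the title suggests, is the flip side of this: such splits scatter a word's characters across subwords, motivating a character-aware channel alongside the subword one.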

Revisiting Pre-Trained Models for Chinese Natural Language Processing

6 code implementations Findings of the Association for Computational Linguistics 2020 Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu

Bidirectional Encoder Representations from Transformers (BERT) has shown remarkable improvements across various NLP tasks, and successive variants have been proposed to further improve the performance of pre-trained language models.

Language Modelling Stock Market Prediction

Is Graph Structure Necessary for Multi-hop Question Answering?

no code implementations EMNLP 2020 Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu

We construct a strong baseline model to establish that, with the proper use of pre-trained models, graph structure may not be necessary for multi-hop question answering.

Graph Attention Multi-hop Question Answering +1

Discriminative Sentence Modeling for Story Ending Prediction

no code implementations 19 Dec 2019 Yiming Cui, Wanxiang Che, Wei-Nan Zhang, Ting Liu, Shijin Wang, Guoping Hu

Story Ending Prediction is the task of selecting an appropriate ending for a given story, which requires the machine to understand the story and sometimes demands commonsense knowledge.

Cloze Test Sentence

Contextual Recurrent Units for Cloze-style Reading Comprehension

no code implementations 14 Nov 2019 Yiming Cui, Wei-Nan Zhang, Wanxiang Che, Ting Liu, Zhipeng Chen, Shijin Wang, Guoping Hu

Recurrent Neural Networks (RNNs) are powerful models for handling sequential data and are widely used in various natural language processing tasks.

Reading Comprehension Sentence +2

Improving Machine Reading Comprehension via Adversarial Training

no code implementations 9 Nov 2019 Ziqing Yang, Yiming Cui, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu

With virtual adversarial training (VAT), we explore the possibility of improving RC models with semi-supervised learning and show that examples from a different task are also beneficial.

General Classification Image Classification +3
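
The VAT named in this snippet differs from plain adversarial training, but the shared mechanics are easiest to see in the simpler, supervised variant: perturb the embeddings along the loss gradient and train on both the clean and the perturbed views. The sketch below is that FGM-style supervised variant with an invented toy model; VAT proper perturbs toward the model's own predictions and therefore also works on unlabeled examples.

```python
# FGM-style adversarial training on embeddings; a supervised sketch, not the
# paper's exact virtual adversarial training recipe.
import torch
import torch.nn as nn

def adversarial_loss(model, emb, labels, loss_fn, epsilon=1.0):
    emb = emb.detach().requires_grad_(True)
    clean = loss_fn(model(emb), labels)
    # the loss gradient w.r.t. the embeddings gives the worst-case direction
    grad, = torch.autograd.grad(clean, emb, retain_graph=True)
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    adv = loss_fn(model(emb + delta.detach()), labels)
    return clean + adv

model = nn.Linear(16, 2)  # toy classifier standing in for an RC model head
emb, labels = torch.randn(4, 16), torch.randint(0, 2, (4,))
adversarial_loss(model, emb, labels, nn.CrossEntropyLoss()).backward()
```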

IFlyLegal: A Chinese Legal System for Consultation, Law Searching, and Document Analysis

no code implementations IJCNLP 2019 Ziyue Wang, Baoxin Wang, Xingyi Duan, Dayong Wu, Shijin Wang, Guoping Hu, Ting Liu

To our knowledge, IFlyLegal is the first Chinese legal system that employs up-to-date NLP techniques and caters to the needs of different user groups, such as lawyers, judges, procurators, and clients.

Natural Language Inference Question Answering +1

Learning Dynamic Context Augmentation for Global Entity Linking

2 code implementations IJCNLP 2019 Xiyuan Yang, Xiaotao Gu, Sheng Lin, Siliang Tang, Yueting Zhuang, Fei Wu, Zhigang Chen, Guoping Hu, Xiang Ren

Despite the recent success of collective entity linking (EL) methods, these "global" inference methods may yield sub-optimal results when the "all-mention coherence" assumption breaks, and they often suffer from high computational cost at the inference stage due to the complex search space.

Entity Disambiguation Entity Linking +2

EKT: Exercise-aware Knowledge Tracing for Student Performance Prediction

1 code implementation 7 Jun 2019 Qi Liu, Zhenya Huang, Yu Yin, Enhong Chen, Hui Xiong, Yu Su, Guoping Hu

In EERNN, we summarize each student's state into an integrated vector and trace it with a recurrent neural network, in which a bidirectional LSTM learns an encoding of each exercise's content.

Knowledge Tracing
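
A minimal sketch of the EERNN pattern this snippet describes: a bidirectional LSTM encodes each exercise's text, and a second recurrent network traces the student state across the exercise sequence. All dimensions, the mean-pooling of word states, the choice of a GRU tracer, and the concatenated correctness flag are assumptions for illustration, not the paper's exact design.

```python
# EERNN-style sketch: encode exercise text with a BiLSTM, trace student state
# with a second RNN, predict the probability of answering correctly.
import torch
import torch.nn as nn

class ExerciseTracer(nn.Module):
    def __init__(self, vocab_size=1000, emb=32, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        # the tracer consumes [exercise encoding ; correctness flag]
        self.tracer = nn.GRU(2 * hid + 1, hid, batch_first=True)
        self.predict = nn.Linear(hid, 1)

    def forward(self, exercises, correct):
        # exercises: (batch, seq, words) token ids; correct: (batch, seq) in {0, 1}
        b, s, w = exercises.shape
        words, _ = self.encoder(self.embed(exercises.view(b * s, w)))
        enc = words.mean(dim=1).view(b, s, -1)          # mean-pool each exercise
        x = torch.cat([enc, correct.unsqueeze(-1).float()], dim=-1)
        states, _ = self.tracer(x)                      # student state per step
        return torch.sigmoid(self.predict(states)).squeeze(-1)

model = ExerciseTracer()
probs = model(torch.randint(0, 1000, (2, 5, 12)), torch.randint(0, 2, (2, 5)))
```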

Transcribing Content from Structural Images with Spotlight Mechanism

no code implementations 27 May 2019 Yu Yin, Zhenya Huang, Enhong Chen, Qi Liu, Fuzheng Zhang, Xing Xie, Guoping Hu

Then, we decide "what-to-write" with a GRU-based network that transcribes the content from the spotlight areas.
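
One way to picture the "where-to-look, then what-to-write" split is a single attention-then-decode step, sketched below: attention over image-region features yields a spotlight glimpse, which drives one GRU decoder step. The shapes and the additive scoring network are invented; the paper's spotlight mechanism is more structured than this generic attention.

```python
# Generic attention-then-decode step; an illustrative sketch, not the paper's
# actual spotlight mechanism.
import torch
import torch.nn as nn
import torch.nn.functional as F

regions = torch.randn(1, 49, 256)   # e.g. a 7x7 grid of CNN region features
hidden = torch.randn(1, 128)        # current decoder state
score = nn.Linear(256 + 128, 1)     # scores each region against the state
cell = nn.GRUCell(256, 128)

# "where-to-look": softmax the region scores into a spotlight distribution
pairs = torch.cat([regions, hidden.unsqueeze(1).expand(-1, 49, -1)], dim=-1)
weights = F.softmax(score(pairs).squeeze(-1), dim=-1)     # (1, 49)
glimpse = (weights.unsqueeze(-1) * regions).sum(dim=1)    # (1, 256)

# "what-to-write": advance the decoder state with the glimpse
hidden = cell(glimpse, hidden)
```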

Convolutional Spatial Attention Model for Reading Comprehension with Multiple-Choice Questions

no code implementations 21 Nov 2018 Zhipeng Chen, Yiming Cui, Wentao Ma, Shijin Wang, Guoping Hu

Machine Reading Comprehension (MRC) with multiple-choice questions requires the machine to read the given passage and select the correct answer among several candidates.

Machine Reading Comprehension Multiple-choice

HFL-RC System at SemEval-2018 Task 11: Hybrid Multi-Aspects Model for Commonsense Reading Comprehension

no code implementations 15 Mar 2018 Zhipeng Chen, Yiming Cui, Wentao Ma, Shijin Wang, Ting Liu, Guoping Hu

This paper describes the system that achieved state-of-the-art results at SemEval-2018 Task 11: Machine Comprehension using Commonsense Knowledge.

Multiple-choice Reading Comprehension

Discourse Mode Identification in Essays

no code implementations ACL 2017 Wei Song, Dong Wang, Ruiji Fu, Lizhen Liu, Ting Liu, Guoping Hu

Evaluation results show that discourse modes can be identified automatically with an average F1-score of 0.7.

Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution

no code implementations ACL 2017 Ting Liu, Yiming Cui, Qingyu Yin, Wei-Nan Zhang, Shijin Wang, Guoping Hu

Most existing approaches to zero pronoun resolution rely heavily on annotated data, which is often released by shared task organizers.

Reading Comprehension
