Search Results for author: Weiguang Qu

Found 23 papers, 0 papers with code

An Element-aware Multi-representation Model for Law Article Prediction

no code implementations EMNLP 2020 Huilin Zhong, Junsheng Zhou, Weiguang Qu, Yunfei Long, Yanhui Gu

To capture the dependencies between law articles, the model also introduces a self-attention mechanism between multiple representations.

Automated Essay Scoring via Pairwise Contrastive Regression

no code implementations COLING 2022 Jiayi Xie, Kaiwei Cai, Li Kong, Junsheng Zhou, Weiguang Qu

To this end, in this paper we take inspiration from contrastive learning and propose a novel unified Neural Pairwise Contrastive Regression (NPCR) model in which both objectives are optimized simultaneously as a single loss.

Automated Essay Scoring, Contrastive Learning (+1)
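The pairwise formulation can be sketched outside the neural setting: the model predicts score *differences* between a target essay and several reference essays of known score, then averages the implied absolute scores. A minimal sketch, where `predict_diff` is a toy stand-in for the paper's neural pairwise regressor:

```python
def npcr_style_score(target_feat, references, predict_diff):
    """Estimate an essay's score from pairwise comparisons.

    references: list of (feature, known_score) pairs.
    predict_diff(target, ref): approximates score(target) - score(ref);
    here a stand-in for the learned pairwise regressor.
    """
    estimates = [ref_score + predict_diff(target_feat, ref_feat)
                 for ref_feat, ref_score in references]
    return sum(estimates) / len(estimates)

# Toy 1-D "features" where the regressor is a simple scaled difference.
refs = [(2.0, 4.0), (3.0, 6.0)]
score = npcr_style_score(2.5, refs, lambda t, r: 2 * (t - r))  # -> 5.0
```

Averaging over multiple references is what makes the pairwise view behave like a regression at inference time.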

The First International Ancient Chinese Word Segmentation and POS Tagging Bakeoff: Overview of the EvaHan 2022 Evaluation Campaign

no code implementations LT4HALA (LREC) 2022 Bin Li, Yiguo Yuan, Jingya Lu, Minxuan Feng, Chao Xu, Weiguang Qu, Dongbo Wang

This paper presents the results of the First Ancient Chinese Word Segmentation and POS Tagging Bakeoff (EvaHan), which was held at the Second Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) 2022, in the context of the 13th Edition of the Language Resources and Evaluation Conference (LREC 2022).

Chinese Word Segmentation, POS (+2)

Align-smatch: A Novel Evaluation Method for Chinese Abstract Meaning Representation Parsing based on Alignment of Concept and Relation

no code implementations LREC 2022 Liming Xiao, Bin Li, Zhixing Xu, Kairui Huo, Minxuan Feng, Junsheng Zhou, Weiguang Qu

Therefore, to fill the gap in evaluation methods for Chinese AMR parsing, we improve the triple-generation algorithm of the AMR evaluation metric smatch so that it is compatible with both concept alignment and relation alignment.

AMR Parsing, Concept Alignment (+2)
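Smatch scores an AMR parse as the F1 of matched (relation, head, dependent) triples; align-smatch extends the triples it generates with alignment information. A simplified sketch of the underlying triple-overlap F1, omitting smatch's search over variable mappings (which the real metric requires):

```python
def triple_f1(gold, pred):
    """F1 over AMR triples, assuming node names are already aligned
    (real smatch searches over gold/pred variable mappings)."""
    gold, pred = set(gold), set(pred)
    matched = len(gold & pred)
    if not matched:
        return 0.0
    precision = matched / len(pred)
    recall = matched / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {("instance", "w", "want-01"),
        ("arg0", "w", "b"),
        ("instance", "b", "boy")}
pred = {("instance", "w", "want-01"),
        ("arg0", "w", "b")}
f1 = triple_f1(gold, pred)  # precision 1.0, recall 2/3 -> F1 0.8
```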

基于神经网络的连动句识别(Recognition of serial-verb sentences based on Neural Network)

no code implementations CCL 2020 Chao Sun, Weiguang Qu, Tingxin Wei, Yanhui Gu, Bin Li, Junsheng Zhou

Serial-verb sentences are sentences with a serial-verb construction, a special syntactic structure in Chinese that is very common and frequently used in the modern language. Their grammatical structure and semantic relations are complex, which makes recognition difficult. This paper studies the recognition of serial-verb sentences and proposes a neural-network-based recognition method. The method has two steps: first, simple rules are applied to preprocess the corpus; second, framing the task as text classification, sentences are encoded with BERT and features are jointly extracted by a multi-layer CNN and a BiLSTM model for classification, completing the serial-verb sentence recognition task. Experiments on a manually annotated corpus achieve an accuracy of 92.71% and an F1 score of 87.41%.

基于深度学习的实体关系抽取研究综述(Review of Entity Relation Extraction based on deep learning)

no code implementations CCL 2020 Zhentao Xia, Weiguang Qu, Yanhui Gu, Junsheng Zhou, Bin Li

As a core subtask of information extraction, entity relation extraction is important for knowledge graphs, intelligent question answering, semantic search, and other natural language processing applications. Relation extraction aims to automatically identify the semantic relation between entities in unstructured text. This survey focuses on sentence-level relation extraction, introduces the main datasets used for the task, and reviews existing techniques, grouped into supervised relation extraction, distantly supervised relation extraction, and joint entity and relation extraction. We compare the various models used for the task and analyze their contributions and shortcomings. Finally, we review the state of research and methods for Chinese entity relation extraction.

Relation Extraction

中文连动句语义关系识别研究(Research on Semantic Relation Recognition of Chinese Serial-verb Sentences)

no code implementations CCL 2021 Chao Sun, Weiguang Qu, Tingxin Wei, Yanhui Gu, Bin Li, Junsheng Zhou

A serial-verb sentence has the form "NP + VP1 + VP2": it contains two or more verbs (or verb structures) whose agent is the same entity. Serial-verb sentences with the same structure can express several different semantic relations. Building on previous classifications of the semantic relations between VP1 and VP2, we annotate a dataset of serial-verb semantic relations and recognize these relations with neural networks. The method decomposes the task: sentences are encoded with BERT, a BiLSTM-CRF first identifies the serial verbs (VPs) and their subject (NP), and then, on an encoding enriched with serial-verb information, a BiLSTM with attention classifies the relation between the verbs. Experimental results verify the effectiveness of the proposed method.

中文词语离合现象识别研究(Research on Recognition of the Separation and Reunion Phenomena of Words in Chinese)

no code implementations CCL 2021 Lou Zhou, Weiguang Qu, Tingxin Wei, Junsheng Zhou, Bin Li, Yanhui Gu

The separation and reunion of words ("separable words") is a special phenomenon in Chinese in which a word can be split apart or used as a whole. We treat the automatic recognition of separated two-character verbs as character-level sequence labeling, avoiding error propagation from Chinese word segmentation and POS tagging and the manual cost of writing matching rules and feature templates. During training we fine-tune the Chinese BERT pretrained model to obtain task-oriented character representations, and introduce a masking mechanism that hides the separated morphemes from the model, reducing the influence of the word itself on the result and strengthening the learning of the inserted material; different masks are used for the front and back morphemes to emphasize their order, enabling the model to recognize complex and rare separated usages. To obtain sentence representations with contextual information, the original sentence and its masked version are fed into two BiLSTM layers with different parameters, and a CRF layer finally captures dependencies in the label sequence. The proposed BERT MASK + 2BiLSTMs + CRF model improves the F1 score by 2.85% over the best existing separable-word recognition model.
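The masking idea can be illustrated at the string level: the two morphemes of a separable verb are replaced by distinct mask tokens, so the model must rely on the inserted material and on morpheme order rather than on the word itself. A minimal sketch (the mask token names are illustrative, not the paper's):

```python
def mask_separable(chars, front, back, m1="[M1]", m2="[M2]"):
    """Replace the front and back morphemes of a separable word with
    two distinct masks, keeping the inserted material visible."""
    return [m1 if c == front else m2 if c == back else c for c in chars]

# "吃了一顿饭" = separable verb 吃饭 ("eat a meal") split by 了一顿.
masked = mask_separable(list("吃了一顿饭"), "吃", "饭")
# -> ["[M1]", "了", "一", "顿", "[M2]"]
```

Using two different masks preserves the front/back ordering signal that the paper argues is needed for rare separated usages.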

Building a Chinese AMR Bank with Concept and Relation Alignments

no code implementations LILT 2019 Bin Li, Yuan Wen, Li Song, Weiguang Qu, Nianwen Xue

One significant change we have made to the AMR annotation methodology is the inclusion of alignments between the word tokens in a sentence and the concepts/relations in the CAMR annotation, which makes it easier for automatic parsers to model the correspondence between a sentence and its meaning representation.

Relation, Sentence
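The token-to-concept alignment can be pictured as explicit index mappings stored alongside the graph. A hypothetical sketch of such a record (field names and example are illustrative, not the CAMR bank's actual file format):

```python
from dataclasses import dataclass, field

@dataclass
class AlignedCAMR:
    tokens: list                 # word tokens of the sentence
    concepts: dict               # variable -> concept label
    relations: list              # (relation, head_var, dep_var) triples
    concept_align: dict = field(default_factory=dict)  # variable -> token index

# "男孩 想 去" ("the boy wants to go"), with each concept aligned to a token.
g = AlignedCAMR(
    tokens=["男孩", "想", "去"],
    concepts={"x1": "男孩", "x2": "想-01", "x3": "去-01"},
    relations=[("arg0", "x2", "x1"), ("arg1", "x2", "x3")],
    concept_align={"x1": 0, "x2": 1, "x3": 2},
)
aligned_word = g.tokens[g.concept_align["x2"]]  # -> "想"
```

Keeping the alignment explicit is what lets a parser be trained (and evaluated) on the token-concept correspondence directly.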

多轮对话的篇章级抽象语义表示标注体系研究(Research on Discourse-level Abstract Meaning Representation Annotation framework in Multi-round Dialogue)

no code implementations CCL 2020 Tong Huang, Bin Li, Peiyi Yan, Tingting Ji, Weiguang Qu

Dialogue analysis underlies natural language dialogue applications such as intelligent customer service and chatbots, but dialogue corpora differ considerably from conventional written corpora: they contain many vocatives, emotional phrases, ellipses, inverted word orders, redundancies, and other complex phenomena that strongly affect syntactic and semantic parsers, so the accuracy of automatic dialogue analysis has long been lower than on written text. A main reason is the lack of a rigorous formal description of multi-round dialogue, which hinders subsequent analysis and computation. Therefore, after surveying dialogue annotation schemes and corpora at home and abroad, this paper proposes a discourse-level annotation framework for multi-round dialogue based on Abstract Meaning Representation. It discusses discourse-level semantic structure annotation, gives an alignment scheme between words and concept relations, adds semantic relations and concepts for vocatives and emotional phrases, adjusts the argument structure of words expressing subjective emotion, specifies the treatment of some special phenomena in dialogue, and designs a manual annotation platform, laying the foundation for the annotation and computational study of large-scale multi-round dialogue corpora.

基于抽象语义表示的汉语疑问句的标注与分析(Chinese Interrogative Sentences Annotation and Analysis Based on the Abstract Meaning Representation)

no code implementations CCL 2020 Peiyi Yan, Bin Li, Tong Huang, Kairui Huo, Jin Chen, Weiguang Qu

Syntactic and semantic analysis of interrogative sentences is widely used in search engines, information extraction, and question answering. Computational linguistics usually handles interrogatives by combining question classification with syntactic parsing, but accuracy and efficiency remain unsatisfactory. Linguistic research on interrogatives is rich, covering structural types, interrogative focus, and non-interrogative uses of interrogative pronouns, but it lacks a systematic formal representation. This paper addresses that problem by using Chinese Abstract Meaning Representation (CAMR), a graph-based representation of whole-sentence semantics, to annotate the semantic structure of interrogative sentences, representing the interrogative focus and the overall sentence semantics in a unified way. We then selected the 2,071 interrogative sentences in a 20,000-sentence corpus drawn from the web-media portion of the Chinese Treebank CTB 8.0, primary-school Chinese textbooks, and the Chinese translation of The Little Prince, and analyzed their main characteristics. The statistics show that all kinds of interrogative pronouns can be represented by combining the interrogative concept amr-unknown with semantic relations, fully capturing a question's key information, interrogative focus, and semantic structure. Finally, based on the semantic relations associated with the interrogative pronouns, we computed the probability distribution of interrogative foci; cause, modifier, and patient account for the largest shares, at 26.53%, 16.73%, and 16.44% respectively. AMR-based annotation and analysis of interrogatives provides basic theory and resources for the study of Chinese interrogative sentences.

Construct a Sense-Frame Aligned Predicate Lexicon for Chinese AMR Corpus

no code implementations LREC 2020 Li Song, Yuling Dai, Yihuan Liu, Bin Li, Weiguang Qu

Existing lexicons blur the senses and frames of predicates, and need to be refined to support tasks like word sense disambiguation and event extraction.

Event Extraction, Sentence (+1)
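Refining a lexicon so that each predicate sense carries its own frame can be pictured as a nested mapping. A hypothetical sketch (the entry below is invented for illustration and is not taken from the paper's lexicon):

```python
# Hypothetical sense-frame aligned lexicon entry: each predicate maps
# each numbered sense to its own gloss and argument frame, instead of
# blurring senses and frames together in a single entry.
lexicon = {
    "打": {
        "打-01": {"gloss": "hit",
                  "frame": ["arg0: agent", "arg1: patient"]},
        "打-02": {"gloss": "play (a game)",
                  "frame": ["arg0: player", "arg1: game"]},
    }
}

# Disambiguating to sense 打-02 selects the matching frame.
frame = lexicon["打"]["打-02"]["frame"]
```

Keying frames by sense is what lets word sense disambiguation and event extraction consume the same resource.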

Neural Network based Deep Transfer Learning for Cross-domain Dependency Parsing

no code implementations8 Aug 2019 Zhentao Xia, Likai Wang, Weiguang Qu, Junsheng Zhou, Yanhui Gu

In this paper, we describe the details of the neural dependency parser submitted by our team to the NLPCC 2019 shared task on semi-supervised domain adaptation for cross-domain dependency parsing.

Dependency Parsing, Transfer Learning
