Search Results for author: Baobao Chang

Found 108 papers, 43 papers with code

S^4-Tuning: A Simple Cross-lingual Sub-network Tuning Method

no code implementations ACL 2022 Runxin Xu, Fuli Luo, Baobao Chang, Songfang Huang, Fei Huang

The emergence of multilingual pre-trained language models makes it possible to adapt to target languages with only a few labeled examples. However, vanilla fine-tuning tends to achieve degenerated and unstable results, owing to Language Interference among different languages and Parameter Overload in few-sample transfer learning scenarios. To address these two problems elegantly, we propose S^4-Tuning, a Simple Cross-lingual Sub-network Tuning method.

Transfer Learning
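
The listing gives no code for S^4-Tuning, so the following is only a minimal sketch of the general sub-network tuning idea: pick a small, language-specific subset of parameters and update only those. The gradient-magnitude selection criterion, the function names, and `keep_ratio` are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of sub-network tuning: fine-tune only a small subset
# of parameters for a target language and freeze the rest.
import torch

def select_subnetwork(model, loss_fn, batches, keep_ratio=0.05):
    """Score each parameter by |grad| accumulated on a few target-language batches."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for batch in batches:
        model.zero_grad()
        loss_fn(model, batch).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.abs()
    flat = torch.cat([s.flatten() for s in scores.values()])
    threshold = flat.kthvalue(int((1 - keep_ratio) * flat.numel())).values
    return {n: (s > threshold) for n, s in scores.items()}  # boolean masks

def apply_masked_grads(model, masks):
    """Zero out gradients outside the selected sub-network before optimizer.step()."""
    for n, p in model.named_parameters():
        if p.grad is not None:
            p.grad.mul_(masks[n].to(p.grad.dtype))
```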

Chinese Word Segmentation of Medical Text Based on a Dual-Encoder

no code implementations CCL 2021 Yuan Zong, Baobao Chang

“Chinese word segmentation is a fundamental task in natural language processing. However, previous work on medical text segmentation has simply applied general-purpose segmentation methods directly, whereas the abundance of specialized terminology in medical text requires a segmentation system to provide different granularities for medical terms and for the non-terminological text around them. This paper proposes a dual-encoder model for Chinese word segmentation of medical text, using an auxiliary encoder to provide coarse-grained representations for medical terminology. The model separates medical terms that require coarse-grained segmentation from text that requires general-purpose segmentation granularity, improving segmentation of medical terminology while minimizing the interference of its coarse granularity with the segmentation of general text in medical documents.”

Chinese Word Segmentation
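
As a rough illustration of the dual-encoder idea described above (no official code is listed), here is a hedged PyTorch sketch: an auxiliary encoder supplies coarse-grained features that are used only inside lexicon-matched medical-term spans. The gating fusion, the BMES tag set, and all names are assumptions for illustration.

```python
# Minimal sketch, assuming both encoders map token ids to (B, T, H) states.
import torch
import torch.nn as nn

class DualEncoderSegmenter(nn.Module):
    """Main encoder handles general text; the auxiliary encoder contributes
    coarse-grained representations inside medical-term spans."""
    def __init__(self, main_enc, aux_enc, hidden, n_tags=4):  # B/M/E/S tags
        super().__init__()
        self.main_enc, self.aux_enc = main_enc, aux_enc
        self.gate = nn.Linear(2 * hidden, hidden)
        self.classifier = nn.Linear(hidden, n_tags)

    def forward(self, ids, term_mask):
        h_main = self.main_enc(ids)              # (B, T, H)
        h_aux = self.aux_enc(ids)                # (B, T, H)
        g = torch.sigmoid(self.gate(torch.cat([h_main, h_aux], -1)))
        # Use auxiliary features only inside lexicon-matched term spans.
        h = torch.where(term_mask.unsqueeze(-1),
                        g * h_aux + (1 - g) * h_main, h_main)
        return self.classifier(h)                # per-character BMES logits
```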

Generating, Reasoning & Ranking: A Multitask Learning Framework for Math Word Problem Generation

no code implementations CCL 2022 Tianyang Cao, Xiaodan Xu, Baobao Chang

“A math word problem is a narrative text that reflects the underlying logic of a mathematical equation. Successful math problem generation has broad application prospects in both language generation and education. Most previous work requires manually annotated templates or keywords as input and does not consider the characteristics of mathematical expressions themselves. This paper proposes a multitask, jointly trained problem-text generation model. We design three auxiliary tasks: relation extraction between numbers, numerical ranking, and fragment-replacement prediction. They are trained jointly with the generation objective to supervise the decoder's learning and to strengthen the model's awareness of arithmetic logic and problem conditions. Experiments show that the proposed method effectively improves the quality of the generated math word problems.”

Math

A Knowledge-Integrated Joint Model for Multi-Target Frame Semantic Parsing

no code implementations CCL 2022 Xudong Chen, Ce Zheng, Baobao Chang

“Frame semantic parsing is a fundamental task in natural language processing. Most previous work designs models for a single target word and cannot extract the frame semantic structures of multiple target words at once. This paper proposes a multi-target frame semantic parsing model that jointly predicts over multiple target words. The model performs interactive modeling across the subtasks of frame semantic parsing, enabling bidirectional interaction between subtasks. In addition, we use a relational graph network to encode frame-relation information and integrate it into the model as frame-semantic knowledge. Experiments show that, without any additional corpora, our model improves over previous models to varying degrees. Ablation studies verify the effectiveness of the model design. We also analyze the current limitations of the model and directions for future improvement.”

Semantic Parsing

Mitigating Language-Level Performance Disparity in mPLMs via Teacher Language Selection and Cross-lingual Self-Distillation

1 code implementation 12 Apr 2024 Haozhe Zhao, Zefan Cai, Shuzheng Si, Liang Chen, Yufeng He, Kaikai An, Baobao Chang

Therefore, we introduce ALSACE to leverage the learned knowledge from the well-performing languages to guide under-performing ones within the same mPLM, eliminating the need for additional labeled multilingual data.
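
A minimal sketch of the cross-lingual self-distillation idea ALSACE describes: align an under-performing language's output distribution with a well-performing teacher language's distribution on semantically equivalent inputs. The HuggingFace-style `.logits` access, the temperature, and the KL direction are assumptions, not the paper's exact recipe.

```python
# Hedged sketch: the same mPLM teaches itself across languages.
import torch
import torch.nn.functional as F

def self_distill_loss(model, teacher_inputs, student_inputs, T=2.0):
    with torch.no_grad():
        teacher_logits = model(**teacher_inputs).logits  # well-performing language
    student_logits = model(**student_inputs).logits      # same content, weaker language
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T
```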

An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models

1 code implementation 11 Mar 2024 Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, Baobao Chang

To this end, we introduce FastV, a versatile plug-and-play method designed to optimize computational efficiency by learning adaptive attention patterns in early layers and pruning visual tokens in subsequent ones.

Computational Efficiency Video Understanding
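
A simplified sketch of the pruning step FastV describes: after an early layer, rank visual tokens by the attention they receive and keep only the top fraction. The attention aggregation used here (mean over heads and queries) and the keep ratio are assumptions for illustration; the method's exact criterion may differ.

```python
# Illustrative visual-token pruning in the spirit of FastV.
import torch

def prune_visual_tokens(hidden, attn, vis_start, vis_end, keep_ratio=0.5):
    """hidden: (B, T, H); attn: (B, heads, T, T) from the current layer."""
    # Mean attention each visual token receives, averaged over heads and queries.
    received = attn.mean(dim=1)[:, :, vis_start:vis_end].mean(dim=1)  # (B, V)
    k = int(keep_ratio * (vis_end - vis_start))
    keep = received.topk(k, dim=-1).indices + vis_start               # (B, k)
    keep, _ = keep.sort(dim=-1)  # preserve positional order
    batch_idx = torch.arange(hidden.size(0)).unsqueeze(-1)
    pruned_visual = hidden[batch_idx, keep]                           # (B, k, H)
    return torch.cat([hidden[:, :vis_start], pruned_visual,
                      hidden[:, vis_end:]], dim=1)
```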

Improving Event Definition Following For Zero-Shot Event Detection

no code implementations 5 Mar 2024 Zefan Cai, Po-Nien Kung, Ashima Suvarna, Mingyu Derek Ma, Hritik Bansal, Baobao Chang, P. Jeffrey Brantingham, Wei Wang, Nanyun Peng

We hypothesize that a diverse set of event types and definitions is the key for models to learn to follow event definitions, whereas existing event extraction datasets focus on annotating many high-quality examples for only a few event types.

Event Detection Event Extraction

PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain

1 code implementation 21 Feb 2024 Liang Chen, Yichi Zhang, Shuhuai Ren, Haozhe Zhao, Zefan Cai, Yuchi Wang, Peiyi Wang, Xiangdi Meng, Tianyu Liu, Baobao Chang

To address this, we introduce Embodied-Instruction-Evolution (EIE), an automatic framework for synthesizing instruction tuning examples in multimodal embodied environments.

Autonomous Driving Decision Making

ML-Bench: Evaluating Large Language Models for Code Generation in Repository-Level Machine Learning Tasks

1 code implementation 16 Nov 2023 Yuliang Liu, Xiangru Tang, Zefan Cai, Junjie Lu, Yichi Zhang, Yanjun Shao, Zexuan Deng, Helan Hu, Kaikai An, Ruijun Huang, Shuzheng Si, Sheng Chen, Haozhe Zhao, Liang Chen, Yan Wang, Tianyu Liu, Zhiwei Jiang, Baobao Chang, Yujia Qin, Wangchunshu Zhou, Yilun Zhao, Arman Cohan, Mark Gerstein

While Large Language Models (LLMs) have demonstrated proficiency in code generation benchmarks, translating these results into practical development scenarios - where leveraging existing repository-level libraries is the norm - remains challenging.

Code Generation

Coarse-to-Fine Dual Encoders are Better Frame Identification Learners

1 code implementation 20 Oct 2023 Kaikai An, Ce Zheng, Bofei Gao, Haozhe Zhao, Baobao Chang

Recent studies measure the similarity or matching score between targets and candidate frames by modeling frame definitions.

Contrastive Learning Representation Learning +1

Guiding AMR Parsing with Reverse Graph Linearization

1 code implementation 13 Oct 2023 Bofei Gao, Liang Chen, Peiyi Wang, Zhifang Sui, Baobao Chang

Abstract Meaning Representation (AMR) parsing aims to extract an abstract semantic graph from a given sentence.

AMR Parsing Sentence

MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning

2 code implementations 14 Sep 2023 Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang

In this paper, we address the limitation above by 1) introducing the vision-language Model with Multi-Modal In-Context Learning (MMICL), a new approach that allows the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; and 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts.

Hallucination In-Context Learning +2

Mining Clues from Incomplete Utterance: A Query-enhanced Network for Incomplete Utterance Rewriting

no code implementations NAACL 2022 Shuzheng Si, Shuang Zeng, Baobao Chang

Then, we adopt a fast and effective edit operation scoring network to model the relation between two tokens.

Human-in-the-Loop through Chain-of-Thought

no code implementations 10 Jun 2023 Zefan Cai, Baobao Chang, Wenjuan Han

While the emergence of powerful language models along with Chain-of-Thought prompting has made automation increasingly ubiquitous, it sometimes demonstrates weaknesses in long-term or multi-step logical reasoning.

Logical Reasoning

DialogVCS: Robust Natural Language Understanding in Dialogue System Upgrade

no code implementations 24 May 2023 Zefan Cai, Xin Zheng, Tianyu Liu, Xu Wang, Haoran Meng, Jiaqi Han, Gang Yuan, Binghuai Lin, Baobao Chang, Yunbo Cao

In the constant updates of product dialogue systems, we need to retrain the natural language understanding (NLU) model as new data from real users is merged into the existing data accumulated during previous updates.

Intent Detection Multi-Label Classification +1

Can We Edit Factual Knowledge by In-Context Learning?

2 code implementations 22 May 2023 Ce Zheng, Lei LI, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, Baobao Chang

Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge.

In-Context Learning knowledge editing
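
A hedged sketch of what editing-by-ICL looks like in practice: the frozen LM is conditioned on demonstrations plus the new fact, with no parameter updates. The template and the example facts below are illustrative, not the paper's exact prompt format.

```python
# Sketch: build an edit prompt from demonstrations and a new fact.
def build_edit_prompt(new_fact, demonstrations, query):
    lines = []
    for demo in demonstrations:
        lines.append(f"New fact: {demo['fact']}")
        lines.append(f"Q: {demo['question']}")
        lines.append(f"A: {demo['answer']}")
    lines.append(f"New fact: {new_fact}")
    lines.append(f"Q: {query}")
    lines.append("A:")
    return "\n".join(lines)

# Hypothetical usage with made-up entities:
prompt = build_edit_prompt(
    "The president of Country X is Alice.",
    [{"fact": "The capital of Country Y is Beta.",
      "question": "What is the capital of Country Y?",
      "answer": "Beta"}],
    "Who is the president of Country X?",
)
```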

DiffCap: Exploring Continuous Diffusion on Image Captioning

no code implementations 20 May 2023 Yufeng He, Zefan Cai, Xu Gan, Baobao Chang

Our method transforms discrete tokens in a natural way and applies continuous diffusion on them to successfully fuse extracted image features for diffusion caption generation.

Caption Generation Image Captioning +2

A Survey on In-context Learning

1 code implementation 31 Dec 2022 Qingxiu Dong, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu sun, Jingjing Xu, Lei LI, Zhifang Sui

With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based only on contexts augmented with a few examples.

In-Context Learning
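
For readers new to the paradigm the survey covers, a minimal illustration of in-context learning: a frozen LLM conditions on a handful of input-output demonstrations plus the test input, and no parameters change. The prompt format below is a generic assumption, not a template from the survey.

```python
# Sketch: assemble a k-shot ICL prompt from demonstrations.
def icl_prompt(demos, test_input, instruction=""):
    parts = [instruction] if instruction else []
    for x, y in demos:
        parts.append(f"Input: {x}\nOutput: {y}")
    parts.append(f"Input: {test_input}\nOutput:")
    return "\n\n".join(parts)

print(icl_prompt([("great movie!", "positive"), ("waste of time", "negative")],
                 "the plot drags but the acting is superb"))
```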

Query Your Model with Definitions in FrameNet: An Effective Method for Frame Semantic Role Labeling

1 code implementation 5 Dec 2022 Ce Zheng, Yiming Wang, Baobao Chang

Such methods usually model role classification as naive multi-class classification and treat arguments individually, which neglects label semantics and interactions between arguments and thus hinders the performance and generalization of models.

Classification Multi-class Classification +1

A Two-Stage Method for Chinese AMR Parsing

1 code implementation 29 Sep 2022 Liang Chen, Bofei Gao, Baobao Chang

In this paper, we provide a detailed description of our system at CAMRP-2022 evaluation.

AMR Parsing

Robust Fine-tuning via Perturbation and Interpolation from In-batch Instances

1 code implementation 2 May 2022 Shoujie Tong, Qingxiu Dong, Damai Dai, YiFan Song, Tianyu Liu, Baobao Chang, Zhifang Sui

For each instance in a batch, we involve other instances in the same batch to interact with it.
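
No further method details are given in this snippet, so here is only a generic sketch of in-batch interpolation in the mixup style; the paper's actual perturbation and interpolation scheme may differ, and the function name is invented for illustration.

```python
# Sketch: interpolate each instance's representation with a random in-batch peer.
import torch

def in_batch_interpolate(hidden, labels, alpha=0.2):
    """hidden: (B, H) pooled instance representations; labels: (B,) class ids."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(hidden.size(0))
    mixed = lam * hidden + (1 - lam) * hidden[perm]  # interact with an in-batch peer
    return mixed, labels, labels[perm], lam
```

The returned `lam` would then weight a two-term loss, e.g. `lam * CE(logits, y_a) + (1 - lam) * CE(logits, y_b)`.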

A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction

1 code implementation NAACL 2022 Runxin Xu, Peiyi Wang, Tianyu Liu, Shuang Zeng, Baobao Chang, Zhifang Sui

In this paper, we focus on extracting event arguments from an entire document, which mainly faces two critical problems: a) the long-distance dependency between trigger and arguments over sentences; b) the distracting context towards an event in the document.

Document-level Event Extraction Event Argument Extraction +2

ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs

2 code implementations Findings (NAACL) 2022 Liang Chen, Peiyi Wang, Runxin Xu, Tianyu Liu, Zhifang Sui, Baobao Chang

As Abstract Meaning Representation (AMR) implicitly involves compound semantic annotations, we hypothesize auxiliary tasks which are semantically or formally related can better enhance AMR parsing.

Ranked #7 on AMR Parsing on LDC2020T02 (using extra training data)

AMR Parsing Dependency Parsing +1

StableMoE: Stable Routing Strategy for Mixture of Experts

1 code implementation ACL 2022 Damai Dai, Li Dong, Shuming Ma, Bo Zheng, Zhifang Sui, Baobao Chang, Furu Wei

We point out that existing learning-to-route MoE methods suffer from the routing fluctuation issue, i.e., the target expert of the same input may change as training proceeds, but only one expert will be activated for the input during inference.

Language Modelling Machine Translation
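
A simplified sketch of the two-stage remedy StableMoE describes: once a routing strategy has been learned (and, per the abstract's framing, made stable), the router is frozen so each token's target expert stops fluctuating. The top-1 linear router below is an illustrative simplification, not the paper's exact architecture.

```python
# Sketch: stage 2 of StableMoE-style training uses a frozen, deterministic router.
import torch
import torch.nn as nn

class FrozenTop1Router(nn.Module):
    def __init__(self, d_model, n_experts):
        super().__init__()
        self.proj = nn.Linear(d_model, n_experts, bias=False)

    def freeze(self):
        for p in self.parameters():
            p.requires_grad = False  # stage 2: routing no longer changes

    def forward(self, x):                    # x: (tokens, d_model)
        return self.proj(x).argmax(dim=-1)   # deterministic expert assignment

def moe_forward(x, router, experts):
    assign = router(x)
    out = torch.zeros_like(x)
    for e, expert in enumerate(experts):
        idx = (assign == e).nonzero(as_tuple=True)[0]
        if idx.numel():
            out[idx] = expert(x[idx])
    return out
```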

Mixture of Experts for Biomedical Question Answering

no code implementations 15 Apr 2022 Damai Dai, Wenbin Jiang, Jiyuan Zhang, Weihua Peng, Yajuan Lyu, Zhifang Sui, Baobao Chang, Yong Zhu

In this paper, in order to alleviate the parameter competition problem, we propose a Mixture-of-Expert (MoE) based question answering method called MoEBQA that decouples the computation for different types of questions by sparse routing.

Question Answering

Focus on the Target's Vocabulary: Masked Label Smoothing for Machine Translation

2 code implementations 6 Mar 2022 Liang Chen, Runxin Xu, Baobao Chang

Label smoothing and vocabulary sharing are two widely used techniques in neural machine translation models.

Machine Translation Translation
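
A hedged sketch of the masked label smoothing idea: smoothing mass is allocated only over tokens that can actually appear on the target side, rather than over the whole shared vocabulary. The mask construction and function signature are assumptions for illustration.

```python
# Sketch: label smoothing restricted to target-side vocabulary.
import torch
import torch.nn.functional as F

def masked_label_smoothing_loss(logits, target, target_mask, eps=0.1):
    """logits: (N, V); target: (N,) gold token ids; target_mask: (V,) bool,
    True for tokens that can appear in the target language."""
    log_probs = F.log_softmax(logits, dim=-1)
    n_allowed = int(target_mask.sum())
    # Spread eps only over allowed tokens; the gold slot is overwritten below.
    smooth = target_mask.to(logits.dtype) * (eps / (n_allowed - 1))
    dist = smooth.unsqueeze(0).expand_as(log_probs).clone()
    dist.scatter_(1, target.unsqueeze(1), 1.0 - eps)
    return -(dist * log_probs).sum(dim=-1).mean()
```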

From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression

2 code implementations 14 Dec 2021 Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang

Unified in contrastive learning, CAP enables the pruned model to learn from the pre-trained model for task-agnostic knowledge and from the fine-tuned model for task-specific knowledge.

Contrastive Learning Language Modelling +2

Hierarchical Curriculum Learning for AMR Parsing

1 code implementation ACL 2022 Peiyi Wang, Liang Chen, Tianyu Liu, Damai Dai, Yunbo Cao, Baobao Chang, Zhifang Sui

Abstract Meaning Representation (AMR) parsing aims to translate sentences to semantic representation with a hierarchical structure, and is recently empowered by pretrained sequence-to-sequence models.

AMR Parsing Representation Learning

An Enhanced Span-based Decomposition Method for Few-Shot Sequence Labeling

1 code implementation NAACL 2022 Peiyi Wang, Runxin Xu, Tianyu Liu, Qingyu Zhou, Yunbo Cao, Baobao Chang, Zhifang Sui

Few-Shot Sequence Labeling (FSSL) is a canonical paradigm for tagging models, e.g., named entity recognition and slot filling, to generalize on an emerging, resource-scarce domain.

Few-shot NER Meta-Learning +4

Behind the Scenes: An Exploration of Trigger Biases Problem in Few-Shot Event Classification

1 code implementation 29 Aug 2021 Peiyi Wang, Runxin Xu, Tianyu Liu, Damai Dai, Baobao Chang, Zhifang Sui

However, we find they suffer from trigger biases that signify the statistical homogeneity between some trigger words and target event types, which we summarize as trigger overlapping and trigger separability.

Explicit Interaction Network for Aspect Sentiment Triplet Extraction

no code implementations 21 Jun 2021 Peiyi Wang, Tianyu Liu, Damai Dai, Runxin Xu, Baobao Chang, Zhifang Sui

The table encoder extracts sentiment at the token-pair level, so that compositional features between targets and opinions can be easily captured.

Aspect Sentiment Triplet Extraction Sentence +1

Decompose, Fuse and Generate: A Formation-Informed Method for Chinese Definition Generation

no code implementations NAACL 2021 Hua Zheng, Damai Dai, Lei LI, Tianyu Liu, Zhifang Sui, Baobao Chang, Yang Liu

In this paper, we tackle the task of Definition Generation (DG) in Chinese, which aims at automatically generating a definition for a word.

Document-level Event Extraction via Heterogeneous Graph-based Interaction Model with a Tracker

2 code implementations ACL 2021 Runxin Xu, Tianyu Liu, Lei LI, Baobao Chang

Existing methods are not effective due to two challenges of this task: a) the target event arguments are scattered across sentences; b) the correlation among events in a document is non-trivial to model.

Document-level Event Extraction Event Extraction

Problems and Countermeasures in Natural Language Processing Evaluation

no code implementations 20 Apr 2021 Qingxiu Dong, Zhifang Sui, Weidong Zhan, Baobao Chang

Starting from the concept, composition, development and meaning of natural language evaluation, this article classifies and summarizes the tasks and characteristics of mainstream natural language evaluation, and then summarizes the problems and causes of natural language processing evaluation.

Position

Knowledge Neurons in Pretrained Transformers

3 code implementations ACL 2022 Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, Furu Wei

In this paper, we present preliminary studies on how factual knowledge is stored in pretrained Transformers by introducing the concept of knowledge neurons.
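
A minimal sketch of the attribution idea behind knowledge neurons: integrated gradients over an FFN layer's intermediate activations score how much each neuron contributes to predicting the correct fact. The loop below is a generic integrated-gradients estimator; the paper's exact formulation and layer choice may differ.

```python
# Sketch: integrated-gradients attribution for FFN neuron activations.
import torch

def neuron_attribution(prob_fn, activation, steps=20):
    """prob_fn: maps an activation tensor (d_ff,) to the probability of the
    correct answer token; activation: the neuron activations for one prompt."""
    baseline = torch.zeros_like(activation)
    total_grad = torch.zeros_like(activation)
    for alpha in torch.linspace(0, 1, steps):
        scaled = (baseline + alpha * (activation - baseline)).requires_grad_(True)
        prob_fn(scaled).backward()
        total_grad += scaled.grad
    return (activation - baseline) * total_grad / steps  # attribution per neuron
```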

Incorporating Connections Beyond Knowledge Embeddings: A Plug-and-Play Module to Enhance Commonsense Reasoning in Machine Reading Comprehension

no code implementations 26 Mar 2021 Damai Dai, Hua Zheng, Zhifang Sui, Baobao Chang

Conventional Machine Reading Comprehension (MRC) has been well-addressed by pattern matching, but the ability of commonsense reasoning remains a gap between humans and machines.

Knowledge Graph Embeddings Knowledge Graphs +1

Towards Faithfulness in Open Domain Table-to-text Generation from an Entity-centric View

1 code implementation 17 Feb 2021 Tianyu Liu, Xin Zheng, Baobao Chang, Zhifang Sui

In open domain table-to-text generation, we notice that the unfaithful generation usually contains hallucinated content which can not be aligned to any input table record.

Few-Shot Learning Table-to-Text Generation

Coarse-to-Fine Entity Representations for Document-level Relation Extraction

1 code implementation 4 Dec 2020 Damai Dai, Jing Ren, Shuang Zeng, Baobao Chang, Zhifang Sui

In classification, we combine the entity representations from both levels into more comprehensive representations for relation extraction.

Document-level Relation Extraction Relation

An Anchor-Based Automatic Evaluation Metric for Document Summarization

no code implementations COLING 2020 Kexiang Wang, Tianyu Liu, Baobao Chang, Zhifang Sui

The widespread adoption of reference-based automatic evaluation metrics such as ROUGE has promoted the development of document summarization.

Document Summarization

Discriminatively-Tuned Generative Classifiers for Robust Natural Language Inference

1 code implementation EMNLP 2020 Xiaoan Ding, Tianyu Liu, Baobao Chang, Zhifang Sui, Kevin Gimpel

We explore training objectives for discriminative fine-tuning of our generative classifiers, showing improvements over log loss fine-tuning from prior work.

Natural Language Inference

An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference

1 code implementation CONLL 2020 Tianyu Liu, Xin Zheng, Xiaoan Ding, Baobao Chang, Zhifang Sui

Prior work on natural language inference (NLI) debiasing mainly targets one or a few known biases, while not necessarily making the models more robust.

Data Augmentation Natural Language Inference

HypoNLI: Exploring the Artificial Patterns of Hypothesis-only Bias in Natural Language Inference

no code implementations LREC 2020 Tianyu Liu, Xin Zheng, Baobao Chang, Zhifang Sui

Many recent studies have shown that for models trained on datasets for natural language inference (NLI), it is possible to make correct predictions by merely looking at the hypothesis while completely ignoring the premise.

Natural Language Inference

Pun-GAN: Generative Adversarial Network for Pun Generation

1 code implementation IJCNLP 2019 Fuli Luo, Shunyao Li, Pengcheng Yang, Lei LI, Baobao Chang, Zhifang Sui, Xu sun

It consists of a generator to produce pun sentences, and a discriminator to distinguish between the generated pun sentences and the real sentences with specific word senses.

Generative Adversarial Network Sentence

Learning to Control the Fine-grained Sentiment for Story Ending Generation

no code implementations ACL 2019 Fuli Luo, Damai Dai, Pengcheng Yang, Tianyu Liu, Baobao Chang, Zhifang Sui, Xu sun

Therefore, we propose a generic and novel framework which consists of a sentiment analyzer and a sentimental generator, respectively addressing the two challenges.

Text Generation

Towards Comprehensive Description Generation from Factual Attribute-value Tables

no code implementations ACL 2019 Tianyu Liu, Fuli Luo, Pengcheng Yang, Wei Wu, Baobao Chang, Zhifang Sui

To relieve these problems, we first propose a force attention (FA) method to encourage the generator to pay more attention to the uncovered attributes, so as to avoid missing potential key attributes.

Attribute

A Soft Label Strategy for Target-Level Sentiment Classification

no code implementations WS 2019 Da Yin, Xiao Liu, Xiuyu Wu, Baobao Chang

In this paper, we propose a soft label approach to the target-level sentiment classification task, in which a history-based soft labeling model is proposed to measure the possibility of a context word being an opinion word.

Classification General Classification +2

A Dual Reinforcement Learning Framework for Unsupervised Text Style Transfer

2 code implementations 24 May 2019 Fuli Luo, Peng Li, Jie zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, Xu sun

Therefore, in this paper, we propose a dual reinforcement learning framework to directly transfer the style of the text via a one-step mapping model, without any separation of content and style.

reinforcement-learning Reinforcement Learning (RL) +2

Incorporating Glosses into Neural Word Sense Disambiguation

1 code implementation ACL 2018 Fuli Luo, Tianyu Liu, Qiaolin Xia, Baobao Chang, Zhifang Sui

GAS models the semantic relationship between the context and the gloss in an improved memory network framework, which breaks the barriers of the previous supervised methods and knowledge-based methods.

Word Sense Disambiguation

Table-to-text Generation by Structure-aware Seq2seq Learning

3 code implementations 27 Nov 2017 Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, Zhifang Sui

In the decoding phase, a dual attention mechanism, which contains word-level attention and field-level attention, is proposed to model the semantic relevance between the generated description and the table.

Table-to-Text Generation
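
A rough sketch of the dual attention described above: word-level attention over cell values and field-level attention over field names are combined so the decoder attends to a cell according to both its content and its field. The multiplicative combination and the shapes here are illustrative assumptions.

```python
# Sketch: combine word-level and field-level attention over table cells.
import torch

def dual_attention(dec_state, word_keys, field_keys, values):
    """dec_state: (B, H); word_keys/field_keys/values: (B, T, H)."""
    word_score = torch.bmm(word_keys, dec_state.unsqueeze(-1)).squeeze(-1)    # (B, T)
    field_score = torch.bmm(field_keys, dec_state.unsqueeze(-1)).squeeze(-1)  # (B, T)
    # Combine the two distributions multiplicatively, then renormalize.
    attn = torch.softmax(word_score, -1) * torch.softmax(field_score, -1)
    attn = attn / attn.sum(-1, keepdim=True)
    return torch.bmm(attn.unsqueeze(1), values).squeeze(1)                    # (B, H)
```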

Order-Planning Neural Text Generation From Structured Data

1 code implementation 1 Sep 2017 Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, Zhifang Sui

Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems.

Question Answering Table-to-Text Generation

A Soft-label Method for Noise-tolerant Distantly Supervised Relation Extraction

no code implementations EMNLP 2017 Tianyu Liu, Kexiang Wang, Baobao Chang, Zhifang Sui

Distantly supervised relation extraction inevitably suffers from wrong labeling problems because it heuristically labels relational facts with knowledge bases.

Relation Relation Extraction +1

Syntax Aware LSTM model for Semantic Role Labeling

no code implementations WS 2017 Feng Qian, Lei Sha, Baobao Chang, Lu-chen Liu, Ming Zhang

In the Semantic Role Labeling (SRL) task, the tree-structured dependency relation is rich in syntactic information, but it is not well handled by existing models.

Feature Engineering Machine Translation +4

Syntax Aware LSTM Model for Chinese Semantic Role Labeling

no code implementations 3 Apr 2017 Feng Qian, Lei Sha, Baobao Chang, Lu-chen Liu, Ming Zhang

For the semantic role labeling (SRL) task, both traditional methods and recent recurrent neural network (RNN) based methods rely on feature engineering to utilize parsing information.

Chinese Semantic Role Labeling Dependency Parsing +2

Improving Chinese SRL with Heterogeneous Annotations

no code implementations 22 Feb 2017 Qiaolin Xia, Baobao Chang, Zhifang Sui

Previous studies on Chinese semantic role labeling (SRL) have concentrated on a single semantically annotated corpus.

Chinese Semantic Role Labeling Semantic Role Labeling

Towards Time-Aware Knowledge Graph Completion

no code implementations COLING 2016 Tingsong Jiang, Tianyu Liu, Tao Ge, Lei Sha, Baobao Chang, Sujian Li, Zhifang Sui

In this paper, we present a novel time-aware knowledge graph completion model that is able to predict links in a KG using both the existing facts and the temporal information of the facts.

Question Answering Relation Extraction +1
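
A hedged sketch of one generic way to make a link predictor time-aware, in the spirit of the description above: extend a TransE-style score with a temporal embedding so the same triple can score differently at different times. This is a generic variant for illustration, not necessarily the paper's exact model.

```python
# Sketch: TransE-style scoring with an added temporal embedding.
import torch
import torch.nn as nn

class TimeAwareTransE(nn.Module):
    def __init__(self, n_ent, n_rel, n_time, dim=100):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        self.time = nn.Embedding(n_time, dim)

    def score(self, h, r, t, tau):
        """Lower is better: || e_h + (r + tau) - e_t ||_1."""
        return (self.ent(h) + self.rel(r) + self.time(tau)
                - self.ent(t)).abs().sum(-1)
```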

Event Detection with Burst Information Networks

no code implementations COLING 2016 Tao Ge, Lei Cui, Baobao Chang, Zhifang Sui, Ming Zhou

Retrospective event detection is an important task for discovering previously unidentified events in a text stream.

Clustering Event Detection

Reading and Thinking: Re-read LSTM Unit for Textual Entailment Recognition

no code implementations COLING 2016 Lei Sha, Baobao Chang, Zhifang Sui, Sujian Li

After reading the premise again, the model can gain a better understanding of the premise, which in turn improves its understanding of the hypothesis.

Information Retrieval Machine Translation +4

Aligning Coordinated Text Streams through Burst Information Network Construction and Decipherment

no code implementations 27 Sep 2016 Tao Ge, Qing Dou, Xiaoman Pan, Heng Ji, Lei Cui, Baobao Chang, Zhifang Sui, Ming Zhou

We introduce a novel Burst Information Network (BINet) representation that can display the most important information and illustrate the connections among bursty entities, events and keywords in the corpus.

Decipherment Translation

Joint Learning Templates and Slots for Event Schema Induction

no code implementations NAACL 2016 Lei Sha, Sujian Li, Baobao Chang, Zhifang Sui

Automatic event schema induction (AESI) aims to extract meta-events from raw text, in other words, to find out what event types (templates) may exist in the raw text and what roles (slots) may exist in each event type.

