In this paper, we describe our system for SemEval-2022 Task 2: Multilingual Idiomaticity Detection and Sentence Embedding.
In this task, opinions must be parsed by considering both structure- and context-dependent subjective aspects, which differs from typical dependency parsing.
We rethink this and adopt a well-grounded set of deduction rules based on formal logic theory, which, when combined in multiple steps, can derive any other deduction rule.
Masked language modeling (MLM) is a widely used self-supervised pretraining objective, in which a model must predict the original token that has been replaced with a mask token, given the surrounding context.
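The masking step of this objective can be sketched as follows; the example sentence, mask probability, and helper names are illustrative, not any particular paper's implementation:

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Replace a random subset of tokens with [MASK].

    Returns the masked sequence plus a dict mapping each masked
    position to the original token the model must predict there.
    """
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(MASK)
            targets[i] = tok  # training target: recover this token
        else:
            masked.append(tok)
    return masked, targets

tokens = "the cat sat on the mat".split()
masked, targets = mask_tokens(tokens, mask_prob=0.3, seed=1)
```

The pretraining loss is then the cross-entropy of the model's predictions at exactly the positions stored in `targets`.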
We propose a fundamental theory of ensemble learning that answers the central question: what factors make an ensemble system good or bad?
This paper introduces the proposed automatic minuting system of the Hitachi team for the First Shared Task on Automatic Minuting (AutoMin-2021).
This paper presents the first study of cross-lingual transfer for semantic dependency parsing.
Due to the unsupervised nature of the task, we focused on investigating the similarity measures induced by different layers of various pre-trained Transformer-based language models, which can serve as good approximations of the human sense of word similarity.
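A common similarity measure over layer-wise contextual embeddings is cosine similarity; a minimal sketch with made-up vectors (the embeddings below are illustrative, not real model outputs):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Illustrative embeddings for one word pair taken from some layer.
cat_emb = [0.8, 0.1, 0.3]
dog_emb = [0.7, 0.2, 0.4]
sim = cosine_similarity(cat_emb, dog_emb)
```

Computing such scores per layer and correlating them with human similarity judgments is one way to compare how well each layer approximates human word similarity.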
Our experimental results show that SaS outperforms a naive average ensemble by leveraging both weaker and high-performing PLMs.
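For context, the naive average ensemble baseline simply averages the class probabilities output by each PLM; a minimal sketch (the three probability vectors are made-up numbers, not results from the paper, and SaS itself is not reproduced here):

```python
def average_ensemble(prob_dists):
    """Naive average ensemble: mean of per-class probabilities across models."""
    n = len(prob_dists)
    num_classes = len(prob_dists[0])
    return [sum(d[c] for d in prob_dists) / n for c in range(num_classes)]

preds = [
    [0.7, 0.3],  # a high-performing PLM
    [0.6, 0.4],
    [0.4, 0.6],  # a weaker PLM
]
avg = average_ensemble(preds)
```

Because every model receives equal weight, a weak PLM dilutes the ensemble, which is the failure mode a weighted scheme can avoid.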
Users of social networking services often share their emotions via multi-modal content, typically images with text embedded in them.
This paper presents our system for SemEval-2020 Task 10: Emphasis Selection for Written Text in Visual Media.
In this paper, we present our system for SemEval-2020 Task 11, where we tackle propaganda span identification (SI) and technique classification (TC).
This paper presents our proposed parser for the shared task on Meaning Representation Parsing (MRP 2020) at CoNLL, where participant systems were required to parse five types of graphs in different languages.
Our proposed model incorporates (i) task-specific parameterization (TSP), which effectively encodes a sequence of propositions, and (ii) proposition-level biaffine attention (PLBA), which can predict non-tree argument structures consisting of edges.
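The paper's exact PLBA parameterization is not detailed here; a generic biaffine edge scorer in the style of Dozat and Manning, with illustrative dimensions and random weights, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # hidden size (illustrative)
n = 3  # number of propositions (illustrative)

H = rng.normal(size=(n, d))    # one encoder vector per proposition
U = rng.normal(size=(d, d))    # bilinear weight matrix
w = rng.normal(size=(2 * d,))  # linear weight vector
b = 0.0                        # bias

def biaffine_scores(H, U, w, b):
    """Score every ordered pair (i, j):
    s[i, j] = H[i] @ U @ H[j] + w @ [H[i]; H[j]] + b
    """
    n = H.shape[0]
    s = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s[i, j] = H[i] @ U @ H[j] + w @ np.concatenate([H[i], H[j]]) + b
    return s

S = biaffine_scores(H, U, w, b)
# Edges can then be predicted independently per pair (e.g. sigmoid(S) > 0.5),
# which permits non-tree structures, unlike argmax-per-head tree decoding.
```

Scoring each proposition pair independently is what allows the predicted argument graph to contain shared or crossing edges rather than being restricted to a tree.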
To analyze persuasive strategies, it is important to understand how individuals construct posts and comments based on the semantics of the argumentative components.
In online arguments, identifying how users construct their arguments to persuade others is important for directly understanding persuasive strategies.
This paper describes the proposed system of the Hitachi team for the Cross-Framework Meaning Representation Parsing (MRP 2019) shared task.
In analyzing online persuasion, an important goal is to semantically understand how people construct comments to persuade others.
Argument Mining (AM) is a relatively recent discipline that focuses on extracting claims and premises from discourse and inferring their structure.