Search Results for author: Hiroaki Ozaki

Found 17 papers, 3 papers with code

How does the task complexity of masked pretraining objectives affect downstream performance?

1 code implementation · 18 May 2023 · Atsuki Yamaguchi, Hiroaki Ozaki, Terufumi Morishita, Gaku Morio, Yasuhiro Sogawa

Masked language modeling (MLM) is a widely used self-supervised pretraining objective in which a model must predict the original tokens that have been replaced with a mask token, given the surrounding context.

Language Modelling · Masked Language Modeling
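The corruption step of the MLM objective described above can be sketched in plain Python. This is a minimal illustration under assumed simplifications: real systems operate on subword tokens, and BERT-style MLM uses an 80/10/10 mask/random/keep replacement scheme rather than pure masking.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """Replace a random subset of tokens with [MASK]; return the
    corrupted sequence plus the targets (position -> original token)
    that the model would be trained to recover."""
    rng = rng or random.Random(0)
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            corrupted.append(MASK)
            targets[i] = tok  # the model must predict this from context
        else:
            corrupted.append(tok)
    return corrupted, targets

tokens = "the cat sat on the mat".split()
corrupted, targets = mask_tokens(tokens, rng=random.Random(1))
```

The paper's question is how the difficulty of this prediction task (e.g. how informative the targets are) affects downstream performance after fine-tuning.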

Controlling keywords and their positions in text generation

1 code implementation · 19 Apr 2023 · Yuichi Sasazawa, Terufumi Morishita, Hiroaki Ozaki, Osamu Imaichi, Yasuhiro Sogawa

In this paper, we tackle the novel task of controlling not only which keywords appear but also the position of each keyword in generated text.

Story Generation

Hitachi at SemEval-2023 Task 3: Exploring Cross-lingual Multi-task Strategies for Genre and Framing Detection in Online News

no code implementations · 3 Mar 2023 · Yuta Koreeda, Ken-ichi Yokote, Hiroaki Ozaki, Atsuki Yamaguchi, Masaya Tsunokake, Yasuhiro Sogawa

Given the multilingual, multi-task nature of the task and its low-resource setting, we investigated different cross-lingual and multi-task strategies for training pretrained language models.

Rethinking Fano's Inequality in Ensemble Learning

1 code implementation · 25 May 2022 · Terufumi Morishita, Gaku Morio, Shota Horiguchi, Hiroaki Ozaki, Nobuo Nukaga

We propose a fundamental theory of ensemble learning that answers the central question: what factors make an ensemble system good or bad?

Ensemble Learning

Hitachi at SemEval-2020 Task 7: Stacking at Scale with Heterogeneous Language Models for Humor Recognition

no code implementations · SemEval 2020 · Terufumi Morishita, Gaku Morio, Hiroaki Ozaki, Toshinori Miyoshi

Our experimental results show that Stacking at Scale (SaS) outperforms a naive average ensemble by leveraging weaker PLMs alongside high-performing ones.
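The difference between a naive average ensemble and a stacked one can be illustrated with a toy sketch. All numbers and the least-squares meta-learner below are illustrative assumptions, not the paper's actual SaS architecture:

```python
def mse(pred, y):
    return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

# Hypothetical validation-set probabilities from two base models, plus gold labels.
p1 = [0.9, 0.2, 0.8, 0.3]
p2 = [0.6, 0.4, 0.7, 0.5]
y = [1.0, 0.0, 1.0, 0.0]

# Naive average ensemble: fixed, equal weights for every base model.
avg = [(a + b) / 2 for a, b in zip(p1, p2)]

# Stacking: fit weights w1, w2 by least squares (2x2 normal equations),
# so a meta-learner decides how much to trust each base model.
a = sum(x * x for x in p1)
b = sum(x * z for x, z in zip(p1, p2))
c = sum(z * z for z in p2)
d1 = sum(x * t for x, t in zip(p1, y))
d2 = sum(z * t for z, t in zip(p2, y))
det = a * c - b * b
w1 = (c * d1 - b * d2) / det
w2 = (a * d2 - b * d1) / det
stacked = [w1 * x + w2 * z for x, z in zip(p1, p2)]
```

Because the average is itself one particular linear combination, the fitted weights can never do worse than it on the fitting data; this is why learned stacking can exploit even weak base models.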

Hitachi at SemEval-2020 Task 3: Exploring the Representation Spaces of Transformers for Human Sense Word Similarity

no code implementations · SemEval 2020 · Terufumi Morishita, Gaku Morio, Hiroaki Ozaki, Toshinori Miyoshi

Because the task is unsupervised, we concentrated on the similarity measures induced by different layers of different pre-trained Transformer-based language models, which can serve as good approximations of the human sense of word similarity.

Word Similarity
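A layer-wise similarity measure of the kind described above is typically the cosine similarity between a word pair's hidden states at each layer. The vectors below are made-up stand-ins for real Transformer activations, and the layer indices are illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical per-layer hidden states for two words (layer index -> vector).
layer_vecs = {
    "cat": {0: [1.0, 0.0, 0.2], 11: [0.9, 0.1, 0.3]},
    "dog": {0: [0.8, 0.1, 0.1], 11: [0.85, 0.2, 0.25]},
}

# Comparing the measure layer by layer reveals which layers best
# approximate human similarity judgments.
sims = {layer: cosine(layer_vecs["cat"][layer], layer_vecs["dog"][layer])
        for layer in (0, 11)}
```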

Hitachi at MRP 2020: Text-to-Graph-Notation Transducer

no code implementations · CoNLL 2020 · Hiroaki Ozaki, Gaku Morio, Yuta Koreeda, Terufumi Morishita, Toshinori Miyoshi

This paper presents our proposed parser for the shared task on Meaning Representation Parsing (MRP 2020) at CoNLL, where participant systems were required to parse five types of graphs in different languages.

Towards Better Non-Tree Argument Mining: Proposition-Level Biaffine Parsing with Task-Specific Parameterization

no code implementations · ACL 2020 · Gaku Morio, Hiroaki Ozaki, Terufumi Morishita, Yuta Koreeda, Kohsuke Yanai

Our proposed model incorporates (i) task-specific parameterization (TSP), which effectively encodes a sequence of propositions, and (ii) proposition-level biaffine attention (PLBA), which can predict non-tree argument structures edge by edge.

Argument Mining
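A proposition-level biaffine scorer of the kind described can be sketched as s(i, j) = h_i^T U h_j + w^T [h_i; h_j] + b, with an edge predicted wherever the score is positive. Because each pair is scored independently, the result need not be a tree. All vectors and parameter values below are illustrative, not the paper's trained weights:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def biaffine_score(h_i, h_j, U, w, b):
    """Biaffine edge score: h_i^T U h_j + w^T [h_i; h_j] + b."""
    # h_i + h_j concatenates the two Python lists, i.e. [h_i; h_j].
    return dot(h_i, matvec(U, h_j)) + dot(w, h_i + h_j) + b

# Toy proposition encodings and parameters (illustrative values).
props = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
U = [[1.0, 0.0], [0.0, 1.0]]
w = [0.1, 0.1, 0.1, 0.1]
b = -0.4

# Each ordered pair is scored independently, so a proposition may have
# several incoming edges (or none) -- the structure need not be a tree.
edges = [(i, j) for i in range(len(props)) for j in range(len(props))
         if i != j and biaffine_score(props[i], props[j], U, w, b) > 0]
```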
