no code implementations • NAACL (DeeLIO) 2021 • Taku Sakamoto, Akiko Aizawa
In this task, we use two evaluation metrics to evaluate the language models in terms of the symbolic and quantitative aspects of the numerals, respectively.
no code implementations • WOSP 2020 • Paul Molloy, Joeran Beel, Akiko Aizawa
The prediction can be used in the same way as real citation proximity to calculate document relatedness, even for uncited documents.
no code implementations • EMNLP (sdp) 2020 • Takuto Asakura, André Greiner-Petter, Akiko Aizawa, Yusuke Miyao
Our results indicate that it is worthwhile to develop techniques for the proposed task, as they can contribute to further progress in mathematical language processing.
no code implementations • EMNLP (MRQA) 2021 • Kazutoshi Shinoda, Saku Sugawara, Akiko Aizawa
Question answering (QA) models for reading comprehension have been demonstrated to exploit unintended dataset biases such as question–context lexical overlap.
1 code implementation • Findings (EMNLP) 2021 • Timo Spinde, Manuel Plank, Jan-David Krieger, Terry Ruas, Bela Gipp, Akiko Aizawa
Fine-tuning and evaluating the model on our proposed supervised data set, we achieve a macro F1-score of 0.804, outperforming existing methods.
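For reference, the macro F1-score reported above is the unweighted mean of the per-class F1 scores. A minimal sketch with made-up labels (not the paper's data):

```python
# Minimal sketch of macro F1: the unweighted mean of per-class F1 scores.
# The labels below are made up for illustration, not taken from the paper's dataset.
from sklearn.metrics import f1_score

y_true = ["biased", "neutral", "neutral", "biased", "neutral"]
y_pred = ["biased", "neutral", "biased", "biased", "neutral"]

print(f1_score(y_true, y_pred, average="macro"))
```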
1 code implementation • 15 Mar 2022 • Taichi Iki, Akiko Aizawa
We develop task pages with and without page transitions and propose a BERT extension for the framework.
1 code implementation • 23 Sep 2021 • Kazutoshi Shinoda, Saku Sugawara, Akiko Aizawa
Question answering (QA) models for reading comprehension have been demonstrated to exploit unintended dataset biases such as question-context lexical overlap.
1 code implementation • ACL 2020 • Florian Boudin, Ygor Gallina, Akiko Aizawa
Sequence-to-sequence models have led to significant progress in keyphrase generation, but it remains unknown whether they are reliable enough to be beneficial for document retrieval.
1 code implementation • ACL 2021 • Johannes Mario Meissner, Napat Thumwanit, Saku Sugawara, Akiko Aizawa
Natural Language Inference (NLI) datasets contain examples with highly ambiguous labels.
1 code implementation • 29 May 2021 • Takuma Udagawa, Akiko Aizawa
Common grounding is the process of creating and maintaining mutual understandings, which is a critical aspect of sophisticated human communication.
End-To-End Dialogue Modelling
Goal-Oriented Dialogue Systems
1 code implementation • EMNLP 2021 • Taichi Iki, Akiko Aizawa
One way to create a vision-and-language (V&L) model is to extend a language model through structural modifications and V&L pre-training.
1 code implementation • EACL 2021 • Junfeng Jiang, An Wang, Akiko Aizawa
It aims to extract the corresponding opinion words for a given opinion target in a review sentence.
Aspect-Based Sentiment Analysis
target-oriented opinion words extraction
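As a rough illustration of the target-oriented opinion words extraction setting described above (a made-up example, not drawn from any benchmark):

```python
# Made-up example of the task format: given a review sentence and an opinion
# target, the goal is to extract the opinion words describing that target.
sentence = "The pasta was delicious but the service was painfully slow ."
examples = [
    {"target": "pasta",   "opinion_words": ["delicious"]},
    {"target": "service", "opinion_words": ["painfully", "slow"]},
]
for ex in examples:
    print(ex["target"], "->", ex["opinion_words"])
```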
no code implementations • EACL 2021 • Kenichi Iwatsuki, Akiko Aizawa
In this study, we considered a fully automated construction of a CF-labelled FE database using the top-down approach, in which the CF labels are first assigned to sentences, and then the FEs are extracted.
1 code implementation • COLING 2020 • Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, Akiko Aizawa
The evidence information has two benefits: (i) providing a comprehensive explanation for predictions and (ii) evaluating the reasoning skills of a model.
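A schematic of what such evidence information can look like for a multi-hop question (a hypothetical record; the field names are illustrative and do not reproduce the dataset's exact schema):

```python
# Hypothetical multi-hop QA record with evidence triples; the field names are
# illustrative and do not reproduce the dataset's exact schema.
example = {
    "question": "Who directed the film in which actor X starred in 2010?",
    "answer": "Director Y",
    "evidence": [
        ("Actor X", "starred in", "Film Z"),
        ("Film Z", "directed by", "Director Y"),
    ],
}
for subj, rel, obj in example["evidence"]:
    print(f"{subj} --{rel}--> {obj}")
```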
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Taichi Iki, Akiko Aizawa
However, few models consider fusing linguistic features with multiple visual features that have different receptive-field sizes, even though the appropriate receptive-field size of visual features intuitively varies depending on the expression.
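A minimal sketch of the general idea of fusing a language feature with visual features pooled at several receptive-field sizes (illustrative PyTorch only, not the paper's model; the module name, dimensions, and gating scheme are assumptions):

```python
# Sketch: pool visual feature maps at several grid sizes (coarse to fine) and
# gate each scale with the language feature before concatenation. Illustrative
# only; not the paper's architecture.
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    def __init__(self, text_dim=256, vis_dim=256, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.gates = nn.ModuleList([nn.Linear(text_dim, vis_dim) for _ in scales])

    def forward(self, feat_map, text_vec):
        # feat_map: (B, C, H, W) visual features; text_vec: (B, text_dim)
        fused = []
        for scale, gate in zip(self.scales, self.gates):
            pooled = nn.functional.adaptive_max_pool2d(feat_map, scale).flatten(2).mean(-1)
            fused.append(pooled * torch.sigmoid(gate(text_vec)))  # language-gated scale
        return torch.cat(fused, dim=-1)

x = torch.randn(2, 256, 16, 16)
t = torch.randn(2, 256)
print(MultiScaleFusion()(x, t).shape)  # torch.Size([2, 768])
```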
1 code implementation • COLING 2020 • Vitou Phy, Yang Zhao, Akiko Aizawa
For instance, specificity is mandatory in a food-ordering dialogue task, whereas fluency is preferred in a language-teaching dialogue system.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Takuma Udagawa, Takato Yamazaki, Akiko Aizawa
Recent models achieve promising results in visually grounded dialogues.
no code implementations • EMNLP (NLP-COVID19) 2020 • Akiko Aizawa, Frederic Bergeron, Junjie Chen, Fei Cheng, Katsuhiko Hayashi, Kentaro Inui, Hiroyoshi Ito, Daisuke Kawahara, Masaru Kitsuregawa, Hirokazu Kiyomaru, Masaki Kobayashi, Takashi Kodama, Sadao Kurohashi, Qianying Liu, Masaki Matsubara, Yusuke Miyao, Atsuyuki Morishima, Yugo Murawaki, Kazumasa Omura, Haiyue Song, Eiichiro Sumita, Shinji Suzuki, Ribeka Tanaka, Yu Tanaka, Masashi Toyoda, Nobuhiro Ueda, Honai Ueoka, Masao Utiyama, Ying Zhong
The global pandemic of COVID-19 has made the public pay close attention to related news, covering various domains, such as sanitation, treatment, and effects on education.
no code implementations • 18 Jun 2020 • Kenichi Iwatsuki, Florian Boudin, Akiko Aizawa
We also propose a new extraction method that utilises named entities and dependency structures to remove the non-formulaic part from a sentence.
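A hedged sketch of the general idea (not the authors' exact extraction method): use named entities and the dependency parse to mask the content-specific part of a sentence, leaving its formulaic frame. The example sentence and model name are assumptions, and spaCy's small English model is assumed to be installed.

```python
# Sketch only: mask named-entity tokens together with their dependency subtrees
# so that what remains approximates the formulaic frame of the sentence.
import spacy

nlp = spacy.load("en_core_web_sm")

def formulaic_frame(sentence: str) -> str:
    doc = nlp(sentence)
    masked = set()
    for ent in doc.ents:
        for tok in ent:
            masked.update(t.i for t in tok.subtree)  # entity token + its dependents
    return " ".join("*" if tok.i in masked else tok.text for tok in doc)

print(formulaic_frame("In this paper, we propose FooNet for relation extraction on ACE 2005."))
```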
no code implementations • LREC 2020 • Kenichi Iwatsuki, Florian Boudin, Akiko Aizawa
Formulaic expressions, such as 'in this paper we propose', are used by authors of scholarly papers to perform communicative functions; the communicative function of the present example is 'stating the aim of the paper'.
1 code implementation • ACL 2021 • Kazutoshi Shinoda, Saku Sugawara, Akiko Aizawa
While most existing QAG methods aim to improve the quality of synthetic examples, we conjecture that diversity-promoting QAG can mitigate the sparsity of training sets and lead to better robustness.
no code implementations • EACL 2021 • Saku Sugawara, Pontus Stenetorp, Akiko Aizawa
Machine reading comprehension (MRC) has received considerable attention as a benchmark for natural language understanding.
Machine Reading Comprehension
Natural Language Understanding
1 code implementation • 7 Feb 2020 • Andre Greiner-Petter, Moritz Schubotz, Fabian Mueller, Corinna Breitinger, Howard S. Cohl, Akiko Aizawa, Bela Gipp
The contributions of our presented research are as follows: (1) we present the first distributional analysis of mathematical formulae on arXiv and zbMATH; (2) we retrieve relevant mathematical objects for given textual search queries (e.g., linking $P_{n}^{(\alpha, \beta)}\!\left(x\right)$ with 'Jacobi polynomial'); (3) we extend zbMATH's search engine by providing relevant mathematical formulae; and (4) we exemplify the applicability of the results by presenting auto-completion for math inputs as the first contribution to math recommendation systems.
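A toy sketch of the distributional idea behind contribution (2): score candidate textual descriptions for a formula by how often they co-occur with the formula in its surrounding text. This is an illustration under simplified assumptions, not the arXiv/zbMATH pipeline.

```python
# Toy co-occurrence scorer: rank candidate descriptions for a formula by how
# often they appear in sentences that mention the formula. Illustrative only.
from collections import Counter

def rank_descriptions(contexts, candidates):
    """contexts: sentences containing the target formula;
    candidates: candidate noun phrases, e.g. 'Jacobi polynomial'."""
    counts = Counter()
    for sent in contexts:
        for cand in candidates:
            if cand.lower() in sent.lower():
                counts[cand] += 1
    return counts.most_common()

contexts = ["The Jacobi polynomial P_n^{(a,b)}(x) is orthogonal on [-1, 1]."]
print(rank_descriptions(contexts, ["Jacobi polynomial", "Bessel function"]))
```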
no code implementations • 8 Jan 2020 • Nobuhiro Ito, Yuya Suzuki, Akiko Aizawa
A natural language interface for such automation is expected to serve as an elemental technology for realizing IPA.
no code implementations • 21 Nov 2019 • Saku Sugawara, Pontus Stenetorp, Kentaro Inui, Akiko Aizawa
Existing analysis work in machine reading comprehension (MRC) is largely concerned with evaluating the capabilities of systems.
1 code implementation • 18 Nov 2019 • Takuma Udagawa, Akiko Aizawa
Common grounding is the process of creating, repairing and updating mutual understandings, which is a fundamental aspect of natural language conversation.
1 code implementation • 8 Jul 2019 • Takuma Udagawa, Akiko Aizawa
Finally, we evaluate and analyze baseline neural models on a simple subtask that requires recognition of the created common ground.
no code implementations • ACL 2019 • Yang Zhao, Xiaoyu Shen, Wei Bi, Akiko Aizawa
First, the word graph approach that simply concatenates fragments from multiple sentences may yield non-fluent or ungrammatical compression.
no code implementations • 20 May 2019 • André Greiner-Petter, Terry Ruas, Moritz Schubotz, Akiko Aizawa, William Grosky, Bela Gipp
Nowadays, Machine Learning (ML) is seen as the universal solution to improve the effectiveness of information retrieval (IR) methods.
no code implementations • 26 Nov 2018 • Joeran Beel, Andrew Collins, Akiko Aizawa
In this paper, we introduce Mr. DLib's "Recommendations as-a-Service" (RaaS) API that allows operators of academic products to easily integrate a scientific recommender system into their products.
2 code implementations • 3 Sep 2018 • Junjun Jiang, Yi Yu, Suhua Tang, Jiayi Ma, Akiko Aizawa, Kiyoharu Aizawa
To this end, this study incorporates the contextual information of image patches and proposes a powerful and efficient context-patch based face hallucination approach, namely Thresholding Locality-constrained Representation and Reproducing learning (TLcR-RL).
1 code implementation • EMNLP 2018 • Saku Sugawara, Kentaro Inui, Satoshi Sekine, Akiko Aizawa
From this study, we observed that (i) the baseline performance on the hard subsets degrades markedly compared with that on the entire datasets, (ii) hard questions require knowledge inference and multiple-sentence reasoning in comparison with easy questions, and (iii) multiple-choice questions tend to require a broader range of reasoning skills than answer extraction and description questions.
1 code implementation • COLING 2018 • Kenichi Iwatsuki, Akiko Aizawa
Formulaic expressions (FEs) used in scholarly papers, such as 'there has been little discussion about', are helpful for non-native English speakers.
no code implementations • ACL 2018 • Yang Zhao, Zhiyuan Luo, Akiko Aizawa
We herein present a language-model-based evaluator for deletion-based sentence compression and view this task as a series of deletion-and-evaluation operations using the evaluator.
Ranked #2 on Sentence Compression on Google Dataset
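For the deletion-based compression entry above, a toy deletion-and-evaluation loop scored by an off-the-shelf language model might look like the following. This is a sketch only: the paper's evaluator and deletion policy are learned, and GPT-2 is used here purely for illustration.

```python
# Toy deletion-and-evaluation loop: greedily drop the word whose removal yields
# the lowest language-model loss. Illustrative only, not the paper's method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def lm_loss(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return lm(ids, labels=ids).loss.item()

def compress(sentence: str, steps: int = 3) -> str:
    words = sentence.split()
    for _ in range(steps):
        if len(words) <= 2:
            break
        candidates = [" ".join(words[:i] + words[i + 1:]) for i in range(len(words))]
        words = min(candidates, key=lm_loss).split()
    return " ".join(words)

print(compress("The quick brown fox quickly jumped over the very lazy dog"))
```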
no code implementations • SEMEVAL 2018 • Víctor Suárez-Paniagua, Isabel Segura-Bedmar, Akiko Aizawa
This paper reports our participation in SemEval-2018 Task 7 on the extraction and classification of relationships between entities in scientific papers.
no code implementations • 8 May 2018 • Yi Yu, Suhua Tang, Kiyoharu Aizawa, Akiko Aizawa
Given a photo as input, this model performs (i) exact venue search (find the venue where the photo was taken), and (ii) group venue search (find relevant venues with the same category as that of the photo), by the cross-modal correlation between the input photo and textual description of venues.
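A toy sketch of the retrieval step described above: rank venues by the similarity between a photo embedding and venue-description embeddings. The embeddings here are random placeholders; the paper learns the cross-modal correlation model that would produce them.

```python
# Toy cross-modal venue search: rank venues by cosine similarity between a photo
# embedding and venue-description embeddings. Random placeholder vectors only.
import numpy as np

rng = np.random.default_rng(0)
photo_emb = rng.normal(size=128)                       # embedding of the query photo
venue_embs = {f"venue_{i}": rng.normal(size=128) for i in range(5)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranking = sorted(venue_embs, key=lambda v: cosine(photo_emb, venue_embs[v]), reverse=True)
print(ranking[:3])  # top-3 candidates; exact vs. group search would filter these further
```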
no code implementations • 19 Feb 2018 • Andrew Collins, Dominika Tkaczyk, Akiko Aizawa, Joeran Beel
We conduct a study in a real-world recommender system that delivered ten million related-article recommendations to the users of the digital library Sowiport, and the reference manager JabRef.
no code implementations • ACL 2017 • Saku Sugawara, Yusuke Kido, Hikaru Yokono, Akiko Aizawa
Knowing the quality of reading comprehension (RC) datasets is important for the development of natural-language understanding systems.
no code implementations • ACL 2017 • Xiaoyu Shen, Hui Su, Yan-ran Li, Wenjie Li, Shuzi Niu, Yang Zhao, Akiko Aizawa, Guoping Long
Deep latent variable models have been shown to facilitate the response generation for open-domain dialog systems.
no code implementations • COLING 2016 • Takeshi Abekawa, Akiko Aizawa
In this paper, we discuss our ongoing efforts to construct a scientific paper browsing system that helps users to read and understand advanced technical content distributed in PDF.
no code implementations • COLING 2016 • Hajime Senuma, Akiko Aizawa
The recent proliferation of smart devices necessitates methods to learn small-sized models.
no code implementations • LREC 2016 • Michael Carl, Akiko Aizawa, Masaru Yamada
Speech-enabled interfaces have the potential to become one of the most efficient and ergonomic environments for human-computer interaction and for text production.
1 code implementation • LREC 2016 • Yuka Tateisi, Tomoko Ohta, Sampo Pyysalo, Yusuke Miyao, Akiko Aizawa
In our scheme, mentions of entities are annotated with ontology-based types, and the roles of the entities are annotated as relations with other entities described in the text.
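A hypothetical record in the spirit of that scheme, with entity mentions typed and roles expressed as relations between mentions (the field names and type labels are illustrative, not the corpus's exact schema):

```python
# Hypothetical annotation record: entity mentions carry ontology-based types and
# entity roles are relations between mentions. Illustrative field names only.
annotation = {
    "text": "We train a parser on the Penn Treebank.",
    "entities": [
        {"id": "T1", "type": "Method", "span": "parser"},
        {"id": "T2", "type": "Resource", "span": "Penn Treebank"},
    ],
    "relations": [
        {"type": "trained_on", "arg1": "T1", "arg2": "T2"},
    ],
}
print(annotation["relations"])
```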
1 code implementation • 19 Dec 2014 • Hubert Soyer, Pontus Stenetorp, Akiko Aizawa
In this work, we present a novel neural network based architecture for inducing compositional crosslingual word representations.
no code implementations • LREC 2014 • Yuka Tateisi, Yo Shidahara, Yusuke Miyao, Akiko Aizawa
We designed a new annotation scheme for formalising relation structures in research papers, through the investigation of computer science papers.
1 code implementation • LREC 2014 • Panot Chaimongkol, Akiko Aizawa, Yuka Tateisi
Through these comparisons, we have demonstrated quantitatively that our manually annotated corpus differs from a general-domain corpus, which suggests deep differences between general-domain and scientific texts and indicates that coreference resolution for the two may call for different approaches.
no code implementations • LREC 2012 • Hidetsugu Nanba, Toshiyuki Takezawa, Kiyoko Uchiyama, Akiko Aizawa
Retrieving research papers and patents is important for any researcher assessing the scope of a field with high industrial relevance.
no code implementations • LREC 2012 • Yuichiroh Matsubayashi, Yusuke Miyao, Akiko Aizawa
In this paper, we report our framework for creating the corpus and the current status of creating an LCS dictionary for Japanese predicates.