no code implementations • ACL (WAT) 2021 • Hideya Mino, Kazutaka Kinugawa, Hitoshi Ito, Isao Goto, Ichiro Yamada, Takenobu Tokunaga
In this paper, we use lexically-constrained neural machine translation (NMT), which concatenates the source sentence and the constrained words with a special token and feeds the result into the NMT encoder.
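The input formatting described above can be sketched as follows; the separator symbol `<sep>` and the helper name are illustrative assumptions, not the paper's exact implementation.

```python
def build_constrained_input(source_tokens, constraint_tokens, sep="<sep>"):
    """Concatenate the source sentence with the constrained target words,
    joined by a special separator token, to form one encoder input
    sequence. The separator symbol is a hypothetical choice."""
    return source_tokens + [sep] + constraint_tokens

# Example: constrain the output to contain the word "Diet".
encoder_input = build_constrained_input(
    ["kokkai", "de", "giron", "sareta"], ["Diet"])
```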
no code implementations • LREC 2022 • Hiroaki Yamada, Takenobu Tokunaga, Ryutaro Ohara, Keisuke Takeshita, Mihoko Sumida
Moreover, the scheme can capture the explicit causal relation between judges' decisions and their corresponding rationales, allowing multiple decisions in a document.
1 code implementation • EACL (BEA) 2021 • Jan Wira Gotama Putra, Simone Teufel, Takenobu Tokunaga
Two sentence encoders are employed, and we observed that non-fine-tuned models generally performed better with Sentence-BERT than with the BERT encoder.
no code implementations • LREC 2022 • Marcello Gecchele, Hiroaki Yamada, Takenobu Tokunaga, Yasuyo Sawaki, Mika Ishizuka
The algorithm was tested over the L2WS 2021 corpus.
no code implementations • 1 Dec 2023 • Hiroaki Yamada, Takenobu Tokunaga, Ryutaro Ohara, Akira Tokutsu, Keisuke Takeshita, Mihoko Sumida
This paper presents the first dataset for Japanese Legal Judgment Prediction (LJP), the Japanese Tort-case Dataset (JTD), which features two tasks: tort prediction and its rationale extraction.
1 code implementation • EMNLP (ArgMining) 2021 • Jan Wira Gotama Putra, Simone Teufel, Takenobu Tokunaga
Argumentative structure prediction aims to establish links between textual units and label the relationship between them, forming a structured representation for a given input text.
no code implementations • COLING 2020 • Hideya Mino, Hitoshi Ito, Isao Goto, Ichiro Yamada, Takenobu Tokunaga
The first problem is the quality of parallel corpora.
no code implementations • LREC 2020 • Hideya Mino, Hideki Tanaka, Hitoshi Ito, Isao Goto, Ichiro Yamada, Takenobu Tokunaga
The first problem is the quality of parallel corpora.
no code implementations • LREC 2020 • Haruna Ogawa, Hitoshi Nishikawa, Takenobu Tokunaga, Hikaru Yokono
Our platform enables data collectors to create their own video game, in which they can collect dialogue data for various types of tasks by using the platform's logging function.
no code implementations • LREC 2020 • Jan Wira Gotama Putra, Simone Teufel, Kana Matsumura, Takenobu Tokunaga
A separate tree view allows them to review their analysis in terms of the overall discourse structure.
no code implementations • WS 2019 • Hideya Mino, Hitoshi Ito, Isao Goto, Ichiro Yamada, Hideki Tanaka, Takenobu Tokunaga
The content-equivalent corpus was effective for improving translation quality, and our systems achieved the best human evaluation scores in the newswire translation tasks at WAT 2019.
no code implementations • WS 2019 • Marcello Gecchele, Hiroaki Yamada, Takenobu Tokunaga, Yasuyo Sawaki
We implemented the proposed method in a GUI tool, "Segment Matcher", that aids teachers in establishing links between corresponding IUs across the summary and the source text.
no code implementations • COLING 2018 • Shun-ya Fukunaga, Hitoshi Nishikawa, Takenobu Tokunaga, Hikaru Yokono, Tetsuro Takahashi
We implemented two models for this task: an SVM-based model and an RCNN-based model.
no code implementations • LREC 2018 • Nicoletta Calzolari, Khalid Choukri, Christopher Cieri, Thierry Declerck, Koiti Hasida, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, Stelios Piperidis, Takenobu Tokunaga, Sara Goggi, Hélène Mazo
no code implementations • IJCNLP 2017 • Hideya Mino, Masao Utiyama, Eiichiro Sumita, Takenobu Tokunaga
In this paper, we propose a neural machine translation (NMT) with a key-value attention mechanism on the source-side encoder.
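A minimal sketch of source-side key-value attention, assuming the encoder states are projected into separate keys (for score computation) and values (for the context vector); the matrix and function names are illustrative, not the paper's notation.

```python
import numpy as np

def key_value_attention(query, enc_states, W_k, W_v):
    """Key-value attention over source-side encoder states.

    enc_states: (T, d) encoder hidden states
    W_k: (d, d_k) projection producing keys used for matching
    W_v: (d, d_v) projection producing values used for the context
    query: (d_k,) decoder query vector
    Returns the (d_v,) context vector.
    """
    keys = enc_states @ W_k        # keys for scoring
    values = enc_states @ W_v      # values to be averaged
    scores = keys @ query          # one score per source position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()       # softmax over source positions
    return weights @ values        # weighted sum of values
```

Separating keys from values lets the model use different representations for deciding *where* to attend and *what* to pass to the decoder.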
no code implementations • RANLP 2017 • Takenobu Tokunaga, Hitoshi Nishikawa, Tomoya Iwakura
Utilising effective features in machine learning-based natural language processing (NLP) is crucial in achieving good performance for a given NLP task.
no code implementations • WS 2017 • Hiroaki Yamada, Simone Teufel, Takenobu Tokunaga
In particular, we utilize the hierarchical argument structure of the judgment documents.
no code implementations • WS 2017 • Arief Yudha Satria, Takenobu Tokunaga
This study provides a detailed analysis of the evaluation of English pronoun reference questions that are created automatically by machine.
no code implementations • WS 2017 • Jan Wira Gotama Putra, Takenobu Tokunaga
Coherence is a crucial feature of text because it is indispensable for conveying a text's communicative purpose and meaning to its readers.
no code implementations • COLING 2016 • Ryosuke Maki, Hitoshi Nishikawa, Takenobu Tokunaga
In this paper, we propose utilising eye gaze information for estimating parameters of a Japanese predicate argument structure (PAS) analysis model.
no code implementations • WS 2016 • Daiki Gotou, Hitoshi Nishikawa, Takenobu Tokunaga
In this paper, we extend an existing annotation scheme, ISO-Space, to annotate the spatial information necessary for the task of placing a specified object at a specified location, with a specified direction, according to a natural language instruction.
no code implementations • LREC 2016 • Dain Kaplan, Neil Rubens, Simone Teufel, Takenobu Tokunaga
Active learning (AL) is often used in corpus construction (CC) for selecting "informative" documents for annotation.
no code implementations • LREC 2014 • Ryu Iida, Takenobu Tokunaga
This paper presents the construction of a corpus of manually revised texts that includes both before- and after-revision information.
no code implementations • LREC 2012 • Takenobu Tokunaga, Ryu Iida, Asuka Terai, Naoko Kuriyama
In this respect, as originally planned, we succeeded in constructing a collection of corpora with a variety of characteristics by changing the configuration for each set of dialogues.
no code implementations • LREC 2012 • Atsushi Fujii, Yuya Fujii, Takenobu Tokunaga
Because the viewpoints required for explanation differ depending on the type of term, such as animal or disease, we model Wikipedia articles to extract a viewpoint structure for each term type.