no code implementations • EcomNLP (COLING) 2020 • Shotaro Misawa, Yasuhide Miura, Tomoki Taniguchi, Tomoko Ohkuma
To generate a slogan, we apply an encoder–decoder model, which has proven effective in many kinds of natural language generation tasks, such as abstractive summarization.
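A minimal sketch of the encoder–decoder inference flow described above. Every component here is an illustrative stand-in (a real system would use learned RNN or Transformer encoders and decoders, not mean-pooled embeddings and a hand-written step function):

```python
# Toy encoder-decoder inference loop for text generation.
# All names and components are illustrative stand-ins.

def encode(tokens, embeddings):
    """Encode the input description as the mean of its token embeddings."""
    dim = len(next(iter(embeddings.values())))
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def decode(context, step_fn, max_len=10, eos="</s>"):
    """Greedily generate output tokens conditioned on the context vector."""
    out = []
    prev = "<s>"
    for _ in range(max_len):
        tok = step_fn(context, prev, out)  # next-token predictor (stand-in)
        if tok == eos:
            break
        out.append(tok)
        prev = tok
    return out
```

In a trained model, `step_fn` would be the learned decoder step that scores the vocabulary given the context and the previously generated tokens.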
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Ryuji Kano, Yasuhide Miura, Tomoki Taniguchi, Tomoko Ohkuma
The training task of the model is to predict whether a reply candidate is a true reply to a post.
Extractive Summarization • Unsupervised Extractive Summarization
3 code implementations • NAACL 2021 • Yasuhide Miura, Yuhao Zhang, Emily Bao Tsai, Curtis P. Langlotz, Dan Jurafsky
We further show via a human evaluation and a qualitative analysis that our system leads to generations that are more factually complete and consistent compared to the baselines.
7 code implementations • 2 Oct 2020 • Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D. Manning, Curtis P. Langlotz
Existing work commonly relies on fine-tuning weights transferred from ImageNet pretraining, which is suboptimal due to drastically different image characteristics, or rule-based label extraction from the textual report data paired with medical images, which is inaccurate and hard to generalize.
no code implementations • WS 2019 • Yuki Tagawa, Motoki Taniguchi, Yasuhide Miura, Tomoki Taniguchi, Tomoko Ohkuma, Takayuki Yamamoto, Keiichi Nemoto
Knowledge graphs (KGs) are widely used in various NLP tasks.
no code implementations • IJCNLP 2019 • Toru Nishino, Shotaro Misawa, Ryuji Kano, Tomoki Taniguchi, Yasuhide Miura, Tomoko Ohkuma
The results show that our model generates more consistent headlines, key phrases and categories.
no code implementations • WS 2018 • Motoki Taniguchi, Tomoki Taniguchi, Takumi Takahashi, Yasuhide Miura, Tomoko Ohkuma
A simple entity linking approach with text match is used as the document selection component; it identifies relevant documents for a given claim by using mentioned entities as clues.
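A minimal sketch of that document selection step: match the entities mentioned in a claim against document titles. The capitalized-token heuristic for entity mentions is an assumption for illustration, not the paper's entity linker:

```python
# Toy entity-match document selection for claim verification.
# The entity extractor (capitalized tokens) is a naive stand-in.

def extract_entities(claim):
    """Naive entity mentions: capitalized tokens (illustrative only)."""
    return {tok.strip(".,") for tok in claim.split() if tok[:1].isupper()}

def select_documents(claim, doc_titles):
    """Return document titles that textually match a mentioned entity."""
    entities = {e.lower() for e in extract_entities(claim)}
    return sorted(t for t in doc_titles if t.lower() in entities)
```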
no code implementations • WS 2018 • Motoki Taniguchi, Yasuhide Miura, Tomoko Ohkuma
Information extraction about an event can be improved by incorporating external evidence.
no code implementations • EMNLP 2018 • Ryuji Kano, Yasuhide Miura, Motoki Taniguchi, Yan-Ying Chen, Francine Chen, Tomoko Ohkuma
We leverage a popularity measure in social media as a distant label for extractive summarization of online conversations.
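The distant-labeling idea above can be sketched as follows: messages whose popularity signal (e.g. a like count) clears a threshold are treated as positive extractive-summary labels. The threshold and data are illustrative assumptions:

```python
# Distant supervision from a popularity signal: popular messages in a
# conversation become positive labels for extractive summarization.
# The threshold (min_likes) is an illustrative assumption.

def distant_labels(messages, min_likes=10):
    """Label each (text, likes) message 1 if popular enough, else 0."""
    return [(text, 1 if likes >= min_likes else 0)
            for text, likes in messages]

def extract_summary(messages, min_likes=10):
    """Use the distant labels directly as an extractive summary."""
    return [text for text, label in distant_labels(messages, min_likes)
            if label == 1]
```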
no code implementations • COLING 2018 • Yasuhide Miura, Ryuji Kano, Motoki Taniguchi, Tomoki Taniguchi, Shotaro Misawa, Tomoko Ohkuma
We propose a model that integrates discussion structures with neural networks to classify discourse acts.
no code implementations • IJCNLP 2017 • Yasuhide Miura, Tomoki Taniguchi, Motoki Taniguchi, Shotaro Misawa, Tomoko Ohkuma
We propose a hierarchical neural network model for language variety identification that integrates information from a social network.
no code implementations • WS 2017 • Shotaro Misawa, Motoki Taniguchi, Yasuhide Miura, Tomoko Ohkuma
The contributions of this work are (1) verifying the effectiveness of the state-of-the-art NER model for Japanese, and (2) proposing a neural model that predicts a tag for each character using word and character information.
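One way to picture character-level tagging with word information: each character inherits a BIO tag from the word it belongs to. In the sketch below the word-level tags are given rather than predicted, so this only illustrates the tag expansion, not the paper's neural model:

```python
# Expand word-level tags to per-character BIO tags.
# Word segmentation and word tags are assumed as input here;
# the paper's model predicts character tags with a neural network.

def char_bio_tags(words, word_tags):
    """Expand word-level tags (e.g. 'PER', 'O') to character-level BIO tags."""
    tags = []
    for word, tag in zip(words, word_tags):
        for i, _ in enumerate(word):
            if tag == "O":
                tags.append("O")
            else:
                tags.append(("B-" if i == 0 else "I-") + tag)
    return tags
```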
no code implementations • ACL 2017 • Yasuhide Miura, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma
We propose a novel geolocation prediction model using a complex neural network.
no code implementations • WS 2016 • Tuan Anh Le, David Moeljadi, Yasuhide Miura, Tomoko Ohkuma
This paper describes our attempt to build a sentiment analysis system for Indonesian tweets.
no code implementations • WS 2016 • Yasuhide Miura, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma
In the test run of the task, the model achieved an accuracy of 40.91% and a median distance error of 69.50 km in message-level prediction, and an accuracy of 47.55% and a median distance error of 16.13 km in user-level prediction.
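The two metrics reported above can be sketched as follows: label accuracy over predicted locations, and the median great-circle (haversine) distance in km between predicted and true coordinates. The data in the usage notes are illustrative, not the task's:

```python
# Geolocation evaluation metrics: accuracy and median distance error.
import math

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))  # mean Earth radius 6371 km

def accuracy(preds, golds):
    """Fraction of predicted location labels that match the gold labels."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def median_distance_error(pred_coords, gold_coords):
    """Median haversine distance between predicted and gold coordinates."""
    d = sorted(haversine_km(p, g) for p, g in zip(pred_coords, gold_coords))
    n = len(d)
    return d[n // 2] if n % 2 else (d[n // 2 - 1] + d[n // 2]) / 2
```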