Search Results for author: Hayato Kobayashi

Found 15 papers, 2 papers with code

Dataset Creation for Ranking Constructive News Comments

no code implementations • ACL 2019 • Soichiro Fujita, Hayato Kobayashi, Manabu Okumura

Ranking comments on an online news service is a practically important task for the service provider, and thus there have been many studies on this task.

Frustratingly Easy Model Ensemble for Abstractive Summarization

no code implementations • EMNLP 2018 • Hayato Kobayashi

Ensemble methods, which combine multiple models at decoding time, are now widely known to be effective for text-generation tasks.

Abstractive Text Summarization • Density Estimation • +3
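
The abstract above describes combining multiple models at decoding time. As a rough illustration (not the paper's method), the sketch below averages the next-token distributions of several generators during greedy decoding; the `next_token_logits` interface is a hypothetical placeholder.

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def ensemble_greedy_decode(models, src, bos_id, eos_id, max_len=50):
    """Greedy decoding that averages next-token probabilities over models.

    Each model is assumed to expose a hypothetical
    next_token_logits(src, prefix) method returning a vocabulary-sized
    logit vector; this is a sketch, not the paper's actual interface.
    """
    prefix = [bos_id]
    for _ in range(max_len):
        # Arithmetic mean of the per-model distributions (a simple linear ensemble).
        probs = np.mean(
            [softmax(m.next_token_logits(src, prefix)) for m in models],
            axis=0,
        )
        next_id = int(np.argmax(probs))
        prefix.append(next_id)
        if next_id == eos_id:
            break
    return prefix
```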

Extractive Headline Generation Based on Learning to Rank for Community Question Answering

no code implementations • COLING 2018 • Tatsuru Higurashi, Hayato Kobayashi, Takeshi Masuyama, Kazuma Murao

User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks.

Community Question Answering • Headline Generation • +1
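
The title refers to learning to rank for extractive headline generation. The sketch below is a generic pairwise learning-to-rank illustration that assumes precomputed candidate features; it is not the paper's model or feature set.

```python
import numpy as np

def pairwise_hinge_updates(w, feats_pos, feats_neg, lr=0.1, margin=1.0):
    """One SGD pass of a pairwise hinge loss: a good headline candidate
    (feats_pos) should score at least `margin` higher than a bad one
    (feats_neg). Features (e.g., length, position, keyword overlap) are
    assumed to be precomputed; this is a generic ranking sketch."""
    for x_pos, x_neg in zip(feats_pos, feats_neg):
        if w @ x_pos - w @ x_neg < margin:
            w = w + lr * (x_pos - x_neg)
    return w

def pick_headline(w, candidates, feats):
    """Return the candidate substring with the highest ranking score."""
    scores = [w @ x for x in feats]
    return candidates[int(np.argmax(scores))]
```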

Pretraining Sentiment Classifiers with Unlabeled Dialog Data

no code implementations • ACL 2018 • Toru Shimizu, Nobuyuki Shimizu, Hayato Kobayashi

Recent studies showed that pretraining with unlabeled data via a language model can improve the performance of classification models.

Classification • General Classification • +3
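
As a generic sketch of the idea summarized above — language-model pretraining followed by classifier fine-tuning, not the paper's architecture — the PyTorch-style code below shares one encoder between a language-model head and a sentiment head.

```python
import torch.nn as nn

class Encoder(nn.Module):
    """Shared text encoder: embedding + LSTM."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, tokens):               # tokens: (batch, seq)
        out, (h, _) = self.lstm(self.embed(tokens))
        return out, h[-1]                    # per-step states, final state

class LanguageModel(nn.Module):
    """Pretraining head: predict the next token at every position."""
    def __init__(self, encoder, vocab_size, hid_dim=256):
        super().__init__()
        self.encoder = encoder
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens):
        states, _ = self.encoder(tokens)
        return self.out(states)

class SentimentClassifier(nn.Module):
    """Fine-tuning head: classify from the final hidden state."""
    def __init__(self, encoder, num_classes=2, hid_dim=256):
        super().__init__()
        self.encoder = encoder               # reuse pretrained weights
        self.out = nn.Linear(hid_dim, num_classes)

    def forward(self, tokens):
        _, final = self.encoder(tokens)
        return self.out(final)

# Usage sketch: pretrain LanguageModel on unlabeled dialog text with a
# next-token cross-entropy loss, then wrap the same Encoder instance in
# SentimentClassifier and fine-tune it on labeled sentiment data.
```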

Cross-domain Recommendation via Deep Domain Adaptation

no code implementations • 8 Mar 2018 • Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, Yukihiro Tagami, Taiji Suzuki

The behavior of users in certain services could be a clue that can be used to infer their preferences and may be used to make recommendations for other services they have never used.

Collaborative Filtering • Denoising • +2
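
As a deliberately crude stand-in for the paper's deep domain adaptation approach, the sketch below transfers preferences across services by warm-starting target-domain matrix factorization with user factors learned in the source domain; all variable names here are illustrative, not the paper's.

```python
import numpy as np

def train_mf(ratings, n_users, n_items, k=16, lr=0.01, reg=0.1, epochs=20,
             user_vecs=None):
    """Plain matrix factorization by SGD. `ratings` is a list of
    (user, item, value) triples. If `user_vecs` is given, those user
    factors are reused and further tuned, which is the crude
    cross-domain transfer step in this sketch."""
    rng = np.random.default_rng(0)
    U = user_vecs if user_vecs is not None else 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

# Cross-domain sketch (hypothetical data): learn user factors on the source
# service, then initialize the target-domain model with them so sparse
# target users inherit preferences observed in the source domain.
# U_src, _ = train_mf(source_ratings, n_users, n_src_items)
# U_tgt, V_tgt = train_mf(target_ratings, n_users, n_tgt_items, user_vecs=U_src.copy())
```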

Incremental Skip-gram Model with Negative Sampling

1 code implementation • EMNLP 2017 • Nobuhiro Kaji, Hayato Kobayashi

This paper explores an incremental training strategy for the skip-gram model with negative sampling (SGNS) from both empirical and theoretical perspectives.

Word Embeddings
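
The skip-gram with negative sampling update that can be applied incrementally, one (center, context) pair at a time, looks roughly like the following. This is a textbook SGNS step, not the authors' released code.

```python
import numpy as np

def sgns_update(W_in, W_out, center, context, neg_ids, lr=0.025):
    """One incremental SGNS step for a (center, context) pair plus
    sampled negative words; a generic sketch of the objective."""
    v = W_in[center]
    grad_v = np.zeros_like(v)
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in neg_ids]:
        u = W_out[word]
        score = 1.0 / (1.0 + np.exp(-v @ u))   # sigmoid(v . u)
        g = score - label                       # gradient of the logistic loss
        grad_v += g * u
        W_out[word] -= lr * g * v
    W_in[center] -= lr * grad_v
    return W_in, W_out

# Streaming usage: as new text arrives, sample negatives from a running
# unigram distribution and call sgns_update for each (center, context)
# pair, so embeddings are refreshed without retraining from scratch.
```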

Minimax Optimal Alternating Minimization for Kernel Nonparametric Tensor Learning

no code implementations • NeurIPS 2016 • Taiji Suzuki, Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, Yukihiro Tagami

We investigate the statistical performance and computational efficiency of the alternating minimization procedure for nonparametric tensor learning.

Computational Efficiency
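
As a drastically simplified analogue of the alternating minimization scheme studied in the paper (a parametric low-rank matrix model rather than the kernel nonparametric tensor setting), the sketch below alternates ridge-regression solves for the two factors.

```python
import numpy as np

def alternating_ls(Y, rank=5, reg=0.1, iters=30, seed=0):
    """Alternating minimization for a low-rank model Y ~ A @ B.T:
    fix B and solve a ridge problem for A, then fix A and solve for B.
    A simplified parametric stand-in for the alternating scheme in the
    paper, not its kernel nonparametric estimator."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    A = rng.standard_normal((n, rank))
    B = rng.standard_normal((m, rank))
    I = reg * np.eye(rank)
    for _ in range(iters):
        A = Y @ B @ np.linalg.inv(B.T @ B + I)    # minimize over A with B fixed
        B = Y.T @ A @ np.linalg.inv(A.T @ A + I)  # minimize over B with A fixed
    return A, B

# Each sweep never increases the regularized squared error, which is the
# basic property that makes alternating minimization attractive here.
```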
