Search Results for author: Ming-Feng Tsai

Found 14 papers, 1 paper with code

Designing Templates for Eliciting Commonsense Knowledge from Pretrained Sequence-to-Sequence Models

no code implementations COLING 2020 Jheng-Hong Yang, Sheng-Chieh Lin, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, Jimmy Lin

While internalized "implicit knowledge" in pretrained transformers has led to fruitful progress in many natural language understanding tasks, how to most effectively elicit such knowledge remains an open question.

Natural Language Understanding Question Answering
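As a rough illustration of the template idea (the prompt below is hypothetical, not one of the paper's tuned templates): a cloze-style template can be handed to an off-the-shelf T5 model, whose sentinel token marks the slot where the commonsense fact should be generated.

```python
# Minimal sketch of template-based knowledge elicitation with T5.
# The prompt is a hypothetical template, not one from the paper.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

prompt = "A bird uses its wings to <extra_id_0>."  # sentinel marks the blank
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=5)
print(tok.decode(out[0], skip_special_tokens=True))  # e.g. "fly"
```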

Personalized TV Recommendation: Fusing User Behavior and Preferences

no code implementations 30 Aug 2020 Sheng-Chieh Lin, Ting-Wei Lin, Jing-Kai Lou, Ming-Feng Tsai, Chuan-Ju Wang

In this paper, we propose a two-stage ranking approach for recommending linear TV programs.
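A minimal sketch of a generic two-stage ranker, with hypothetical coarse_score and fine_score placeholders standing in for the paper's actual models (which fuse user behavior and preferences):

```python
def recommend(user, programs, coarse_score, fine_score, k=100, n=10):
    """Generic two-stage ranking sketch: candidate generation, then re-ranking.

    coarse_score and fine_score are hypothetical placeholders, not the
    paper's models.
    """
    # Stage 1: score the full catalog with an inexpensive model.
    candidates = sorted(programs, key=lambda p: coarse_score(user, p),
                        reverse=True)[:k]
    # Stage 2: re-rank the short list with a more expensive model.
    return sorted(candidates, key=lambda p: fine_score(user, p),
                  reverse=True)[:n]
```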

Skewness Ranking Optimization for Personalized Recommendation

no code implementations 23 May 2020 Chuan-Ju Wang, Yu-Neng Chuang, Chih-Ming Chen, Ming-Feng Tsai

In this paper, we propose a novel optimization criterion that leverages features of the skew normal distribution to better model the problem of personalized recommendation.
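For reference, the textbook form of the skew normal density that the criterion builds on (this states the distribution itself, not the paper's optimization criterion):

```latex
% Standard skew normal density; \phi and \Phi are the standard normal
% pdf and cdf, and the shape parameter \alpha controls asymmetry
% (\alpha = 0 recovers the symmetric normal).
f(x;\alpha) = 2\,\phi(x)\,\Phi(\alpha x)
```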

Conversational Question Reformulation via Sequence-to-Sequence Architectures and Pretrained Language Models

no code implementations 4 Apr 2020 Sheng-Chieh Lin, Jheng-Hong Yang, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, Jimmy Lin

This paper presents an empirical study of conversational question reformulation (CQR) with sequence-to-sequence architectures and pretrained language models (PLMs).

Task-Oriented Dialogue Systems
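A minimal sketch of the seq2seq formulation, assuming a T5-style checkpoint fine-tuned on CQR pairs (the off-the-shelf "t5-base" below is a stand-in and would not produce good rewrites; the input layout is illustrative, not the paper's exact one):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# "t5-base" stands in for a checkpoint fine-tuned on CQR data.
tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Conversation history plus the context-dependent question; the target
# is a self-contained rewrite such as "How old is Barack Obama?".
history = "Who is the president of the United States? ||| Barack Obama"
question = "How old is he?"
ids = tok(f"{history} ||| {question}", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```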

TTTTTackling WinoGrande Schemas

no code implementations 18 Mar 2020 Sheng-Chieh Lin, Jheng-Hong Yang, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, Jimmy Lin

We applied the T5 sequence-to-sequence model to tackle the AI2 WinoGrande Challenge by decomposing each example into two input text strings, each containing a hypothesis, and using the probabilities assigned to the "entailment" token as a score of the hypothesis.
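A sketch of the scoring step as described, assuming T5's MNLI-style input format (the paper's exact preprocessing may differ): each candidate fills the blank to form a hypothesis, and the probability the decoder assigns to the first token of "entailment" serves as that hypothesis's score.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def entailment_score(premise: str, hypothesis: str) -> float:
    # MNLI-style prompt; the paper's exact template is an assumption here.
    ids = tok(f"mnli hypothesis: {hypothesis} premise: {premise}",
              return_tensors="pt").input_ids
    start = torch.tensor([[model.config.decoder_start_token_id]])
    logits = model(input_ids=ids, decoder_input_ids=start).logits[0, -1]
    ent_id = tok.encode("entailment")[0]  # first subword of "entailment"
    return torch.softmax(logits, dim=-1)[ent_id].item()

# For a WinoGrande item, substitute each of the two options into the
# blank and keep the option whose hypothesis scores higher.
```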

Collaborative Similarity Embedding for Recommender Systems

2 code implementations 17 Feb 2019 Chih-Ming Chen, Chuan-Ju Wang, Ming-Feng Tsai, Yi-Hsuan Yang

We present collaborative similarity embedding (CSE), a unified framework that exploits comprehensive collaborative relations available in a user-item bipartite graph for representation learning and recommendation.

Ranked #1 on Recommendation Systems on Netflix (Recall@10 metric)

Graph Learning Recommendation Systems +1
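A minimal sketch of the dot-product embedding idea with BPR-style negative sampling, assuming a single shared table (CSE itself combines several relation types in the bipartite graph, such as user-item, user-user, and item-item, which this toy version omits):

```python
import torch
import torch.nn as nn

class SimilarityEmbedding(nn.Module):
    """Toy sketch: embed graph nodes, score edges by inner product."""

    def __init__(self, n_nodes: int, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(n_nodes, dim)

    def loss(self, users, pos_items, neg_items):
        u = self.emb(users)
        pos = (u * self.emb(pos_items)).sum(-1)  # observed edges
        neg = (u * self.emb(neg_items)).sum(-1)  # sampled non-edges
        # BPR-style pairwise objective: observed edges outrank negatives.
        return -torch.nn.functional.logsigmoid(pos - neg).mean()
```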

Representation Learning for Image-based Music Recommendation

no code implementations 28 Aug 2018 Chih-Chun Hsia, Kwei-Herng Lai, Yi-An Chen, Chuan-Ju Wang, Ming-Feng Tsai

Images are one of the most direct ways to capture contextual information about a user's surrounding environment; hence, they are a suitable proxy for contextual recommendation.

Representation Learning

Superhighway: Bypass Data Sparsity in Cross-Domain CF

no code implementations 28 Aug 2018 Kwei-Herng Lai, Ting-Hsiang Wang, Heng-Yu Chi, Yi-An Chen, Ming-Feng Tsai, Chuan-Ju Wang

Cross-domain collaborative filtering (CF) aims to alleviate data sparsity in single-domain CF by leveraging knowledge transferred from related domains.

RiskFinder: A Sentence-level Risk Detector for Financial Reports

no code implementations NAACL 2018 Yu-Wen Liu, Liang-Chih Liu, Chuan-Ju Wang, Ming-Feng Tsai

This paper presents a web-based information system, RiskFinder, for facilitating the analyses of soft and hard information in financial reports.

Sentence Embedding Sentiment Analysis +1

Vertex-Context Sampling for Weighted Network Embedding

no code implementations 1 Nov 2017 Chih-Ming Chen, Yi-Hsuan Yang, Yi-An Chen, Ming-Feng Tsai

Many existing methods adopt uniform sampling to reduce learning complexity, but when the network is non-uniform (i.e., a weighted network), such uniform sampling incurs information loss.

Information Retrieval Multi-Label Classification +2
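A minimal illustration of the contrast, with hypothetical edge weights: uniform sampling treats all neighbors alike, while weight-proportional sampling preserves the information carried by the weights.

```python
import random

neighbors = ["a", "b", "c"]
weights = [10.0, 1.0, 1.0]  # hypothetical edge weights

uniform = random.choices(neighbors, k=1000)                    # ignores weights
weighted = random.choices(neighbors, weights=weights, k=1000)  # respects them

# Under uniform sampling "a" appears ~1/3 of the time despite carrying
# most of the edge mass; weight-proportional sampling draws it ~10/12.
```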
