Search Results for author: Tzu-Chuan Lin

Found 2 papers, 0 papers with code

Reactive Multi-Stage Feature Fusion for Multimodal Dialogue Modeling

no code implementations · 14 Aug 2019 · Yi-Ting Yeh, Tzu-Chuan Lin, Hsiao-Hua Cheng, Yu-Hsuan Deng, Shang-Yu Su, Yun-Nung Chen

Visual question answering and visual dialogue have been increasingly studied in the multimodal field, moving toward more practical real-world scenarios.

Tasks: Question Answering, Scene-Aware Dialogue, +2

Modeling Melodic Feature Dependency with Modularized Variational Auto-Encoder

no code implementations · 31 Oct 2018 · Yu-An Wang, Yu-Kai Huang, Tzu-Chuan Lin, Shang-Yu Su, Yun-Nung Chen

Automatic melody generation has been a long-standing aspiration for both AI researchers and musicians.
