no code implementations • 23 Jul 2024 • Chao-Chun Hsu, Erin Bransom, Jenna Sparks, Bailey Kuehl, Chenhao Tan, David Wadden, Lucy Lu Wang, Aakanksha Naik
In this work, we investigate the potential of LLMs for producing hierarchical organizations of scientific studies to assist researchers with literature review.
no code implementations • 5 Dec 2023 • Chao-Chun Hsu, Ziad Obermeyer, Chenhao Tan
Finally, the model indicates that notes written about Black and Hispanic patients have 12% and 21% higher predicted fatigue, respectively, than notes written about White patients -- a larger gap than the overnight vs. daytime difference.
1 code implementation • EMNLP 2021 • Chao-Chun Hsu, Chenhao Tan
To evaluate our method (DecSum), we build a testbed where the task is to summarize the first ten reviews of a restaurant in support of predicting its future rating on Yelp.
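The core idea of decision-focused summarization can be illustrated with a toy greedy selector: keep the sentences whose concatenation leaves a decision model's prediction closest to its prediction on the full text. This is a minimal sketch, not the paper's actual method; the function names, the greedy strategy, and the single faithfulness criterion are all simplifying assumptions (DecSum's full objective is richer than this).

```python
def decision_focused_select(sentences, predict, k=3):
    """Toy greedy sketch of decision-focused selection (a hypothetical
    simplification, not the DecSum algorithm itself): pick k sentences
    whose concatenation keeps the decision model's prediction closest
    to its prediction on the full text."""
    full_pred = predict(" ".join(sentences))  # prediction on all reviews
    chosen, pool = [], list(sentences)
    for _ in range(min(k, len(pool))):
        # Greedily add the sentence that best preserves the prediction.
        best = min(
            pool,
            key=lambda s: abs(predict(" ".join(chosen + [s])) - full_pred),
        )
        chosen.append(best)
        pool.remove(best)
    return chosen
```

Here `predict` stands in for a regressor such as a future-rating model trained on Yelp reviews; any callable mapping text to a score works for the sketch.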
no code implementations • Findings (ACL) 2021 • Chao-Chun Hsu, Eric Lind, Luca Soldaini, Alessandro Moschitti
Recent advancements in transformer-based models have greatly improved the ability of Question Answering (QA) systems to provide correct answers; in particular, answer sentence selection (AS2) models, core components of retrieval-based systems, have achieved impressive results.
1 code implementation • 10 Jan 2021 • Jui-Te Huang, Chen-Lung Lu, Po-Kai Chang, Ching-I Huang, Chao-Chun Hsu, Zu Lin Ewe, Po-Jui Huang, Hsueh-Cheng Wang
However, because mmWave radar signals are often noisy and sparse, we propose a cross-modal contrastive learning for representation (CM-CLR) method that maximizes the agreement between mmWave radar data and LiDAR data in the training stage.
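The "maximize agreement between mmWave radar data and LiDAR data" objective can be sketched as a symmetric InfoNCE-style contrastive loss over paired embeddings from the two modalities. This is an illustrative sketch under common contrastive-learning conventions; the function names, temperature value, and exact loss form are assumptions, not the paper's CM-CLR implementation.

```python
import numpy as np

def info_nce_loss(radar_emb, lidar_emb, temperature=0.1):
    """Symmetric InfoNCE-style contrastive loss between two batches of
    embeddings, one per modality. Row i of each batch is assumed to come
    from the same scene (positive pair); all other pairings in the batch
    serve as negatives."""
    # L2-normalize so the dot product is cosine similarity.
    r = radar_emb / np.linalg.norm(radar_emb, axis=1, keepdims=True)
    l = lidar_emb / np.linalg.norm(lidar_emb, axis=1, keepdims=True)
    logits = r @ l.T / temperature       # (B, B) similarity matrix
    labels = np.arange(len(logits))      # positives sit on the diagonal

    def xent(z):
        # Numerically stable cross-entropy against the diagonal targets.
        z = z - z.max(axis=1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average both directions: radar->lidar and lidar->radar retrieval.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls radar and LiDAR embeddings of the same scene together while pushing apart mismatched pairs, which is what "maximizing agreement" between the modalities amounts to.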
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Chao-Chun Hsu, Shantanu Karnwal, Sendhil Mullainathan, Ziad Obermeyer, Chenhao Tan
Machine learning models depend on the quality of input data.
no code implementations • 17 Jan 2020 • Yun-Wei Chu, Kuan-Yen Lin, Chao-Chun Hsu, Lun-Wei Ku
Understanding dynamic scenes and dialogue contexts in order to converse with users has been challenging for multimodal dialogue systems.
1 code implementation • 3 Dec 2019 • Chao-Chun Hsu, Zi-Yuan Chen, Chi-Yang Hsu, Chih-Chia Li, Tzu-Yuan Lin, Ting-Hao 'Kenneth' Huang, Lun-Wei Ku
This paper introduces KG-Story, a three-stage framework that allows the story generation model to take advantage of external Knowledge Graphs to produce interesting stories.
no code implementations • 22 Aug 2019 • Kuan-Yen Lin, Chao-Chun Hsu, Yun-Nung Chen, Lun-Wei Ku
After the entropy-enhanced DMN secures the video context, we apply an attention model that incorporates the summary and caption to generate an accurate answer to the question about the video.
no code implementations • 6 Mar 2019 • Chao-Chun Hsu, Yu-Hua Chen, Zi-Yuan Chen, Hsin-Yu Lin, Ting-Hao 'Kenneth' Huang, Lun-Wei Ku
In this paper, we introduce Dixit, an interactive visual storytelling system that the user interacts with iteratively to compose a short story for a photo sequence.
no code implementations • WS 2018 • Chao-Chun Hsu, Lun-Wei Ku
This paper presents an overview of the Dialogue Emotion Recognition Challenge, EmotionX, at the Sixth SocialNLP Workshop, whose task is to recognize the emotion of each utterance in dialogues.
no code implementations • 30 May 2018 • Chao-Chun Hsu, Szu-Min Chen, Ming-Hsun Hsieh, Lun-Wei Ku
Visual storytelling includes two important parts: coherence between the story and images as well as the story structure.
no code implementations • LREC 2018 • Sheng-Yeh Chen, Chao-Chun Hsu, Chuan-Chun Kuo, Ting-Hao 'Kenneth' Huang, Lun-Wei Ku
A total of 29,245 utterances from 2,000 dialogues are labeled in EmotionLines.