Search Results for author: Bingbing Wen

Found 5 papers, 0 papers with code

EmojiCloud: a Tool for Emoji Cloud Visualization

no code implementations • NAACL (Emoji) 2022 • Yunhe Feng, Cheng Guo, Bingbing Wen, Peng Sun, Yufei Yue, Dingwen Tao

This paper proposes EmojiCloud, an open-source Python-based emoji cloud visualization tool that gives a quick and straightforward view of emoji usage in terms of frequency and importance.
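EmojiCloud ships its own plotting interface, which is not reproduced here. The snippet below is only a minimal sketch of the underlying idea the abstract describes, scaling each emoji's display size by its relative frequency, written in plain Python with matplotlib rather than the actual EmojiCloud API; the emoji counts and grid layout are made-up illustrative assumptions.

```python
# Minimal sketch of a frequency-weighted emoji cloud (NOT the EmojiCloud API).
# The counts below are made-up illustrative data.
from collections import Counter
import matplotlib.pyplot as plt

emoji_counts = Counter({"😂": 42, "❤️": 30, "🔥": 18, "👍": 12, "😢": 5})
total = sum(emoji_counts.values())

fig, ax = plt.subplots(figsize=(6, 4))
ax.axis("off")

# Lay emojis out on a simple grid; font size grows with relative frequency.
# (Whether the glyphs render depends on the emoji support of the installed font.)
cols = 3
for i, (emo, count) in enumerate(emoji_counts.most_common()):
    row, col = divmod(i, cols)
    size = 20 + 80 * (count / total)  # map frequency to font size
    ax.text(col / cols + 0.15, 1 - (row + 1) * 0.3, emo,
            fontsize=size, ha="center", va="center")

plt.savefig("emoji_cloud_sketch.png", dpi=150, bbox_inches="tight")
```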

InfoVisDial: An Informative Visual Dialogue Dataset by Bridging Large Multimodal and Language Models

no code implementations • 21 Dec 2023 • Bingbing Wen, Zhengyuan Yang, JianFeng Wang, Zhe Gan, Bill Howe, Lijuan Wang

In this paper, we build a visual dialogue dataset, named InfoVisDial, which provides rich, informative answers in each round, often drawing on external knowledge related to the visual content.

OmniMotionGPT: Animal Motion Generation with Limited Data

no code implementations • 30 Nov 2023 • Zhangsihao Yang, Mingyuan Zhou, Mengyi Shan, Bingbing Wen, Ziwei Xuan, Mitch Hill, Junjie Bai, Guo-Jun Qi, Yalin Wang

Our paper aims to generate diverse and realistic animal motion sequences from textual descriptions, without a large-scale animal text-motion dataset.

Motion Synthesis

EGCR: Explanation Generation for Conversational Recommendation

no code implementations • 17 Aug 2022 • Bingbing Wen, Xiaoning Bu, Chirag Shah

To the best of our knowledge, this is the first framework for explainable conversational recommendation on real-world datasets.

Explanation Generation • Informativeness

Towards Generating Robust, Fair, and Emotion-Aware Explanations for Recommender Systems

no code implementations • 17 Aug 2022 • Bingbing Wen, Yunhe Feng, Yongfeng Zhang, Chirag Shah

Current explanation generation models are found to exaggerate certain emotions without accurately capturing the underlying tone or meaning.

Explainable Recommendation • Explanation Generation • +3
