no code implementations • 26 Jul 2024 • Mengjie Zhao, Cees Taal, Stephan Baggerohr, Olga Fink
These results highlight HTGNN's potential as a robust and accurate virtual sensing approach for complex systems, paving the way for improved monitoring, predictive maintenance, and enhanced system performance.
no code implementations • 17 Jun 2024 • Hiromi Wakaki, Yuki Mitsufuji, Yoshinori Maeda, Yukiko Nishimura, Silin Gao, Mengjie Zhao, Keiichi Yamada, Antoine Bosselut
We propose a new benchmark, ComperDial, which facilitates the training and evaluation of evaluation metrics for open-domain dialogue systems.
no code implementations • 23 May 2024 • Shiqi Yang, Zhi Zhong, Mengjie Zhao, Shusuke Takahashi, Masato Ishii, Takashi Shibuya, Yuki Mitsufuji
Recent audio-visual generation methods usually resort to large language models or composable diffusion models.
no code implementations • 2 Apr 2024 • Mengjie Zhao, Cees Taal, Stephan Baggerohr, Olga Fink
Since temperature and vibration signals exhibit vastly different dynamics, we propose Heterogeneous Temporal Graph Neural Networks (HTGNN), which explicitly models these signal types and their interactions for effective load prediction.
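As a rough illustration of what modeling heterogeneous signal types can look like, the sketch below pairs a type-specific temporal encoder per signal (a GRU for slow temperature dynamics, a 1-D convolution for high-frequency vibration) with cross-type message passing before a load readout. The `HTGNNLayer` name, the encoder choices, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a heterogeneous temporal GNN layer, assuming two node
# types (temperature, vibration) sampled at very different rates.
import torch
import torch.nn as nn

class HTGNNLayer(nn.Module):
    def __init__(self, d_model=32):
        super().__init__()
        # Type-specific temporal encoders: a GRU for slow temperature
        # dynamics, a 1-D conv for high-frequency vibration signals.
        self.temp_enc = nn.GRU(input_size=1, hidden_size=d_model, batch_first=True)
        self.vib_enc = nn.Conv1d(in_channels=1, out_channels=d_model, kernel_size=16, stride=8)
        # Type-pair-specific message functions for cross-signal interaction.
        self.msg_tv = nn.Linear(d_model, d_model)  # temperature -> vibration
        self.msg_vt = nn.Linear(d_model, d_model)  # vibration -> temperature
        self.readout = nn.Linear(2 * d_model, 1)   # predict the load

    def forward(self, temp_seq, vib_seq):
        # temp_seq: (batch, T_slow, 1); vib_seq: (batch, 1, T_fast)
        _, h_temp = self.temp_enc(temp_seq)          # (1, batch, d)
        h_temp = h_temp.squeeze(0)
        h_vib = self.vib_enc(vib_seq).mean(dim=-1)   # pool over time -> (batch, d)
        # Exchange messages between node types, then fuse for prediction.
        h_temp = h_temp + torch.relu(self.msg_vt(h_vib))
        h_vib = h_vib + torch.relu(self.msg_tv(h_temp))
        return self.readout(torch.cat([h_temp, h_vib], dim=-1))

model = HTGNNLayer()
load = model(torch.randn(4, 60, 1), torch.randn(4, 1, 2048))  # -> (4, 1)
```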
no code implementations • 23 Mar 2024 • Zhouhang Xie, Bodhisattwa Prasad Majumder, Mengjie Zhao, Yoshinori Maeda, Keiichi Yamada, Hiromi Wakaki, Julian McAuley
We consider Motivational Interviewing: the task of building a dialogue system that can motivate users to adopt positive lifestyle changes.
1 code implementation • 26 Feb 2024 • Silin Gao, Mete Ismayilzada, Mengjie Zhao, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut
Inferring contextually-relevant and diverse commonsense to understand narratives remains challenging for knowledge models.
no code implementations • 12 Jan 2024 • Alexandra DeLucia, Mengjie Zhao, Yoshinori Maeda, Makoto Yoda, Keiichi Yamada, Hiromi Wakaki
To address both of these issues, we introduce a natural language inference method for post-hoc adapting a trained persona extraction model to a new setting.
no code implementations • 20 Oct 2023 • Mengjie Zhao, Junya Ono, Zhi Zhong, Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Wei-Hsiang Liao, Takashi Shibuya, Hiromi Wakaki, Yuki Mitsufuji
Contrastive cross-modal models such as CLIP and CLAP aid various vision-language (VL) and audio-language (AL) tasks.
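For context, CLIP- and CLAP-style models are trained with a symmetric contrastive (InfoNCE) objective over paired embeddings from the two modality encoders. A minimal sketch, with random tensors standing in for encoder outputs:

```python
# Symmetric InfoNCE objective behind CLIP/CLAP-style contrastive pretraining.
import torch
import torch.nn.functional as F

def clip_style_loss(emb_a, emb_b, temperature=0.07):
    # L2-normalize so that dot products are cosine similarities.
    emb_a = F.normalize(emb_a, dim=-1)
    emb_b = F.normalize(emb_b, dim=-1)
    logits = emb_a @ emb_b.t() / temperature        # (batch, batch)
    targets = torch.arange(len(emb_a))              # matched pairs on the diagonal
    # Cross-entropy in both directions (a->b and b->a), then average.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = clip_style_loss(torch.randn(8, 512), torch.randn(8, 512))
```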
no code implementations • 2 Oct 2023 • Qiyu Wu, Mengjie Zhao, Yutong He, Lang Huang, Junya Ono, Hiromi Wakaki, Yuki Mitsufuji
In this paper, we focus on the pervasive reporting bias in vision-language datasets, manifested as object-attribute associations, which can subsequently degrade models trained on them.
no code implementations • 8 Sep 2023 • Keivan Faghih Niresi, Mengjie Zhao, Hugo Bissig, Henri Baumann, Olga Fink
The use of Internet of Things (IoT) sensors for air pollution monitoring has significantly increased, resulting in the deployment of low-cost sensors.
1 code implementation • 7 Jul 2023 • Mengjie Zhao, Olga Fink
We rigorously evaluated DyEdgeGAT on both a synthetic dataset simulating varying levels of fault severity and a real-world industrial-scale multiphase flow facility benchmark with diverse fault types under varying operating conditions and detection complexities.
Ranked #1 on Unsupervised Anomaly Detection on PRONTO
no code implementations • 3 Feb 2023 • Ismail Nejjar, Fabian Geissmann, Mengjie Zhao, Cees Taal, Olga Fink
Domain adaptation (DA) methods aim to address the domain shift problem by extracting domain invariant features.
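One widely used recipe for extracting domain-invariant features (illustrative here, not necessarily the method of the paper above) is adversarial training with a gradient reversal layer, as in DANN. A minimal sketch:

```python
# Gradient reversal layer (DANN-style) for domain-invariant features.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None   # flip gradients flowing to the encoder

encoder = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
domain_clf = nn.Linear(32, 2)          # tries to tell source and target apart

x = torch.randn(8, 10)
feat = encoder(x)
domain_logits = domain_clf(GradReverse.apply(feat, 1.0))
# Minimizing the domain loss now *maximizes* domain confusion in `encoder`,
# pushing it toward domain-invariant features.
```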
no code implementations • 20 Dec 2022 • Wei Ma, Shangqing Liu, Mengjie Zhao, Xiaofei Xie, Wenhan Wang, Qiang Hu, Jie Zhang, Yang Liu
These structures are fundamental to understanding code.
no code implementations • 25 Oct 2022 • Junze Li, Mengjie Zhao, Yubo Xie, Antonis Maronikolakis, Pearl Pu, Hinrich Schütze
Humor is a magnetic component in everyday human interactions and communications.
no code implementations • Findings (ACL) 2022 • Sheng Liang, Mengjie Zhao, Hinrich Schütze
Recent research has made impressive progress in large-scale multimodal pre-training.
no code implementations • Findings (NAACL) 2022 • Mengjie Zhao, Fei Mi, Yasheng Wang, Minglei Li, Xin Jiang, Qun Liu, Hinrich Schütze
We propose LMTurk, a novel approach that treats few-shot learners as crowdsourcing workers.
1 code implementation • EMNLP 2021 • Mengjie Zhao, Hinrich Schütze
It has been shown for English that discrete and soft prompting perform strongly in few-shot learning with pretrained language models (PLMs).
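Soft prompting replaces hand-written prompt tokens with learnable embedding vectors prepended to the input while the PLM stays frozen. A minimal sketch, with a toy embedding table and Transformer encoder standing in for the pretrained model:

```python
# Soft prompting: only the prompt vectors are trained; the "PLM" is frozen.
import torch
import torch.nn as nn

vocab, d_model, prompt_len = 1000, 64, 10
embed = nn.Embedding(vocab, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
for p in list(embed.parameters()) + list(encoder.parameters()):
    p.requires_grad_(False)                      # PLM weights stay frozen

soft_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)  # trainable

tokens = torch.randint(0, vocab, (4, 20))        # a batch of token ids
x = torch.cat([soft_prompt.expand(4, -1, -1), embed(tokens)], dim=1)
h = encoder(x)                                   # (4, prompt_len + 20, d_model)
```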
no code implementations • ACL 2021 • Mengjie Zhao, Yi Zhu, Ehsan Shareghi, Ivan Vulić, Roi Reichart, Anna Korhonen, Hinrich Schütze
Few-shot crosslingual transfer has been shown to outperform its zero-shot counterpart with pretrained encoders like multilingual BERT.
no code implementations • Findings (EMNLP) 2020 • Fei Mi, LiangWei Chen, Mengjie Zhao, Minlie Huang, Boi Faltings
Natural language generation (NLG) is an essential component of task-oriented dialog systems.
no code implementations • EMNLP 2020 • Mengjie Zhao, Tao Lin, Fei Mi, Martin Jaggi, Hinrich Schütze
We present an efficient method of utilizing pretrained language models, where we learn selective binary masks for pretrained weights in lieu of modifying them through finetuning.
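A common way to implement such mask learning (illustrative here, not necessarily the paper's exact scheme) is to train real-valued scores per weight and binarize them with a straight-through estimator:

```python
# Learning a binary mask over frozen pretrained weights.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, linear: nn.Linear, threshold=0.0):
        super().__init__()
        self.weight = linear.weight.detach()          # frozen pretrained weight
        self.bias = linear.bias.detach() if linear.bias is not None else None
        self.scores = nn.Parameter(0.01 * torch.randn_like(self.weight))
        self.threshold = threshold

    def forward(self, x):
        hard = (self.scores > self.threshold).float()
        # Straight-through estimator: forward uses the binary mask, backward
        # flows through the real-valued scores.
        mask = hard + self.scores - self.scores.detach()
        return nn.functional.linear(x, self.weight * mask, self.bias)

layer = MaskedLinear(nn.Linear(16, 8))
out = layer(torch.randn(4, 16))   # only `scores` receives gradients
```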
no code implementations • Findings (EMNLP) 2020 • Mengjie Zhao, Philipp Dufter, Yadollah Yaghoobzadeh, Hinrich Schütze
Pretrained language models have achieved a new state of the art on many NLP tasks, but there are still many open questions about how and why they work so well.
no code implementations • ACL 2019 • Mengjie Zhao, Hinrich Schütze
We present a new method for sentiment lexicon induction that is designed to be applicable to the entire range of typological diversity of the world's languages.
no code implementations • 1 Nov 2018 • Philipp Dufter, Mengjie Zhao, Hinrich Schütze
A simple and effective context-based multilingual embedding learner is Levy et al. (2017)'s S-ID (sentence ID) method.
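The S-ID idea treats the ID of an aligned sentence as the shared "context" for words in every language, so standard count-based embedding learning places all languages in one space. A minimal sketch using positive PMI plus SVD on a toy parallel corpus (one possible realization, not necessarily Levy et al.'s exact pipeline):

```python
# S-ID sketch: words from aligned sentences share a sentence ID as context.
import numpy as np
from collections import Counter

# Aligned sentence pairs (English, German) with a shared sentence ID.
parallel = [("the cat sleeps", "die katze schlaeft"),
            ("the dog barks", "der hund bellt")]

pairs = Counter()
for sid, sents in enumerate(parallel):
    for sent in sents:                       # both languages get the same ID
        for word in sent.split():
            pairs[(word, sid)] += 1

words = sorted({w for w, _ in pairs})
M = np.zeros((len(words), len(parallel)))    # word-by-sentence-ID counts
for (w, sid), c in pairs.items():
    M[words.index(w), sid] = c

# Positive PMI, then truncated SVD for dense embeddings in a shared space.
pmi = np.log((M * M.sum()) /
             (M.sum(1, keepdims=True) @ M.sum(0, keepdims=True) + 1e-9) + 1e-9)
U, S, _ = np.linalg.svd(np.maximum(pmi, 0), full_matrices=False)
emb = U * S                                  # one row per word, all languages
```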
no code implementations • ACL 2018 • Philipp Dufter, Mengjie Zhao, Martin Schmitt, Alexander Fraser, Hinrich Schütze
We present a new method for estimating vector space representations of words: embedding learning by concept induction.