1 code implementation • 10 Feb 2025 • Yong Cao, Haijiang Liu, Arnav Arora, Isabelle Augenstein, Paul Röttger, Daniel Hershcovich
In this paper, we are the first to specialize LLMs for the task of simulating survey response distributions.
no code implementations • 7 Feb 2025 • Steffen Eger, Yong Cao, Jennifer D'Souza, Andreas Geiger, Christian Greisinger, Stephanie Gross, Yufang Hou, Brigitte Krenn, Anne Lauscher, Yizhi Li, Chenghua Lin, Nafise Sadat Moosavi, Wei Zhao, Tristan Miller
With the advent of large multimodal language models, science is now at a threshold of an AI-based technological transformation.
no code implementations • 30 Aug 2024 • Shuai Peng, Di Fu, Baole Wei, Yong Cao, Liangcai Gao, Zhi Tang
Despite the remarkable success of Vision Transformers (ViTs) in various visual tasks, they are often hindered by substantial computational cost.
no code implementations • 8 Jul 2024 • Antonia Karamolegkou, Phillip Rust, Yong Cao, Ruixiang Cui, Anders Søgaard, Daniel Hershcovich
Large vision-language models (VLMs) can assist visually impaired people by describing images from their daily lives.
1 code implementation • 6 Jul 2024 • Zhengdao Li, Yong Cao, Kefan Shuai, Yiming Miao, Kai Hwang
We further propose a novel metric to quantify dataset effectiveness by considering both dataset complexity and model performance.
no code implementations • 4 Jul 2024 • Zhigen Li, Jianxiang Peng, Yanmeng Wang, Yong Cao, Tianhao Shen, Minghui Zhang, Linxi Su, Shang Wu, Yihang Wu, Yuqian Wang, Ye Wang, Wei Hu, Jianfeng Li, Shaojun Wang, Jing Xiao, Deyi Xiong
Conversational agents powered by Large Language Models (LLMs) show superior performance in various tasks.
1 code implementation • 10 Apr 2024 • Li Zhou, Taelin Karidi, Wanlong Liu, Nicolas Garneau, Yong Cao, Wenyu Chen, Haizhou Li, Daniel Hershcovich
Recent studies have highlighted the presence of cultural biases in Large Language Models (LLMs), yet often lack a robust methodology to dissect these phenomena comprehensively.
no code implementations • 8 Feb 2024 • Yong Cao, Wenyan Li, Jiaang Li, Yifei Yuan, Antonia Karamolegkou, Daniel Hershcovich
Large pretrained vision-language models have drawn considerable interest in recent years due to their remarkable performance.
1 code implementation • 18 Jan 2024 • Yong Cao, Min Chen, Daniel Hershcovich
The cultural landscape of interactions with dialogue agents is a compelling yet relatively unexplored territory.
no code implementations • 3 Jan 2024 • Li Zhou, Wenyu Chen, Yong Cao, Dingyi Zeng, Wanlong Liu, Hong Qu
While Transformer-based pre-trained language models (PLMs) and their variants exhibit strong semantic representation capabilities, comprehending the information gain derived from the additional components of PLMs remains an open question in this field.
no code implementations • 26 Oct 2023 • Yong Cao, Yova Kementchedjhieva, Ruixiang Cui, Antonia Karamolegkou, Li Zhou, Megan Dare, Lucia Donatelli, Daniel Hershcovich
We introduce a new task involving the translation and cultural adaptation of recipes between Chinese and English-speaking cuisines.
1 code implementation • 4 Sep 2023 • Yong Cao, Ruixue Ding, Boli Chen, Xianzhi Li, Min Chen, Daniel Hershcovich, Pengjun Xie, Fei Huang
The Chinese geographic re-ranking task aims to find the most relevant addresses among retrieved candidates, which is crucial for location-related services such as navigation maps.
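To make the re-ranking setup concrete, here is a minimal sketch of scoring retrieved address candidates against a query; the character-overlap scorer, function names, and example addresses are illustrative placeholders, not the learned model proposed in the paper.

```python
# Minimal sketch of re-ranking retrieved address candidates against a query.
# The character-overlap (Jaccard) scorer stands in for a trained re-ranker.
def rerank(query: str, candidates: list[str]) -> list[str]:
    def score(cand: str) -> float:
        q, c = set(query), set(cand)
        return len(q & c) / max(len(q | c), 1)  # character-level Jaccard overlap
    # Return candidates ordered from most to least relevant under the scorer.
    return sorted(candidates, key=score, reverse=True)

# Example: pick the most relevant address among retrieved candidates.
print(rerank("浙江大学紫金港校区",
             ["浙江大学玉泉校区", "浙江大学紫金港校区东门", "杭州西湖风景区"]))
```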
no code implementations • 3 May 2023 • Yong Cao, Xianzhi Li, Huiwen Liu, Wen Dai, Shuai Chen, Bin Wang, Min Chen, Daniel Hershcovich
In this study, we propose a novel framework, RE-KBQA, that utilizes relations in the knowledge base to enhance entity representation and introduce additional supervision.
no code implementations • 31 Mar 2023 • Li Zhou, Laura Cabello, Yong Cao, Daniel Hershcovich
Detecting offensive language is a challenging task.
Cultural Vocal Bursts Intensity Prediction • Few-Shot Learning • +1
1 code implementation • 30 Mar 2023 • Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, Daniel Hershcovich
The recent release of ChatGPT has garnered widespread recognition for its exceptional ability to generate human-like responses in dialogue.
no code implementations • 2 Feb 2023 • Meng Zhao, Yifan Hu, Ruixuan Jiang, Yuanli Zhao, Dong Zhang, Yan Zhang, Rong Wang, Yong Cao, Qian Zhang, Yonggang Ma, Jiaxi Li, Shaochen Yu, Wenjie Li, Ran Zhang, Yefeng Zheng, Shuo Wang, Jizong Zhao
Conclusions: The proposed deep learning algorithms can serve as an effective tool for the early identification of hemorrhage etiologies from NCCT scans.
1 code implementation • Findings (NAACL) 2022 • Yong Cao, Wei Li, Xianzhi Li, Min Chen, Guangyong Chen, Long Hu, Zhengdao Li, Kai Hwang
Sign language recognition and translation systems typically first use a recognition module to generate glosses from sign language videos and then employ a translation module to translate the glosses into spoken sentences.
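The two-stage decomposition lends itself to a simple sketch; the module architectures, dimensions, and the greedy gloss decoding below are illustrative assumptions, not the paper's actual design.

```python
# Sketch of the two-stage pipeline: recognition (video -> glosses) feeding
# translation (glosses -> spoken sentence). Sizes are illustrative only.
import torch
import torch.nn as nn

class GlossRecognizer(nn.Module):
    """Maps per-frame video features to per-frame gloss logits."""
    def __init__(self, feat_dim=512, hidden=256, n_glosses=1000):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_glosses)

    def forward(self, frames):             # frames: (batch, time, feat_dim)
        enc, _ = self.encoder(frames)
        return self.head(enc)              # (batch, time, n_glosses)

class GlossTranslator(nn.Module):
    """Encodes a gloss sequence and decodes spoken-language token logits."""
    def __init__(self, n_glosses=1000, n_words=5000, dim=256):
        super().__init__()
        self.gloss_emb = nn.Embedding(n_glosses, dim)
        self.word_emb = nn.Embedding(n_words, dim)
        self.seq2seq = nn.Transformer(d_model=dim, batch_first=True)
        self.out = nn.Linear(dim, n_words)

    def forward(self, glosses, words):     # glosses: (batch, Tg), words: (batch, Tw)
        h = self.seq2seq(self.gloss_emb(glosses), self.word_emb(words))
        return self.out(h)                 # (batch, Tw, n_words)

# Wiring the two stages: naive per-frame gloss decoding feeds the translator.
frames = torch.randn(2, 64, 512)
glosses = GlossRecognizer()(frames).argmax(-1)
word_logits = GlossTranslator()(glosses, torch.zeros(2, 10, dtype=torch.long))
```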
no code implementations • 20 Dec 2021 • Yong Cao, Yukun Feng, Shaohui Kuang, Gu Xu
In almost all text generation applications, word sequences are constructed in a left-to-right (L2R) or right-to-left (R2L) manner, as natural language sentences are written either L2R or R2L.
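As a toy illustration of the two construction orders, the sketch below greedily extends a sentence from either end; the bigram scores stand in for a trained language model and are not taken from the paper.

```python
# Toy illustration of left-to-right (L2R) vs. right-to-left (R2L) generation.
def greedy_generate(bigram_score, vocab, start, max_len, direction="l2r"):
    seq = [start]                                  # tokens in generation order
    for _ in range(max_len - 1):
        prev = seq[-1]
        # L2R scores (previous, next); R2L scores (candidate, previous).
        key = (lambda t: bigram_score(prev, t)) if direction == "l2r" \
            else (lambda t: bigram_score(t, prev))
        seq.append(max(vocab, key=key))
    # R2L builds the sentence from its end, so reverse into reading order.
    return seq if direction == "l2r" else seq[::-1]

scores = {("the", "cat"): 0.9, ("cat", "sat"): 0.8, ("sat", "down"): 0.7}
vocab = ["the", "cat", "sat", "down"]
lm = lambda a, b: scores.get((a, b), 0.0)
print(greedy_generate(lm, vocab, "the", 4))                     # L2R from "the"
print(greedy_generate(lm, vocab, "down", 4, direction="r2l"))   # R2L from "down"
```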
no code implementations • 17 Mar 2020 • Zhenshen Qu, Jingda Du, Yong Cao, Qiuyu Guan, Pengbo Zhao
Recently, CNN object detectors have achieved high accuracy on remote sensing images but require substantial labor and time for annotation.
no code implementations • 24 Sep 2019 • Pengwei Wang, Liang-Chen Wei, Yong Cao, Jinghui Xie, Yuji Cao, Zaiqing Nie
End-to-end Spoken Language Understanding (SLU) has been proposed to infer semantic meaning directly from audio features, without an intermediate text representation.
Automatic Speech Recognition (ASR) • +5
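A minimal sketch of the end-to-end idea, assuming log-mel input features and a small intent inventory (both illustrative, not the paper's configuration): audio features map straight to intent logits, with no transcript in between.

```python
# Hedged sketch of end-to-end SLU: audio features -> intent logits directly.
import torch
import torch.nn as nn

class EndToEndSLU(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_intents=31):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden, n_intents)

    def forward(self, features):            # features: (batch, time, n_mels)
        enc, _ = self.encoder(features)
        pooled = enc.mean(dim=1)             # average over time frames
        return self.classifier(pooled)       # intent logits, no text in between

logits = EndToEndSLU()(torch.randn(4, 200, 80))  # 4 utterances, 200 frames each
```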
no code implementations • 22 Jun 2018 • Yihong Chen, Bei Chen, Xuguang Duan, Jian-Guang Lou, Yue Wang, Wenwu Zhu, Yong Cao
Almost all knowledge-empowered applications rely upon accurate knowledge, which has to be either collected manually at high cost or extracted automatically with non-negligible errors.