Search Results for author: Qiquan Zhang

Found 8 papers, 1 paper with code

When LLMs Meets Acoustic Landmarks: An Efficient Approach to Integrate Speech into Large Language Models for Depression Detection

no code implementations17 Feb 2024 Xiangyu Zhang, Hexin Liu, Kaishuai Xu, Qiquan Zhang, Daijiao Liu, Beena Ahmed, Julien Epps

In addition, this approach is valuable not only for depression detection but also offers a new perspective on enhancing the ability of LLMs to comprehend and process speech signals.

Depression Detection

EEG-Derived Voice Signature for Attended Speaker Detection

no code implementations28 Aug 2023 Hongxu Zhu, Siqi Cai, Yidi Jiang, Qiquan Zhang, Haizhou Li

Conclusion: We conclude that it is possible to derive the attended speaker's voice signature from the EEG signals so as to detect the attended speaker in a listening brain.

EEG

PoE: a Panel of Experts for Generalized Automatic Dialogue Assessment

no code implementations18 Dec 2022 Chen Zhang, Luis Fernando D'Haro, Qiquan Zhang, Thomas Friedrichs, Haizhou Li

To tackle the multi-domain dialogue evaluation task, we propose a Panel of Experts (PoE), a multitask network that consists of a shared transformer encoder and a collection of lightweight adapters.

Data Augmentation · Dialogue Evaluation +4

FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation

2 code implementations25 Oct 2022 Chen Zhang, Luis Fernando D'Haro, Qiquan Zhang, Thomas Friedrichs, Haizhou Li

Recent model-based reference-free metrics for open-domain dialogue evaluation exhibit promising correlations with human judgment.

Dialogue Evaluation

Learning Reinforced Attentional Representation for End-to-End Visual Tracking

no code implementations27 Aug 2019 Peng Gao, Qiquan Zhang, Fei Wang, Liyi Xiao, Hamido Fujita, Yan Zhang

Although numerous recent tracking approaches have made tremendous advances in the last decade, achieving high-performance visual tracking remains a challenge.

Visual Tracking
