Search Results for author: Haoyu Gao

Found 7 papers, 3 papers with code

Antelope: Potent and Concealed Jailbreak Attack Strategy

no code implementations · 11 Dec 2024 · Xin Zhao, Xiaojun Chen, Haoyu Gao

Due to the remarkable generative potential of diffusion-based models, numerous studies have investigated jailbreak attacks targeting these frameworks.

Prompt Engineering

An Explicit Discrete-Time Dynamic Vehicle Model with Assured Numerical Stability

no code implementations · 26 Nov 2024 · Guojian Zhan, Qiang Ge, Haoyu Gao, Yuming Yin, Bin Zhao, Shengbo Eben Li

After validation, we conduct comprehensive simulations comparing our proposed model with both kinematic models and existing dynamic models discretized with the forward Euler method.
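
The entry above contrasts an explicit discrete-time formulation with dynamic models discretized by the forward Euler method. For reference, here is a minimal sketch of forward-Euler discretization applied to a generic linear-tire dynamic bicycle model; the model form, parameter values, and step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative parameters (not from the paper): mass, yaw inertia,
# front/rear axle distances, and linear tire cornering stiffnesses.
m, Iz = 1500.0, 2500.0          # kg, kg*m^2
lf, lr = 1.2, 1.4               # m
Cf, Cr = 8.0e4, 8.0e4           # N/rad

def dynamic_bicycle_deriv(state, vx, delta):
    """Continuous-time lateral dynamics; state = [vy, yaw_rate]."""
    vy, r = state
    # Small-angle linear tire model (a common simplification).
    alpha_f = delta - (vy + lf * r) / vx
    alpha_r = -(vy - lr * r) / vx
    Fyf, Fyr = Cf * alpha_f, Cr * alpha_r
    vy_dot = (Fyf + Fyr) / m - vx * r
    r_dot = (lf * Fyf - lr * Fyr) / Iz
    return np.array([vy_dot, r_dot])

def forward_euler_step(state, vx, delta, dt):
    """One forward-Euler step; this scheme can lose stability for large dt."""
    return state + dt * dynamic_bicycle_deriv(state, vx, delta)

state = np.zeros(2)
for _ in range(100):                      # 1 s of simulation at dt = 10 ms
    state = forward_euler_step(state, vx=20.0, delta=0.02, dt=0.01)
print(state)                              # [vy, yaw_rate] after a constant steering input
```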

QGait: Toward Accurate Quantization for Gait Recognition with Binarized Input

no code implementations · 22 May 2024 · Senmao Tian, Haoyu Gao, Gangyi Hong, Shuyun Wang, JingJie Wang, Xin Yu, Shunli Zhang

Minor variations in silhouette sequences can be diminished in the network's intermediate layers due to the accumulation of quantization errors.

Gait Recognition · Quantization
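
The QGait abstract notes that minor variations in silhouette sequences can be washed out by accumulated quantization error. The toy sketch below (a small random MLP with uniformly fake-quantized activations, not QGait's actual network or quantizer) illustrates how a tiny input perturbation can be rounded into the same activation bins layer after layer.

```python
import numpy as np

def fake_quantize(x, num_bits=4):
    """Uniform per-tensor fake quantization (toy stand-in, not QGait's scheme)."""
    qmax = 2 ** num_bits - 1
    scale = (x.max() - x.min()) / qmax + 1e-8
    q = np.round((x - x.min()) / scale)
    return q * scale + x.min()

rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 64)) * 0.1 for _ in range(6)]

def forward(x, quantize):
    """Tiny MLP: same weights, with or without activation quantization."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)          # linear layer + ReLU
        if quantize:
            x = fake_quantize(x)            # rounding error injected at every layer
    return x

x = rng.standard_normal(64)
x_perturbed = x + 1e-3 * rng.standard_normal(64)   # a "minor variation"

# With quantized activations, the tiny perturbation is often rounded into the
# same bins, so the feature gap can collapse toward zero.
gap_fp = np.linalg.norm(forward(x, False) - forward(x_perturbed, False))
gap_q = np.linalg.norm(forward(x, True) - forward(x_perturbed, True))
print(f"float feature gap: {gap_fp:.4e}, quantized feature gap: {gap_q:.4e}")
```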

Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models

no code implementations · 22 Sep 2023 · Haoyu Gao, Ting-En Lin, Hangyu Li, Min Yang, Yuchuan Wu, Wentao Ma, Yongbin Li

Task-oriented dialogue (TOD) systems help users accomplish various tasks through multi-turn dialogues, but Large Language Models (LLMs) often struggle to comprehend these intricate contexts.

Dialogue Understanding
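
One way to picture self-explanation prompting is a template that asks the model to explain each utterance before answering the task question. The helper function and prompt wording below are illustrative assumptions, not the paper's exact template.

```python
def build_self_explanation_prompt(dialogue_turns, question):
    """Assemble a self-explanation style prompt for a multi-turn dialogue."""
    history = "\n".join(
        f"{speaker}: {utterance}" for speaker, utterance in dialogue_turns
    )
    return (
        "You will read a task-oriented dialogue.\n"
        "First, explain in one sentence what each utterance is trying to do.\n"
        "Then, using your explanations, answer the question.\n\n"
        f"Dialogue:\n{history}\n\n"
        f"Question: {question}\n"
        "Explanations:"
    )

turns = [
    ("User", "I need a cheap restaurant in the city centre."),
    ("System", "Sure, what type of food would you like?"),
    ("User", "Italian, and book a table for two at 7pm."),
]
prompt = build_self_explanation_prompt(turns, "Which slots has the user specified?")
print(prompt)  # send this to any chat LLM of your choice
```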

SpokenWOZ: A Large-Scale Speech-Text Benchmark for Spoken Task-Oriented Dialogue Agents

1 code implementation · NeurIPS 2023 · Shuzheng Si, Wentao Ma, Haoyu Gao, Yuchuan Wu, Ting-En Lin, Yinpei Dai, Hangyu Li, Rui Yan, Fei Huang, Yongbin Li

SpokenWOZ further incorporates common spoken characteristics such as word-by-word processing and reasoning in spoken language.

Speech-Text Dialog Pre-training for Spoken Dialog Understanding with Explicit Cross-Modal Alignment

1 code implementation · 19 May 2023 · Tianshu Yu, Haoyu Gao, Ting-En Lin, Min Yang, Yuchuan Wu, Wentao Ma, Chao Wang, Fei Huang, Yongbin Li

In this paper, we propose Speech-text dialog Pre-training for spoken dialog understanding with ExpliCiT cRoss-Modal Alignment (SPECTRA), which is the first-ever speech-text dialog pre-training model.

Ranked #2 on Multimodal Sentiment Analysis on CMU-MOSI (Acc-2 metric, using extra training data)

cross-modal alignment · Emotion Recognition in Conversation · +2
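
Explicit cross-modal alignment can be illustrated with a generic CLIP-style contrastive objective between paired speech and text embeddings; the sketch below is an assumed stand-in and is not claimed to be SPECTRA's actual pre-training loss.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(speech_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE between paired speech/text utterance embeddings.

    A generic cross-modal alignment objective shown only to illustrate the
    idea of explicitly tying the two modalities together.
    """
    speech_emb = F.normalize(speech_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = speech_emb @ text_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(speech_emb.size(0))         # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random "encoder outputs".
speech = torch.randn(8, 256)   # e.g., pooled speech encoder states
text = torch.randn(8, 256)     # e.g., pooled text encoder states
print(contrastive_alignment_loss(speech, text).item())
```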
