no code implementations • 3 Jan 2024 • Li Zhou, Wenyu Chen, Yong Cao, Dingyi Zeng, Wanlong Liu, Hong Qu
While Transformer-based pre-trained language models (PLMs) and their variants exhibit strong semantic representation capabilities, comprehending the information gain derived from their additional components remains an open question in this field.
1 code implementation • Association for Computational Linguistics 2023 • Wanlong Liu, Shaohuan Cheng, Dingyi Zeng, Hong Qu
Document-level event argument extraction poses new challenges of long inputs and cross-sentence inference compared to its sentence-level counterpart.
Ranked #1 on Event Argument Extraction on WikiEvents (F1 metric)
no code implementations • ICCV 2023 • Wenjie Wei, Malu Zhang, Hong Qu, Ammar Belatreche, Jian Zhang, Hong Chen
As a temporal encoding scheme for SNNs, Time-To-First-Spike (TTFS) encodes information in the timing of a single spike. This allows spiking neurons to transmit information through sparse spike trains, yielding lower power consumption and higher computational efficiency than traditional rate-based encoding.
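TTFS implementations differ across papers; the sketch below is only a rough illustration of the general idea (stronger inputs fire earlier, and each input emits at most one spike), with the linear latency mapping and `t_max` chosen arbitrarily rather than taken from this paper:

```python
import numpy as np

def ttfs_encode(intensities, t_max=100.0):
    """Map normalized input intensities in [0, 1] to first-spike times.

    Stronger inputs fire earlier; a zero intensity never fires
    (represented here as np.inf). Each input emits at most one spike,
    which is the source of TTFS's sparsity.
    """
    intensities = np.asarray(intensities, dtype=float)
    times = np.full(intensities.shape, np.inf)
    active = intensities > 0
    times[active] = t_max * (1.0 - intensities[active])
    return times

# Example: a bright pixel (0.9) spikes early, a dim one (0.1) late.
print(ttfs_encode([0.9, 0.5, 0.1, 0.0], t_max=100.0))
# -> [10. 50. 90. inf]
```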
no code implementations • 7 Jun 2022 • Zhangzi Zhu, Hong Qu
In image captioning datasets, each image is paired with several reference descriptions.
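This one-image-to-many-descriptions structure can be pictured with a minimal, made-up example (the image id and captions are invented; MS-COCO-style datasets pair each image with roughly five references):

```python
# Hypothetical multi-reference captioning data: one image,
# several ground-truth descriptions.
references = {
    "img_001": [
        "a dog runs across a grassy field",
        "a brown dog playing outside",
        "the dog is chasing a ball on the lawn",
    ],
}

# Metrics such as BLEU or CIDEr score a generated caption
# against all references for that image at once.
candidate = "a dog is running on the grass"
print(len(references["img_001"]), "references for img_001")
```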
no code implementations • 21 Feb 2022 • Shuqing Shi, Xiaobin Wang, Zhiyou Yang, Fan Zhang, Hong Qu
This algorithm achieves a total regret bound of $\tilde{\mathcal{O}}(D\sqrt{SAT})$ over time horizon $T$ with $S$ states, $A$ actions, and diameter $D$.
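To make the scaling of the bound concrete, the toy computation below evaluates the leading term $D\sqrt{SAT}$, ignoring the constants and logarithmic factors hidden by $\tilde{\mathcal{O}}$; the parameter values are arbitrary:

```python
import math

def regret_scale(D, S, A, T):
    """Leading-order term of the O~(D * sqrt(S*A*T)) regret bound.

    Ignores the constants and log factors hidden by tilde-O notation;
    useful only for seeing how the bound grows with each parameter.
    """
    return D * math.sqrt(S * A * T)

# Doubling the horizon T multiplies the bound by sqrt(2), so the
# per-step regret (regret / T) shrinks as T grows: no-regret learning.
for T in (10_000, 20_000, 40_000):
    r = regret_scale(D=10, S=50, A=5, T=T)
    print(f"T={T:>6}  bound~{r:,.0f}  per-step~{r / T:.3f}")
```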
no code implementations • 16 Oct 2021 • Zhangzi Zhu, Tianlei Wang, Hong Qu
In this paper, we propose Self-Annotated Training (SAT), a novel reinforcement training method for structure-related control signals, to improve both the accuracy and controllability of controllable image captioning (CIC) models.
no code implementations • 15 Oct 2021 • Li Zhou, Wenyu Chen, Dingyi Zeng, Shaohuan Cheng, Wanlong Liu, Malu Zhang, Hong Qu
To address these drawbacks, we present a novel message-passing paradigm, based on the properties of multi-step message source, node-specific message output, and multi-space message interaction.
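For orientation, a single vanilla message-passing step looks like the sketch below; it shows only the standard aggregate-and-update pattern that such paradigms build on, not the paper's multi-step, node-specific, multi-space variant:

```python
import numpy as np

def message_passing_step(H, adj, W):
    """One vanilla message-passing step: aggregate neighbor features,
    then apply a shared linear transform and a nonlinearity.

    H:   (n_nodes, d) node feature matrix
    adj: (n_nodes, n_nodes) adjacency matrix (0/1)
    W:   (d, d) weight matrix
    """
    # Mean-aggregate messages from neighbors (avoid divide-by-zero).
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    messages = adj @ H / deg
    # Update: shared transform + ReLU.
    return np.maximum(messages @ W, 0.0)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
W = rng.normal(size=(8, 8))
print(message_passing_step(H, adj, W).shape)  # (4, 8)
```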
no code implementations • 22 Feb 2021 • Junwei Liao, Yu Shi, Ming Gong, Linjun Shou, Sefik Eskimez, Liyang Lu, Hong Qu, Michael Zeng
Many downstream tasks and human readers rely on the output of the ASR system; therefore, errors introduced by the speaker and ASR system alike will be propagated to the next task in the pipeline.
Automatic Speech Recognition (ASR) +3
no code implementations • 12 Feb 2021 • Junwei Liao, Yu Shi, Ming Gong, Linjun Shou, Hong Qu, Michael Zeng
However, the performance of using multiple encoders and decoders on zero-shot translation still lags behind universal NMT.
no code implementations • 20 Jan 2021 • Zhangzi Zhu, Tianlei Wang, Hong Qu
With such a control signal, the controllability and diversity of existing captioning models are enhanced.
1 code implementation • COLING 2020 • Rubungo Andre Niyongabo, Hong Qu, Julia Kreutzer, Li Huang
Recent progress in text classification has been focused on high-resource languages such as English and Chinese.
no code implementations • 9 Apr 2020 • Junwei Liao, Sefik Emre Eskimez, Liyang Lu, Yu Shi, Ming Gong, Linjun Shou, Hong Qu, Michael Zeng
In this work, we propose a novel NLP task called ASR post-processing for readability (APR) that aims to transform the noisy ASR output into a readable text for humans and downstream tasks while maintaining the semantic meaning of the speaker.
Automatic Speech Recognition (ASR) +3
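A made-up before/after pair illustrates the kinds of surface transformations APR targets (filler removal, casing, punctuation); real APR systems are learned end to end, and the toy rules below only hint at the task:

```python
FILLERS = {"uh", "um", "erm"}

def toy_apr(asr_output: str) -> str:
    """Toy rule-based cleanup hinting at what APR models learn.

    Removes fillers, restores initial capitalization, and adds
    terminal punctuation; a trained model would also handle
    grammatical errors and number/date formatting.
    """
    words = [w for w in asr_output.split() if w not in FILLERS]
    text = " ".join(words)
    text = text[0].upper() + text[1:] if text else text
    return text if text.endswith((".", "?", "!")) else text + "."

print(toy_apr("uh i think we should um meet at three p m"))
# -> "I think we should meet at three p m."
```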
no code implementations • 26 Mar 2020 • Malu Zhang, Jiadong Wang, Burin Amornpaisannon, Zhixuan Zhang, VPK Miriyala, Ammar Belatreche, Hong Qu, Jibin Wu, Yansong Chua, Trevor E. Carlson, Haizhou Li
In the STDBP algorithm, the timing of individual spikes is used to convey information (temporal coding), and learning (back-propagation) is performed based on spike timing in an event-driven manner.
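Under a deliberately simplified neuron model (linear postsynaptic-potential ramps, all input spikes arriving before the output spike, net-positive drive), the first-spike time has a closed form and an exact weight gradient, which conveys the flavor of event-driven, timing-based credit assignment; this toy is not the neuron model or learning rule used in STDBP:

```python
import numpy as np

def first_spike_time(w, t_in, theta=1.0):
    """First-spike time for a toy neuron with linear PSP ramps.

    Membrane potential: V(t) = sum_i w_i * max(t - t_i, 0).
    If all inputs arrive before the output spike and the total
    weight is positive, the threshold crossing is
        t* = (theta + sum_i w_i * t_i) / sum_i w_i.
    """
    s = w.sum()
    assert s > 0, "toy model needs net-positive drive"
    return (theta + w @ t_in) / s

def grad_wrt_weights(w, t_in, theta=1.0):
    """Exact gradient dt*/dw_j = (t_j - t*) / sum_i w_i.

    The gradient depends only on spike times, so credit assignment
    can be computed per spike event rather than per time step.
    """
    t_star = first_spike_time(w, t_in, theta)
    return (t_in - t_star) / w.sum()

w = np.array([0.6, 0.4])
t_in = np.array([1.0, 2.0])
print(first_spike_time(w, t_in))   # 2.4
print(grad_wrt_weights(w, t_in))   # [-1.4 -0.4]
```

Increasing the weight of the earlier input pulls the output spike earlier more strongly, which is visible in the larger-magnitude gradient on the first weight.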