Search Results for author: Hong Qu

Found 13 papers, 2 papers with code

MLPs Compass: What is learned when MLPs are combined with PLMs?

no code implementations · 3 Jan 2024 · Li Zhou, Wenyu Chen, Yong Cao, Dingyi Zeng, Wanlong Liu, Hong Qu

While Transformer-based pre-trained language models (PLMs) and their variants exhibit strong semantic representation capabilities, comprehending the information gain derived from their additional components remains an open question in this field.

Temporal-Coded Spiking Neural Networks with Dynamic Firing Threshold: Learning with Event-Driven Backpropagation

no code implementations · ICCV 2023 · Wenjie Wei, Malu Zhang, Hong Qu, Ammar Belatreche, Jian Zhang, Hong Chen

As a temporal encoding scheme for SNNs, Time-To-First-Spike (TTFS) encoding conveys information through the timing of a single spike, which allows spiking neurons to transmit information via sparse spike trains and yields lower power consumption and higher computational efficiency than traditional rate-based encoding.

Computational Efficiency · Image Classification
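
The paper has no public code, but the basic idea behind TTFS coding is easy to sketch independently of the authors' implementation: each normalized input is mapped to a single spike time, with stronger inputs firing earlier. The linear latency rule and the t_max horizon below are assumptions for illustration, not the paper's exact scheme.

    import numpy as np

    def ttfs_encode(intensities, t_max=100.0):
        """Generic Time-To-First-Spike coding: map each normalized input
        in [0, 1] to one spike time; stronger inputs fire earlier."""
        x = np.clip(np.asarray(intensities, dtype=float), 0.0, 1.0)
        # Assumed linear latency rule: x = 1.0 spikes at t = 0, x = 0.0 at t_max.
        return (1.0 - x) * t_max

    print(ttfs_encode([0.9, 0.5, 0.1]))  # -> [10. 50. 90.]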

Improving Image Captioning with Control Signal of Sentence Quality

no code implementations · 7 Jun 2022 · Zhangzi Zhu, Hong Qu

In image captioning datasets, each image is paired with several descriptions.

Image Captioning · Sentence

Double Thompson Sampling in Finite Stochastic Games

no code implementations · 21 Feb 2022 · Shuqing Shi, Xiaobin Wang, Zhiyou Yang, Fan Zhang, Hong Qu

This algorithm achieves a total regret bound of $\tilde{\mathcal{O}}(D\sqrt{SAT})$ in time horizon $T$ with $S$ states, $A$ actions, and diameter $D$.

Thompson Sampling
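
As background only (the paper itself releases no code), the sketch below runs plain Thompson sampling on a Bernoulli bandit, a far simpler single-agent setting than the finite stochastic games studied above; the arm reward rates are made-up values for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    true_means = np.array([0.3, 0.5, 0.7])  # hypothetical arm reward rates
    alpha = np.ones(3)                      # Beta(1, 1) prior: successes + 1
    beta = np.ones(3)                       # Beta(1, 1) prior: failures + 1

    for t in range(2000):
        theta = rng.beta(alpha, beta)       # sample a plausible mean per arm
        a = int(np.argmax(theta))           # act greedily on the sample
        reward = float(rng.random() < true_means[a])
        alpha[a] += reward                  # conjugate posterior update
        beta[a] += 1.0 - reward

    print(alpha / (alpha + beta))           # posterior mean estimate per arm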

Self-Annotated Training for Controllable Image Captioning

no code implementations · 16 Oct 2021 · Zhangzi Zhu, Tianlei Wang, Hong Qu

In this paper, we propose Self-Annotated Training (SAT), a novel reinforcement training method for structure-related control signals, to improve both the accuracy and controllability of controllable image captioning (CIC) models.

Controllable Image Captioning · Sentence +1

DPGNN: Dual-Perception Graph Neural Network for Representation Learning

no code implementations · 15 Oct 2021 · Li Zhou, Wenyu Chen, Dingyi Zeng, Shaohuan Cheng, Wanlong Liu, Malu Zhang, Hong Qu

To address these drawbacks, we present a novel message-passing paradigm based on multi-step message sources, node-specific message output, and multi-space message interaction.

Graph Neural Network · Graph Representation Learning
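
DPGNN itself has no public implementation; the snippet below only illustrates the generic multi-step message-source idea: node features are propagated over several hops with a row-normalized adjacency matrix and the per-hop views are concatenated, so each node sees messages from multiple neighborhood ranges. It is a sketch, not the authors' architecture.

    import numpy as np

    def multi_step_messages(A, X, steps=2):
        """Concatenate features aggregated from 0..steps-hop neighborhoods.
        A: (n, n) adjacency matrix, X: (n, d) node features."""
        deg = A.sum(axis=1, keepdims=True)
        A_hat = A / np.maximum(deg, 1.0)      # row-normalize: each hop averages
        views, H = [X], X
        for _ in range(steps):
            H = A_hat @ H                     # one more propagation step
            views.append(H)
        return np.concatenate(views, axis=1)  # node-specific multi-hop view

    A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy path graph
    X = np.eye(3)
    print(multi_step_messages(A, X).shape)    # -> (3, 9)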

Generating Human Readable Transcript for Automatic Speech Recognition with Pre-trained Language Model

no code implementations · 22 Feb 2021 · Junwei Liao, Yu Shi, Ming Gong, Linjun Shou, Sefik Eskimez, Liyang Lu, Hong Qu, Michael Zeng

Many downstream tasks and human readers rely on the output of the ASR system; therefore, errors introduced by the speaker and ASR system alike will be propagated to the next task in the pipeline.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +3

Improving Zero-shot Neural Machine Translation on Language-specific Encoders-Decoders

no code implementations · 12 Feb 2021 · Junwei Liao, Yu Shi, Ming Gong, Linjun Shou, Hong Qu, Michael Zeng

However, zero-shot translation with multiple encoders and decoders still lags behind universal NMT.

Decoder · Denoising +3

Macroscopic Control of Text Generation for Image Captioning

no code implementations · 20 Jan 2021 · Zhangzi Zhu, Tianlei Wang, Hong Qu

With such a control signal, the controllability and diversity of existing captioning models are enhanced.

Diversity · Image Captioning +4

Improving Readability for Automatic Speech Recognition Transcription

no code implementations · 9 Apr 2020 · Junwei Liao, Sefik Emre Eskimez, Liyang Lu, Yu Shi, Ming Gong, Linjun Shou, Hong Qu, Michael Zeng

In this work, we propose a novel NLP task called ASR post-processing for readability (APR), which aims to transform noisy ASR output into text that is readable for humans and downstream tasks while preserving the semantic meaning of the speaker.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +3

Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks

no code implementations · 26 Mar 2020 · Malu Zhang, Jiadong Wang, Burin Amornpaisannon, Zhixuan Zhang, VPK Miriyala, Ammar Belatreche, Hong Qu, Jibin Wu, Yansong Chua, Trevor E. Carlson, Haizhou Li

In the STDBP algorithm, the timing of individual spikes is used to convey information (temporal coding), and learning (backpropagation) is performed in an event-driven manner based on spike timing.

Decision Making
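
No code accompanies the paper, but the appeal of event-driven learning on spike times can be sketched as a worked equation. Assuming a rectified linear PSP kernel $K(t - t_i) = t - t_i$ for $t > t_i$ (and $0$ otherwise), the membrane potential is linear in $t$, so the output spike time has a closed form that is differentiable in the weights; this is a sketch under that assumed kernel, not a transcription of the paper's derivation:

$$V(t) = \sum_i w_i (t - t_i), \qquad V(t_{\mathrm{out}}) = \theta \;\Rightarrow\; t_{\mathrm{out}} = \frac{\theta + \sum_i w_i t_i}{\sum_i w_i}, \qquad \frac{\partial t_{\mathrm{out}}}{\partial w_i} = \frac{t_i - t_{\mathrm{out}}}{\sum_j w_j},$$

where the sums run over presynaptic spikes with $t_i < t_{\mathrm{out}}$. Gradients of a spike-time loss can then flow through $t_{\mathrm{out}}$ exactly at spike events, which is what makes the backpropagation event-driven.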
