Search Results for author: Jun Qi

Found 16 papers, 6 papers with code

Classical-to-Quantum Transfer Learning for Spoken Command Recognition Based on Quantum Neural Networks

no code implementations17 Oct 2021 Jun Qi, Javier Tejedor

Our QNN-based SCR system is composed of classical and quantum components: (1) the classical part mainly relies on a 1D convolutional neural network (CNN) to extract speech features; (2) the quantum part is built upon a variational quantum circuit with a few learnable parameters.
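
As a rough illustration of this classical-quantum split, here is a minimal PyTorch sketch; the layer sizes are our assumptions, and the variational quantum circuit is stood in for by a small trainable layer, since the paper's quantum component would run on a quantum simulator or device.

```python
import torch
import torch.nn as nn

class HybridSCRModel(nn.Module):
    """Hypothetical sketch of the classical-quantum split described above."""
    def __init__(self, n_mels=40, n_qubits=8, n_commands=10):
        super().__init__()
        # Classical part: 1D CNN over the time axis of a mel-spectrogram.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_mels, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(32, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Encode features into as many "rotation angles" as there are qubits.
        self.to_angles = nn.Linear(16, n_qubits)
        # Stand-in for the variational quantum circuit: a few learnable
        # parameters (assumption: a linear surrogate, not a real VQC).
        self.vqc_stub = nn.Linear(n_qubits, n_commands)

    def forward(self, x):            # x: (batch, n_mels, time)
        h = self.cnn(x).squeeze(-1)  # (batch, 16)
        angles = torch.tanh(self.to_angles(h))  # bounded angle encoding
        return self.vqc_stub(angles)             # command logits
```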

Fine-tuning Transfer Learning

QTN-VQC: An End-to-End Learning framework for Quantum Neural Networks

no code implementations6 Oct 2021 Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen

The advent of noisy intermediate-scale quantum (NISQ) computers raises the crucial challenge of designing quantum neural networks for fully quantum learning tasks.

Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition

2 code implementations26 Oct 2020 Chao-Han Huck Yang, Jun Qi, Samuel Yen-Chi Chen, Pin-Yu Chen, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee

Tested on the Google Speech Commands dataset, the proposed QCNN encoder attains a competitive accuracy of 95.12% in a decentralized model, which is better than previous architectures using centralized RNN models with convolutional features.

Ranked #1 on Keyword Spotting on Google Speech Commands (10-keyword Speech Commands dataset)

Automatic Speech Recognition Federated Learning +2

MFL_COVID19: Quantifying Country-based Factors affecting Case Fatality Rate in Early Phase of COVID-19 Epidemic via Regularised Multi-task Feature Learning

no code implementations6 Sep 2020 Po Yang, Jun Qi, Xulong Wang, Yun Yang

The fused sparse group Lasso (FSGL) method allows the simultaneous selection of a common set of country-based factors across multiple time points of the COVID-19 epidemic, and also incorporates temporal smoothness of each factor over the whole early-phase period.
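
A minimal numpy sketch of what such an FSGL objective could look like; the variable names, shapes, and regularization weights are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def fsgl_objective(W, X, Y, lam1=0.1, lam2=0.1, lam3=0.1):
    """Hypothetical fused sparse group Lasso objective over T time points.

    W: (d, T), one weight column per time point (task); X: (n, d); Y: (n, T).
    lam1: element-wise sparsity; lam2: row-group sparsity, selecting a common
    factor set across time points; lam3: temporal fusion, penalizing change
    of each factor between consecutive time points.
    """
    fit = 0.5 * np.sum((X @ W - Y) ** 2)
    l1 = lam1 * np.sum(np.abs(W))                      # sparse
    group = lam2 * np.sum(np.linalg.norm(W, axis=1))   # one group per factor
    fused = lam3 * np.sum(np.abs(np.diff(W, axis=1)))  # temporal smoothness
    return fit + l1 + group + fused
```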

Feature Selection Multi-Task Learning

Variational Inference-Based Dropout in Recurrent Neural Networks for Slot Filling in Spoken Language Understanding

no code implementations23 Aug 2020 Jun Qi, Xu Liu, Javier Tejedor

This paper generalizes variational inference (VI)-based dropout regularization, previously employed for long short-term memory (LSTM) cells in the variational recurrent neural network (RNN), to more advanced RNN architectures such as the gated recurrent unit (GRU) and bi-directional LSTM/GRU.
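
Under the usual reading of VI-based (variational) dropout, one dropout mask is sampled per sequence and reused at every time step, rather than redrawn per step. A minimal PyTorch sketch around a GRU, with hyperparameters chosen for illustration:

```python
import torch
import torch.nn as nn

class VariationalDropoutGRU(nn.Module):
    """Sketch: the same Bernoulli mask is applied at every time step."""
    def __init__(self, input_size, hidden_size, p=0.3):
        super().__init__()
        self.gru = nn.GRU(input_size, hidden_size, batch_first=True)
        self.p = p

    def forward(self, x):  # x: (batch, time, input_size)
        if self.training and self.p > 0:
            # One mask per sequence; shape broadcasts over the time axis.
            mask = x.new_empty(x.size(0), 1, x.size(2)).bernoulli_(1 - self.p)
            x = x * mask / (1 - self.p)  # inverted-dropout scaling
        out, _ = self.gru(x)
        return out
```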

Language Understanding Slot Filling +2

On Mean Absolute Error for Deep Neural Network Based Vector-to-Vector Regression

no code implementations12 Aug 2020 Jun Qi, Jun Du, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee

In this paper, we exploit the properties of the mean absolute error (MAE) as a loss function for deep neural network (DNN)-based vector-to-vector regression.
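
For reference, training a vector-to-vector regressor under MAE is a one-line change from the usual MSE setup; the 257-dimensional input/output below is an assumption (e.g., one STFT magnitude frame), not taken from the paper:

```python
import torch
import torch.nn as nn

# Minimal vector-to-vector regressor trained with the MAE (L1) loss.
model = nn.Sequential(nn.Linear(257, 1024), nn.ReLU(), nn.Linear(1024, 257))
mae = nn.L1Loss()  # mean absolute error between predicted and target vectors

x, y = torch.randn(8, 257), torch.randn(8, 257)
loss = mae(model(x), y)
loss.backward()
```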

Speech Enhancement

Analyzing Upper Bounds on Mean Absolute Errors for Deep Neural Network Based Vector-to-Vector Regression

no code implementations4 Aug 2020 Jun Qi, Jun Du, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee

In this paper, we show that, in vector-to-vector regression utilizing deep neural networks (DNNs), a generalized loss of mean absolute error (MAE) between the predicted and expected feature vectors is upper bounded by the sum of an approximation error, an estimation error, and an optimization error.
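
In symbols, the decomposition stated in the abstract reads roughly as follows; the notation is ours, not the paper's:

```latex
% \hat{f}: the trained DNN regressor returned by the optimizer.
\mathrm{MAE}(\hat{f}) \;\le\;
    \underbrace{\epsilon_{\mathrm{approx}}}_{\text{richness of the model class}}
  + \underbrace{\epsilon_{\mathrm{est}}}_{\text{finite training sample}}
  + \underbrace{\epsilon_{\mathrm{opt}}}_{\text{non-convex training}}
```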

Learning Theory Speech Enhancement

Exploring Deep Hybrid Tensor-to-Vector Network Architectures for Regression Based Speech Enhancement

2 code implementations25 Jul 2020 Jun Qi, Hu Hu, Yannan Wang, Chao-Han Huck Yang, Sabato Marco Siniscalchi, Chin-Hui Lee

Finally, our multi-channel speech enhancement experiments on a simulated noisy WSJ0 corpus demonstrate that the proposed hybrid CNN-TT architecture outperforms both DNN and CNN models, delivering better enhanced-speech quality with smaller parameter sizes.

Speech Enhancement Speech Quality

Characterizing Speech Adversarial Examples Using Self-Attention U-Net Enhancement

no code implementations31 Mar 2020 Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, Chin-Hui Lee

Recent studies have highlighted adversarial examples as ubiquitous threats to deep neural network (DNN)-based speech recognition systems.

Automatic Speech Recognition Data Augmentation +3

Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning

no code implementations20 Feb 2020 Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Yi Ouyang, I-Te Danny Hung, Chin-Hui Lee, Xiaoli Ma

Recent deep neural network-based techniques, especially those equipped with system-level self-adaptation such as deep reinforcement learning (DRL), have been shown to possess many advantages for optimizing robot learning systems (e.g., autonomous navigation and continuous robot arm control).

Autonomous Navigation

Tensor-to-Vector Regression for Multi-channel Speech Enhancement based on Tensor-Train Network

2 code implementations3 Feb 2020 Jun Qi, Hu Hu, Yannan Wang, Chao-Han Huck Yang, Sabato Marco Siniscalchi, Chin-Hui Lee

Finally, in 8-channel conditions, a PESQ of 3.12 is achieved using 20 million parameters for the TTN, whereas a DNN with 68 million parameters attains only a PESQ of 3.06.

Speech Enhancement

Submodular Rank Aggregation on Score-based Permutations for Distributed Automatic Speech Recognition

1 code implementation27 Jan 2020 Jun Qi, Chao-Han Huck Yang, Javier Tejedor

Distributed automatic speech recognition (ASR) requires aggregating the outputs of distributed deep neural network (DNN)-based models.
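
For intuition, a toy score-fusion step is sketched below; the paper's contribution is learning the aggregation with submodular functions, which is not shown here, and all names and numbers are illustrative.

```python
import numpy as np

def aggregate_scores(score_lists, weights=None):
    """Fuse per-model hypothesis scores into one ranking (toy baseline).

    score_lists: (n_models, n_hypotheses) scores from distributed models.
    Returns hypothesis indices ranked best-first.
    """
    scores = np.asarray(score_lists, dtype=float)
    if weights is None:                      # uniform fusion weights
        weights = np.full(scores.shape[0], 1.0 / scores.shape[0])
    fused = weights @ scores                 # weighted average per hypothesis
    return np.argsort(-fused)

# Three distributed ASR models scoring four hypotheses:
print(aggregate_scores([[0.9, 0.2, 0.6, 0.1],
                        [0.7, 0.4, 0.8, 0.2],
                        [0.8, 0.1, 0.7, 0.3]]))  # -> [0 2 1 3]
```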

Automatic Speech Recognition Speech Recognition

Variational Quantum Circuits for Deep Reinforcement Learning

1 code implementation30 Jun 2019 Samuel Yen-Chi Chen, Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, Hsi-Sheng Goan

To the best of our knowledge, this work is the first proof-of-principle demonstration of variational quantum circuits to approximate the deep $Q$-value function for decision-making and policy-selection reinforcement learning with experience replay and target network.
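
A minimal sketch of the training loop named in the abstract, i.e., Q-learning with experience replay and a target network; the variational quantum circuit is stood in for by a tiny MLP, since evaluating a real VQC requires a quantum simulator, and the state/action sizes are assumptions.

```python
import random
from collections import deque
import torch
import torch.nn as nn

# The Q-function approximator; a real VQC would replace this MLP.
q_net = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 2))
target_net = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 2))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer
gamma = 0.99

# After each environment step, store float tensors:
# replay.append((state, action, reward, next_state, done))

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    s, a, r, s2, done = map(torch.stack, zip(*random.sample(replay, batch_size)))
    with torch.no_grad():
        # The frozen target network provides the bootstrap target.
        target = r + gamma * target_net(s2).max(dim=1).values * (1 - done)
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()

# Periodically sync: target_net.load_state_dict(q_net.state_dict())
```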

Decision Making Quantum Machine Learning

Riemannian Stochastic Gradient Descent for Tensor-Train Recurrent Neural Networks

no code implementations ICLR 2019 Jun Qi, Chin-Hui Lee, Javier Tejedor

The Tensor-Train factorization (TTF) is an efficient way to compress large weight matrices of fully-connected layers and recurrent layers in recurrent neural networks (RNNs).
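
As a toy illustration of the compression TTF provides, a 256x256 dense weight matrix (65,536 parameters) can be replaced by three small TT-cores; the mode shapes and TT-ranks below are our choices for illustration, not the paper's.

```python
import numpy as np

# Factorize the row/column index sets of a 256x256 weight matrix into
# three modes each: 256 = 4 * 8 * 8 on both sides (hypothetical shapes).
in_modes, out_modes, tt_rank = (4, 8, 8), (4, 8, 8), 4

cores, r_prev = [], 1
for k, (m, n) in enumerate(zip(in_modes, out_modes)):
    r_next = 1 if k == len(in_modes) - 1 else tt_rank
    # Core k has shape (r_{k-1}, m_k, n_k, r_k).
    cores.append(np.random.randn(r_prev, m, n, r_next))
    r_prev = r_next

# Reconstruct the full matrix by contracting the TT-cores.
W = np.einsum('aijb,bklc,cmnd->ikmjln',
              cores[0], cores[1], cores[2]).reshape(256, 256)

print(sum(c.size for c in cores), 'TT parameters vs', W.size, 'dense')
# -> 1344 TT parameters vs 65536 dense
```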

Machine Translation Translation

Submodular Mini-Batch Training in Generative Moment Matching Networks

no code implementations18 Jul 2017 Jun Qi

This article was withdrawn because (1) it was uploaded without the co-authors' knowledge or consent, and (2) there are allegations of plagiarism.

Unsupervised Submodular Rank Aggregation on Score-based Permutations

1 code implementation4 Jul 2017 Jun Qi, Xu Liu, Javier Tejedor, Shunsuke Kamijo

Unsupervised rank aggregation on score-based permutations, which is widely used in many applications, has not been deeply explored yet.

Automatic Speech Recognition Information Retrieval +2
