Search Results for author: Jun Qi

Found 25 papers, 8 papers with code

Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition

2 code implementations • 26 Oct 2020 • Chao-Han Huck Yang, Jun Qi, Samuel Yen-Chi Chen, Pin-Yu Chen, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee

Tested on the Google Speech Commands dataset, the proposed QCNN encoder attains a competitive accuracy of 95.12% in a decentralized model, which is better than the previous architectures using centralized RNN models with convolutional features.

 Ranked #1 on Keyword Spotting on Google Speech Commands (10-keyword Speech Commands dataset metric)

Automatic Speech Recognition (ASR) +3

Variational Quantum Circuits for Deep Reinforcement Learning

1 code implementation • 30 Jun 2019 • Samuel Yen-Chi Chen, Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, Hsi-Sheng Goan

To the best of our knowledge, this work is the first proof-of-principle demonstration of variational quantum circuits to approximate the deep $Q$-value function for decision-making and policy-selection reinforcement learning with experience replay and target network.

BIG-bench Machine Learning, Decision Making +3
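
The core object in this paper is a parameterized quantum circuit whose expectation values stand in for $Q$-values. As a rough illustration only (not the paper's circuit), a minimal NumPy sketch of a two-qubit variational circuit — RY rotations followed by a CNOT, measuring $\langle Z \otimes I\rangle$ — might look like this; `vqc_expectation` and the gate layout are hypothetical choices:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def vqc_expectation(params):
    """Apply RY(p0) x RY(p1), then CNOT, and return <Z x I> on |00>."""
    state = np.zeros(4)
    state[0] = 1.0                                  # start in |00>
    state = np.kron(ry(params[0]), ry(params[1])) @ state
    state = CNOT @ state
    z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))   # Z on qubit 0
    return state @ (z0 @ state)

# With all angles zero the circuit is the identity, so <Z x I> = 1.
print(vqc_expectation(np.zeros(2)))  # → 1.0
```

In the paper's setting such an expectation value would be trained (via a classical optimizer over `params`) to approximate the deep $Q$-value function; real implementations use a quantum SDK rather than a dense state-vector simulation.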

Tensor-to-Vector Regression for Multi-channel Speech Enhancement based on Tensor-Train Network

2 code implementations • 3 Feb 2020 • Jun Qi, Hu Hu, Yannan Wang, Chao-Han Huck Yang, Sabato Marco Siniscalchi, Chin-Hui Lee

Finally, in 8-channel conditions, a PESQ of 3.12 is achieved using 20 million parameters for TTN, whereas a DNN with 68 million parameters can only attain a PESQ of 3.06.

Regression, Speech Enhancement

Exploring Deep Hybrid Tensor-to-Vector Network Architectures for Regression Based Speech Enhancement

2 code implementations • 25 Jul 2020 • Jun Qi, Hu Hu, Yannan Wang, Chao-Han Huck Yang, Sabato Marco Siniscalchi, Chin-Hui Lee

Finally, our experiments of multi-channel speech enhancement on a simulated noisy WSJ0 corpus demonstrate that our proposed hybrid CNN-TT architecture achieves better results than both DNN and CNN models in terms of better-enhanced speech qualities and smaller parameter sizes.

Regression, Speech Enhancement

Theoretical Error Performance Analysis for Variational Quantum Circuit Based Functional Regression

1 code implementation • 8 Jun 2022 • Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen, Min-Hsiu Hsieh

In this work, we first put forth an end-to-end quantum neural network, TTN-VQC, which consists of a quantum tensor network based on a tensor-train network (TTN) for dimensionality reduction and a VQC for functional regression.

Dimensionality Reduction, Regression

Unsupervised Submodular Rank Aggregation on Score-based Permutations

1 code implementation • 4 Jul 2017 • Jun Qi, Xu Liu, Javier Tejedor, Shunsuke Kamijo

Unsupervised rank aggregation on score-based permutations, which is widely used in many applications, has not been deeply explored yet.

Automatic Speech Recognition (ASR) +4

Submodular Mini-Batch Training in Generative Moment Matching Networks

no code implementations • 18 Jul 2017 • Jun Qi

This article was withdrawn because (1) it was uploaded without the co-authors' knowledge or consent, and (2) there are allegations of plagiarism.

Riemannian Stochastic Gradient Descent for Tensor-Train Recurrent Neural Networks

no code implementations • ICLR 2019 • Jun Qi, Chin-Hui Lee, Javier Tejedor

The Tensor-Train factorization (TTF) is an efficient way to compress large weight matrices of fully-connected layers and recurrent layers in recurrent neural networks (RNNs).

Machine Translation, Translation
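
The compression argument behind TTF is a parameter count: a weight matrix of shape $(m_1 m_2) \times (n_1 n_2)$ is replaced by small TT cores of shapes $(1, m_1, n_1, r)$ and $(r, m_2, n_2, 1)$. A back-of-the-envelope sketch (the layer size, factorization, and TT-rank below are hypothetical choices, not the paper's):

```python
# Full weight matrix of a fully-connected layer: 256 x 1024.
m = (16, 16)   # factorization of the 256 input units
n = (32, 32)   # factorization of the 1024 output units
rank = 4       # TT-rank (hypothetical choice)

full_params = (m[0] * m[1]) * (n[0] * n[1])

# Two TT cores: (1, m1, n1, r) and (r, m2, n2, 1).
tt_params = m[0] * n[0] * rank + rank * m[1] * n[1]

print(full_params, tt_params, full_params / tt_params)  # → 262144 4096 64.0
```

Even this two-core toy gives a 64x reduction; longer tensor-trains over more index factors compress large recurrent weight matrices far more aggressively, which is what makes the Riemannian optimization over the TT manifold worthwhile.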

Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning

no code implementations • 20 Feb 2020 • Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Yi Ouyang, I-Te Danny Hung, Chin-Hui Lee, Xiaoli Ma

Recent deep neural network based techniques, especially those equipped with system-level self-adaptation such as deep reinforcement learning (DRL), have been shown to offer many advantages in optimizing robot learning systems (e.g., autonomous navigation and continuous robot arm control).

Autonomous Navigation, Reinforcement Learning +1

Characterizing Speech Adversarial Examples Using Self-Attention U-Net Enhancement

no code implementations • 31 Mar 2020 • Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, Chin-Hui Lee

Recent studies have highlighted adversarial examples as ubiquitous threats to the deep neural network (DNN) based speech recognition systems.

Automatic Speech Recognition (ASR) +3

Analyzing Upper Bounds on Mean Absolute Errors for Deep Neural Network Based Vector-to-Vector Regression

no code implementations • 4 Aug 2020 • Jun Qi, Jun Du, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee

In this paper, we show that, in vector-to-vector regression utilizing deep neural networks (DNNs), a generalized loss of mean absolute error (MAE) between the predicted and expected feature vectors is upper bounded by the sum of an approximation error, an estimation error, and an optimization error.

Learning Theory, Regression +2
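
The quantity being bounded is the ordinary MAE between predicted and expected feature vectors; the paper's contribution is splitting that loss into approximation, estimation, and optimization terms. The loss itself is just (a minimal sketch with made-up vectors):

```python
import numpy as np

def mae(pred, target):
    """Mean absolute error between predicted and expected feature vectors."""
    return np.mean(np.abs(pred - target))

pred = np.array([0.5, 1.5, 2.0])
target = np.array([1.0, 1.0, 2.0])
print(mae(pred, target))  # (0.5 + 0.5 + 0.0) / 3 ≈ 0.333
```

The upper bound then says this measured loss can never exceed the sum of the three error terms, each of which can be analyzed separately (model capacity, sample size, and optimizer behavior).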

On Mean Absolute Error for Deep Neural Network Based Vector-to-Vector Regression

no code implementations • 12 Aug 2020 • Jun Qi, Jun Du, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee

In this paper, we exploit the properties of mean absolute error (MAE) as a loss function for the deep neural network (DNN) based vector-to-vector regression.

Regression, Speech Enhancement

Variational Inference-Based Dropout in Recurrent Neural Networks for Slot Filling in Spoken Language Understanding

no code implementations • 23 Aug 2020 • Jun Qi, Xu Liu, Javier Tejedor

This paper proposes to generalize the variational recurrent neural network (RNN) with variational inference (VI)-based dropout regularization employed for the long short-term memory (LSTM) cells to more advanced RNN architectures like gated recurrent unit (GRU) and bi-directional LSTM/GRU.

Slot Filling +2
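
The defining trait of VI-based (variational) dropout for recurrent cells is that one dropout mask is sampled per sequence and reused at every timestep, instead of being resampled each step. A minimal NumPy sketch of that mechanism (the dimensions, rate, and `variational_dropout_mask` helper are illustrative, not the paper's configuration):

```python
import numpy as np

def variational_dropout_mask(hidden_dim, p, rng):
    """Sample ONE inverted-dropout mask for a whole sequence.
    Standard dropout would resample this at every timestep."""
    keep = (rng.random(hidden_dim) >= p).astype(float)
    return keep / (1.0 - p)   # inverted-dropout scaling

rng = np.random.default_rng(0)
hidden_dim, p, T = 8, 0.5, 5
mask = variational_dropout_mask(hidden_dim, p, rng)

h = np.ones(hidden_dim)
for t in range(T):            # the SAME mask is applied at each timestep
    h = np.tanh(mask * h)     # stand-in for the recurrent cell update
```

Because the mask is fixed across time, a dropped hidden unit stays dropped for the entire sequence, which is what makes the scheme a valid variational approximation for LSTM/GRU/bi-directional variants.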

MFL_COVID19: Quantifying Country-based Factors affecting Case Fatality Rate in Early Phase of COVID-19 Epidemic via Regularised Multi-task Feature Learning

no code implementations • 6 Sep 2020 • Po Yang, Jun Qi, Xulong Wang, Yun Yang

The fused sparse group Lasso (FSGL) method allows the simultaneous selection of a common set of country-based factors for multiple time points of COVID-19 epidemic and also enables incorporating temporal smoothness of each factor over the whole early phase period.

Feature Selection, Multi-Task Learning +1
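
Schematically, a fused sparse group Lasso objective of the kind described combines a data-fitting loss with three penalties (the exact weights and grouping are the paper's choices; this is only the general form):

```latex
\min_{W}\; \mathcal{L}(W)
  + \lambda_1 \|W\|_1
  + \lambda_2 \sum_{g} \|W_g\|_2
  + \lambda_3 \sum_{t=1}^{T-1} \|w_{t+1} - w_t\|_1
```

Here the $\ell_1$ term induces sparsity, the group term selects a common set of country-based factors across tasks, and the fused term enforces temporal smoothness of each factor over the $T$ time points of the early epidemic phase.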

QTN-VQC: An End-to-End Learning framework for Quantum Neural Networks

no code implementations • 6 Oct 2021 • Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen

The advent of noisy intermediate-scale quantum (NISQ) computers raises a crucial challenge to design quantum neural networks for fully quantum learning tasks.

Classical-to-Quantum Transfer Learning for Spoken Command Recognition Based on Quantum Neural Networks

no code implementations • 17 Oct 2021 • Jun Qi, Javier Tejedor

Our QNN-based SCR system is composed of classical and quantum components: (1) the classical part mainly relies on a 1D convolutional neural network (CNN) to extract speech features; (2) the quantum part is built upon the variational quantum circuit with a few learnable parameters.

Spoken Command Recognition Transfer Learning

Exploiting Hybrid Models of Tensor-Train Networks for Spoken Command Recognition

no code implementations • 11 Jan 2022 • Jun Qi, Javier Tejedor

Our command recognition system, namely CNN+(TT-DNN), is composed of convolutional layers at the bottom for spectral feature extraction and TT layers at the top for command classification.

Spoken Command Recognition

When BERT Meets Quantum Temporal Convolution Learning for Text Classification in Heterogeneous Computing

no code implementations • 17 Feb 2022 • Chao-Han Huck Yang, Jun Qi, Samuel Yen-Chi Chen, Yu Tsao, Pin-Yu Chen

Our experiments on intent classification show that our proposed BERT-QTC model attains competitive experimental results on the Snips and ATIS spoken language datasets.

Intent Classification +4

Federated Quantum Natural Gradient Descent for Quantum Federated Learning

no code implementations • 15 Aug 2022 • Jun Qi

Quantum Federated Learning (QFL) centers on a distributed learning architecture spanning several local quantum devices; a more efficient training algorithm for QFL is expected to minimize the communication overhead among the different quantum participants.

Federated Learning

An Ensemble Teacher-Student Learning Approach with Poisson Sub-sampling to Differential Privacy Preserving Speech Recognition

no code implementations • 12 Oct 2022 • Chao-Han Huck Yang, Jun Qi, Sabato Marco Siniscalchi, Chin-Hui Lee

We propose an ensemble learning framework with Poisson sub-sampling to effectively train a collection of teacher models to issue some differential privacy (DP) guarantee for training data.

Ensemble Learning, Privacy Preserving +3
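
Poisson sub-sampling, the ingredient that drives the DP guarantee here, simply includes each training example in a batch independently with probability q, so the batch size itself is random. A minimal sketch (the dataset size and rate below are illustrative):

```python
import numpy as np

def poisson_subsample(n_examples, q, rng):
    """Include each example independently with probability q.
    This is the sub-sampling scheme used in privacy amplification
    arguments for differentially private training."""
    return np.flatnonzero(rng.random(n_examples) < q)

rng = np.random.default_rng(42)
batch = poisson_subsample(1000, 0.01, rng)
print(len(batch))  # batch size is random, distributed Binomial(1000, 0.01)
```

Each teacher model in the ensemble would be trained on batches drawn this way, and the randomness of inclusion is what lets a DP accountant certify the privacy budget of the released student.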

Optimizing Quantum Federated Learning Based on Federated Quantum Natural Gradient Descent

no code implementations • 27 Feb 2023 • Jun Qi, Xiao-Lei Zhang, Javier Tejedor

In this work, we propose an efficient optimization algorithm, namely federated quantum natural gradient descent (FQNGD), and further apply it to a QFL framework composed of variational quantum circuit (VQC)-based quantum neural networks (QNNs).

Federated Learning
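
The federated half of FQNGD follows the usual pattern: each quantum client computes a local gradient, the server aggregates them (weighted by local dataset size), and the shared VQC parameters take one descent step. A sketch of that aggregation step only — FQNGD additionally preconditions each gradient with quantum Fisher information, which is omitted here, and `federated_update` and all sizes are hypothetical:

```python
import numpy as np

def federated_update(global_params, client_grads, client_sizes, lr=0.1):
    """One federated descent step: average the client gradients weighted
    by local dataset size, then update the shared parameters once."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    avg = sum(w * g for w, g in zip(weights, client_grads))
    return global_params - lr * avg

params = np.zeros(3)
grads = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
new = federated_update(params, grads, client_sizes=[3, 1])
# each coordinate moves opposite the size-weighted average gradient
```

Sending one aggregated update per round, rather than per-example information, is what keeps the communication overhead among quantum participants low.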

Pre-training Tensor-Train Networks Facilitates Machine Learning with Variational Quantum Circuits

no code implementations • 18 May 2023 • Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen, Min-Hsiu Hsieh

Variational quantum circuit (VQC) is a promising approach for implementing quantum neural networks on noisy intermediate-scale quantum (NISQ) devices.

Spatio-Temporal Similarity Measure based Multi-Task Learning for Predicting Alzheimer's Disease Progression using MRI Data

no code implementations • 6 Nov 2023 • Xulong Wang, Yu Zhang, Menghui Zhou, Tong Liu, Jun Qi, Po Yang

The experimental results show that, compared with direct ROI-based learning, our proposed method is more effective in predicting disease progression.

Multi-Task Learning
