Search Results for author: Javier Tejedor

Found 8 papers, 3 papers with code

Optimizing Quantum Federated Learning Based on Federated Quantum Natural Gradient Descent

no code implementations · 27 Feb 2023 · Jun Qi, Xiao-Lei Zhang, Javier Tejedor

In this work, we propose an efficient optimization algorithm, namely federated quantum natural gradient descent (FQNGD), and further apply it to a QFL framework composed of variational quantum circuit (VQC)-based quantum neural networks (QNNs).

Federated Learning
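The abstract describes aggregating clients' natural-gradient updates into a shared set of VQC parameters. Below is a minimal numpy sketch of one such federated round, assuming each client supplies its local gradient and an approximate (quantum) Fisher information matrix; all function names, the damping term, and the size-weighted averaging are illustrative choices, not the authors' exact update rule.

```python
import numpy as np

def natural_gradient(grad, fisher, damping=1e-3):
    """Precondition a plain gradient by an (approximate) Fisher
    information matrix, as in (quantum) natural gradient descent."""
    return np.linalg.solve(fisher + damping * np.eye(len(grad)), grad)

def fqngd_step(params, client_grads, client_fishers, client_sizes, lr=0.1):
    """One federated round (illustrative sketch): each client's natural
    gradient is averaged, weighted by local dataset size, then applied
    to the shared circuit parameters."""
    total = sum(client_sizes)
    agg = sum(n / total * natural_gradient(g, f)
              for g, f, n in zip(client_grads, client_fishers, client_sizes))
    return params - lr * agg

# toy example: two clients sharing three circuit parameters
rng = np.random.default_rng(0)
params = rng.normal(size=3)
grads = [rng.normal(size=3) for _ in range(2)]
fishers = [np.eye(3), 2 * np.eye(3)]
new_params = fqngd_step(params, grads, fishers, client_sizes=[100, 300])
```

In a real QFL setting the Fisher matrices would be estimated from the quantum circuits themselves; here they are stand-in identities to keep the sketch self-contained.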

Exploiting Hybrid Models of Tensor-Train Networks for Spoken Command Recognition

no code implementations · 11 Jan 2022 · Jun Qi, Javier Tejedor

Our command recognition system, namely CNN+(TT-DNN), is composed of convolutional layers at the bottom for spectral feature extraction and TT layers at the top for command classification.

Spoken Command Recognition
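The snippet above describes TT layers sitting on top of convolutional feature extractors. As a rough illustration of why TT layers compress a classifier head, the numpy sketch below builds a 16x16 "fully connected" weight matrix from two small tensor-train cores (shapes and ranks are arbitrary choices for the example; an efficient TT layer would contract the cores with the input rather than materialise the full matrix).

```python
import numpy as np

def tt_matrix(core1, core2):
    """Reconstruct a (m1*m2) x (n1*n2) weight matrix from two TT cores.
    core1: (m1, n1, r), core2: (r, m2, n2)."""
    m1, n1, r = core1.shape
    _, m2, n2 = core2.shape
    W = np.einsum('abr,rcd->acbd', core1, core2)  # (m1, m2, n1, n2)
    return W.reshape(m1 * m2, n1 * n2)

rng = np.random.default_rng(1)
g1 = rng.normal(size=(4, 4, 2))   # rank-2 TT cores
g2 = rng.normal(size=(2, 4, 4))
W = tt_matrix(g1, g2)             # 16x16 matrix from only 64 parameters
x = rng.normal(size=16)
y = W @ x                         # output of the TT "dense" layer
```

The two cores hold 2 * (4 * 4 * 2) = 64 numbers, versus 256 for the full 16x16 matrix, which is the kind of saving that motivates TT layers for on-device command recognition.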

Classical-to-Quantum Transfer Learning for Spoken Command Recognition Based on Quantum Neural Networks

no code implementations · 17 Oct 2021 · Jun Qi, Javier Tejedor

Our QNN-based SCR system is composed of classical and quantum components: (1) the classical part mainly relies on a 1D convolutional neural network (CNN) to extract speech features; (2) the quantum part is built upon the variational quantum circuit with a few learnable parameters.

Spoken Command Recognition · Transfer Learning

Variational Inference-Based Dropout in Recurrent Neural Networks for Slot Filling in Spoken Language Understanding

no code implementations · 23 Aug 2020 · Jun Qi, Xu Liu, Javier Tejedor

This paper proposes to generalize the variational recurrent neural network (RNN) with variational inference (VI)-based dropout regularization employed for the long short-term memory (LSTM) cells to more advanced RNN architectures like gated recurrent unit (GRU) and bi-directional LSTM/GRU.

Slot Filling +2
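The key property of VI-based (variational) dropout for recurrent networks is that the same dropout masks are sampled once per sequence and reused at every time step, rather than resampled per step. A minimal numpy sketch of that idea on a plain tanh RNN (the paper extends it to LSTM, GRU, and bi-directional variants; the weight shapes here are arbitrary):

```python
import numpy as np

def variational_dropout_rnn(xs, Wx, Wh, p=0.5, rng=None):
    """Tanh RNN with variational dropout: input and recurrent masks are
    drawn ONCE and applied identically at every time step (sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    h = np.zeros(Wh.shape[0])
    mx = (rng.random(xs.shape[1]) > p) / (1 - p)  # input mask, fixed over time
    mh = (rng.random(h.shape[0]) > p) / (1 - p)   # recurrent mask, fixed over time
    hs = []
    for x in xs:
        h = np.tanh(Wx @ (x * mx) + Wh @ (h * mh))
        hs.append(h)
    return np.stack(hs)

rng = np.random.default_rng(3)
xs = rng.normal(size=(5, 8))                      # 5 time steps, 8 features
Wx, Wh = rng.normal(size=(6, 8)), rng.normal(size=(6, 6))
hs = variational_dropout_rnn(xs, Wx, Wh, rng=rng)
```

Reusing one mask across time is what makes the scheme interpretable as variational inference over the weights, which is the generalization the paper pursues.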

Riemannian Stochastic Gradient Descent for Tensor-Train Recurrent Neural Networks

no code implementations · ICLR 2019 · Jun Qi, Chin-Hui Lee, Javier Tejedor

The Tensor-Train factorization (TTF) is an efficient way to compress large weight matrices of fully-connected layers and recurrent layers in recurrent neural networks (RNNs).

Machine Translation · Translation
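The snippet notes that TTF compresses large weight matrices. To make that concrete, the sketch below factorizes a matrix into two TT cores with a single SVD of a suitable reshaping (the two-core case of the standard TT-SVD procedure; the Kronecker-product test matrix is chosen because it is exactly TT-rank 1, so the factorization is lossless):

```python
import numpy as np

def tt_factorize(W, m1, n1, m2, n2, rank):
    """Split a (m1*m2) x (n1*n2) matrix into two TT cores via one SVD
    of the reshaped/permuted matrix (two-core TT-SVD)."""
    T = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3)   # (m1, n1, m2, n2)
    M = T.reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    core1 = (U[:, :rank] * s[:rank]).reshape(m1, n1, rank)
    core2 = Vt[:rank].reshape(rank, m2, n2)
    return core1, core2

rng = np.random.default_rng(2)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(4, 4))
W = np.kron(A, B)                     # Kronecker products have TT-rank 1
g1, g2 = tt_factorize(W, 3, 3, 4, 4, rank=1)
W_hat = np.einsum('abr,rcd->acbd', g1, g2).reshape(12, 12)
print(np.allclose(W, W_hat))          # prints True: rank-1 recovery is exact
```

For generic weight matrices the rank would be truncated, trading reconstruction error for compression; the Riemannian SGD of the paper then optimizes directly on the manifold of fixed-TT-rank tensors.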

Unsupervised Submodular Rank Aggregation on Score-based Permutations

1 code implementation · 4 Jul 2017 · Jun Qi, Xu Liu, Javier Tejedor, Shunsuke Kamijo

Unsupervised rank aggregation on score-based permutations, which is widely used in many applications, has not been deeply explored yet.

Automatic Speech Recognition (ASR) +4
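For readers new to rank aggregation on score-based permutations, here is a plain unsupervised baseline in numpy: a Borda-style count that merges several rankers' score lists into one consensus permutation. This is only a reference point for the problem setting, not the paper's submodular optimization method.

```python
import numpy as np

def borda_aggregate(score_lists):
    """Aggregate score-based permutations by a Borda count: each ranker
    awards points inversely to the rank it assigns (baseline sketch,
    not the paper's submodular approach)."""
    n = len(score_lists[0])
    points = np.zeros(n)
    for scores in score_lists:
        order = np.argsort(-np.asarray(scores))  # best item first
        for pts, item in enumerate(order[::-1]):
            points[item] += pts                  # worst gets 0, best n-1
    return np.argsort(-points)                   # consensus permutation

# three rankers scoring the same three items
rankers = [[0.9, 0.1, 0.5], [0.8, 0.2, 0.6], [0.7, 0.4, 0.3]]
print(borda_aggregate(rankers).tolist())         # → [0, 2, 1]
```

Item 0 is ranked first by every ranker, so it heads the consensus; items 1 and 2 are ordered by their accumulated points.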
