Search Results for author: Mayumi Ohta

Found 5 papers, 4 papers with code

JoeyS2T: Minimalistic Speech-to-Text Modeling with JoeyNMT

2 code implementations • 5 Oct 2022 • Mayumi Ohta, Julia Kreutzer, Stefan Riezler

JoeyS2T is a JoeyNMT extension for speech-to-text tasks such as automatic speech recognition and end-to-end speech translation.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) +4

On-the-Fly Aligned Data Augmentation for Sequence-to-Sequence ASR

1 code implementation • 3 Apr 2021 • Tsz Kin Lam, Mayumi Ohta, Shigehiko Schamoni, Stefan Riezler

Our method, called Aligned Data Augmentation (ADA) for ASR, replaces transcribed tokens and the speech representations in an aligned manner to generate previously unseen training pairs.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) +3
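
The abstract above only sketches the high-level idea; below is a minimal, hypothetical illustration of an aligned token-and-frame swap between two utterances. The function name, data layout, and alignment format are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def aligned_swap(feats_a, tokens_a, spans_a,
                     feats_b, tokens_b, spans_b,
                     idx_a, idx_b):
        # feats_*  : (T, D) numpy arrays of acoustic features
        # tokens_* : lists of transcript tokens
        # spans_*  : per-token (start_frame, end_frame) alignments
        a_start, a_end = spans_a[idx_a]
        b_start, b_end = spans_b[idx_b]
        # splice the frames aligned to token idx_b from utterance B into A,
        # replacing the frames aligned to token idx_a
        new_feats = np.concatenate(
            [feats_a[:a_start], feats_b[b_start:b_end], feats_a[a_end:]],
            axis=0)
        # apply the same substitution to the transcript, keeping speech
        # and text consistent with each other
        new_tokens = tokens_a[:idx_a] + [tokens_b[idx_b]] + tokens_a[idx_a + 1:]
        return new_feats, new_tokens

Because the swap modifies the features and the transcript at the same aligned position, the resulting pair remains a plausible (speech, text) training example rather than a mismatched one.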

Sparse Perturbations for Improved Convergence in Stochastic Zeroth-Order Optimization

1 code implementation • 2 Jun 2020 • Mayumi Ohta, Nathaniel Berger, Artem Sokolov, Stefan Riezler

Interest in stochastic zeroth-order (SZO) methods has recently been revived in black-box optimization scenarios such as adversarial black-box attacks on deep neural networks.
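
As a rough illustration of the sparse-perturbation idea named in the title, the snippet below sketches a two-point zeroth-order update that perturbs only a small random subset of coordinates. The function name, step sizes, and sampling scheme are illustrative assumptions, not the published algorithm.

    import numpy as np

    def sparse_szo_step(f, w, k=10, mu=1e-3, lr=0.1, rng=None):
        # one zeroth-order update that perturbs only k randomly chosen
        # coordinates of w (a sparse variant of the two-point estimator)
        rng = rng or np.random.default_rng()
        d = w.shape[0]
        idx = rng.choice(d, size=k, replace=False)
        u = np.zeros(d)
        u[idx] = rng.standard_normal(k)              # sparse random direction
        # finite-difference estimate of the directional derivative along u
        g = (f(w + mu * u) - f(w - mu * u)) / (2.0 * mu) * u
        return w - lr * g

    # example: minimize a simple quadratic in 100 dimensions
    w = np.ones(100)
    for _ in range(200):
        w = sparse_szo_step(lambda v: np.sum(v ** 2), w, k=5)

Restricting the perturbation to few coordinates keeps each update sparse, which is the kind of structure the paper exploits to improve convergence in high dimensions.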

Sparse Stochastic Zeroth-Order Optimization with an Application to Bandit Structured Prediction

no code implementations • 12 Jun 2018 • Artem Sokolov, Julian Hitschler, Mayumi Ohta, Stefan Riezler

Stochastic zeroth-order (SZO), or gradient-free, optimization makes it possible to optimize arbitrary functions by relying only on function evaluations under parameter perturbations; however, the iteration complexity of SZO methods suffers from a factor proportional to the dimensionality of the perturbed function.

Structured Prediction
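
For context on the dimensionality factor mentioned in the abstract, the standard two-point SZO gradient estimator (in the style of Nesterov and Spokoiny) can be written as follows; the exact estimator and constants used in the paper may differ.

    \hat{g}_\mu(w) \;=\; \frac{f(w + \mu u) - f(w - \mu u)}{2\mu}\, u,
    \qquad u \sim \mathcal{N}(0, I_d)

The variance of this estimate grows with the dimension d of the perturbed parameter vector, which is the source of the dimension-dependent factor in the iteration complexity.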
