1 code implementation • EMNLP 2018 • Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Li-Wei Chen, Vadim Sheinin
Existing neural semantic parsers mainly utilize a sequence encoder, i.e., a sequential LSTM, to extract word-order features while neglecting other valuable syntactic information such as dependency graphs or constituency trees.
2 code implementations • 30 Oct 2018 • Li-Wei Chen, Hung-Yi Lee, Yu Tsao
This paper focuses on using voice conversion (VC) to improve the speech intelligibility of surgical patients who have had parts of their articulators removed.
no code implementations • 9 Nov 2018 • Li-Wei Chen, Yansong Feng, Songfang Huang, Bingfeng Luo, Dongyan Zhao
Relation extraction is the task of identifying predefined relationships between entities, and it plays an essential role in information extraction, knowledge base construction, and question answering.
1 code implementation • 29 Sep 2020 • Li-Wei Chen, Berkay Alp Cakal, Xiangyu Hu, Nils Thuerey
In the present study, U-net-based deep neural network (DNN) models are trained on high-fidelity datasets to infer flow fields and are then employed as surrogate models for the shape optimisation problem, i.e., finding a drag-minimal profile with a fixed cross-section area subject to a two-dimensional steady laminar flow.
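As a toy illustration of the surrogate-driven optimisation loop described above, a minimal sketch in Python. The drag surrogate here is a hypothetical one-parameter analytic stand-in, not the trained U-net from the paper, and the fixed-area constraint is satisfied by construction through the parameterisation:

```python
# Toy sketch of surrogate-based shape optimisation. The surrogate below is a
# hypothetical analytic stand-in (NOT the paper's U-net over flow fields):
# the profile is reduced to a single thickness ratio t, and drag is assumed
# to rise for very thin (friction-dominated) and very thick
# (pressure-dominated) shapes.

def surrogate_drag(t):
    """Stand-in drag prediction; minimum near t = 0.01 ** (1/3) ~ 0.215."""
    return 0.01 / t + 0.5 * t ** 2

def optimise(t0=0.5, lr=0.05, steps=200, eps=1e-6):
    """Gradient descent on the surrogate using central finite differences."""
    t = t0
    for _ in range(steps):
        grad = (surrogate_drag(t + eps) - surrogate_drag(t - eps)) / (2 * eps)
        t = max(t - lr * grad, 1e-3)  # keep the thickness physically positive
    return t

t_opt = optimise()
```

In the actual study the surrogate evaluation is a forward pass of the trained network over the candidate geometry, but the outer loop has this same shape: query the surrogate, estimate a gradient, update the shape parameters.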
no code implementations • 28 Dec 2020 • Li-Wei Chen, Wei-Chen Chiu, Chin-Tien Wu
We propose a spectral analysis to investigate the correlations among the resolution of the down-sampled grid, the loss function, and the accuracy of the SSNNs.
1 code implementation • 5 Sep 2021 • Li-Wei Chen, Nils Thuerey
The present study investigates the accurate inference of Reynolds-averaged Navier-Stokes solutions for the compressible flow over aerofoils in two dimensions with a deep neural network.
1 code implementation • 12 Oct 2021 • Li-Wei Chen, Alexander Rudnicky
While Wav2Vec 2.0 has been proposed for speech recognition (ASR), it can also be used for speech emotion recognition (SER); its performance can be significantly improved using different fine-tuning strategies.
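The fine-tuning strategies amount to choosing which parts of the pretrained stack the optimiser updates. A schematic, framework-agnostic sketch of one common choice, freezing the convolutional feature extractor while adapting the transformer and a task head; the module names and parameter counts below are illustrative placeholders, not the actual Wav2Vec 2.0 internals:

```python
# Schematic of partial fine-tuning for SER: freeze the low-level feature
# extractor, train the transformer encoder and an emotion classification
# head. Names and parameter counts are hypothetical placeholders.

model = {
    "cnn_feature_extractor": {"params": 4_200_000,  "trainable": False},
    "transformer_encoder":   {"params": 90_000_000, "trainable": True},
    "emotion_head":          {"params": 300_000,    "trainable": True},
}

def trainable_params(model):
    """Count the parameters the optimiser would actually update."""
    return sum(m["params"] for m in model.values() if m["trainable"])

n_trainable = trainable_params(model)
```

Other strategies in the same spirit include freezing the entire pretrained network and training only the head, or unfreezing everything with a small learning rate; in a real framework the freeze corresponds to disabling gradient updates for those parameter groups.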
1 code implementation • 12 Oct 2021 • Li-Wei Chen, Alexander Rudnicky
In this paper, we present a novel architecture to realize fine-grained style control on the transformer-based text-to-speech synthesis (TransformerTTS).
1 code implementation • 8 Feb 2022 • Georg Kohl, Li-Wei Chen, Nils Thuerey
Simulations that produce three-dimensional data are ubiquitous in science, ranging from fluid flows to plasma physics.
1 code implementation • 14 Feb 2022 • Björn List, Li-Wei Chen, Nils Thuerey
In this paper, we train turbulence models based on convolutional neural networks.
2 code implementations • 27 Oct 2022 • Li-Wei Chen, Yao-Fei Cheng, Hung-Shin Lee, Yu Tsao, Hsin-Min Wang
The lack of clean speech is a practical challenge for the development of speech enhancement systems, which means there is an inevitable mismatch between their training criterion and evaluation metric.
1 code implementation • 12 Nov 2022 • Li-Wei Chen, Shinji Watanabe, Alexander Rudnicky
To address these issues, we devise a cascaded modular system leveraging self-supervised discrete speech units as language representation.
1 code implementation • 8 Feb 2023 • Li-Wei Chen, Shinji Watanabe, Alexander Rudnicky
Recent Text-to-Speech (TTS) systems trained on reading or acted corpora have achieved near human-level naturalness.
1 code implementation • 14 Feb 2023 • Peter Wu, Li-Wei Chen, Cheol Jun Cho, Shinji Watanabe, Louis Goldstein, Alan W Black, Gopala K. Anumanchipalli
To build speech processing methods that can handle speech as naturally as humans, researchers have explored multiple ways of building an invertible mapping from speech to an interpretable space.
no code implementations • 23 May 2023 • Ta-Chung Chi, Ting-Han Fan, Li-Wei Chen, Alexander I. Rudnicky, Peter J. Ramadge
The use of positional embeddings in transformer language models is widely accepted.
1 code implementation • 4 Sep 2023 • Georg Kohl, Li-Wei Chen, Nils Thuerey
We find that even simple diffusion-based approaches can outperform multiple established flow prediction methods in terms of accuracy and temporal stability, while being on par with state-of-the-art stabilization techniques like unrolling at training time.
no code implementations • 5 Oct 2023 • Li-Wei Chen, Kai-Chen Cheng, Hung-Shin Lee
This report provides a concise overview of the proposed North system, which aims to achieve automatic word/syllable recognition for Taiwanese Hakka (Sixian).
no code implementations • 20 Feb 2024 • Bjoern List, Li-Wei Chen, Kartik Bali, Nils Thuerey
We also quantify a difference in the accuracy of models trained in a fully differentiable setup compared to their non-differentiable counterparts.