Search Results for author: Chen-Chou Lo

Found 8 papers, 6 papers with code

RCDPT: Radar-Camera Fusion Dense Prediction Transformer

1 code implementation • 4 Nov 2022 • Chen-Chou Lo, Patrick Vandewalle

Instead of using readout tokens, radar representations contribute additional depth information to a monocular depth estimation model and improve performance.

Monocular Depth Estimation

How Much Depth Information can Radar Contribute to a Depth Estimation Model?

no code implementations • 26 Feb 2022 • Chen-Chou Lo, Patrick Vandewalle

In the supervision experiment, a monocular depth estimation model is trained under radar supervision to show the intrinsic depth information that radar can contribute.

Autonomous Driving · Monocular Depth Estimation

Depth Estimation from Monocular Images and Sparse Radar using Deep Ordinal Regression Network

1 code implementation • 15 Jul 2021 • Chen-Chou Lo, Patrick Vandewalle

We integrate sparse radar data into a monocular depth estimation model and introduce a novel preprocessing method for reducing the sparseness and limited field of view provided by radar.
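One common way to reduce radar sparseness before feeding it to a depth network is to extend each radar return over a small image neighborhood; the sketch below illustrates that idea in NumPy. The window sizes and the nearest-depth tie-breaking rule are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

def densify_radar(depth, height_ext=8, width_ext=2):
    """Expand each sparse radar depth point over a small vertical/horizontal
    window, keeping the nearest (smallest) depth where windows overlap.
    `height_ext` and `width_ext` are hypothetical parameters; the paper's
    actual preprocessing may differ.
    """
    h, w = depth.shape
    dense = np.full_like(depth, np.inf)
    ys, xs = np.nonzero(depth)  # pixels with a radar return
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - height_ext), min(h, y + height_ext + 1)
        x0, x1 = max(0, x - width_ext), min(w, x + width_ext + 1)
        # closer returns win where extended windows overlap
        dense[y0:y1, x0:x1] = np.minimum(dense[y0:y1, x0:x1], depth[y, x])
    dense[np.isinf(dense)] = 0.0  # pixels never touched stay empty
    return dense
```

A taller-than-wide window reflects that automotive radar has almost no height resolution, so a single return plausibly covers a vertical extent of the scene.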

Monocular Depth Estimation · regression

Lite Audio-Visual Speech Enhancement

1 code implementation • 24 May 2020 • Shang-Yi Chuang, Yu Tsao, Chen-Chou Lo, Hsin-Min Wang

Previous studies have confirmed the effectiveness of incorporating visual information into speech enhancement (SE) systems.

Data Compression · Denoising +1

Unsupervised Representation Disentanglement using Cross Domain Features and Adversarial Learning in Variational Autoencoder based Voice Conversion

1 code implementation • 22 Jan 2020 • Wen-Chin Huang, Hao Luo, Hsin-Te Hwang, Chen-Chou Lo, Yu-Huai Peng, Yu Tsao, Hsin-Min Wang

In this paper, we extend the CDVAE-VC framework by incorporating the concept of adversarial learning, in order to further increase the degree of disentanglement, thereby improving the quality and similarity of converted speech.

Disentanglement · Voice Conversion

MOSNet: Deep Learning based Objective Assessment for Voice Conversion

6 code implementations • 17 Apr 2019 • Chen-Chou Lo, Szu-Wei Fu, Wen-Chin Huang, Xin Wang, Junichi Yamagishi, Yu Tsao, Hsin-Min Wang

In this paper, we propose deep learning-based assessment models to predict human ratings of converted speech.
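MOSNet scores speech at the frame level and averages those scores into an utterance-level rating. The toy sketch below shows only that aggregation step; the `frame_scorer` stand-in is a hypothetical placeholder for the paper's learned CNN-BLSTM model, not its actual architecture.

```python
import numpy as np

def utterance_mos(frame_features, frame_scorer):
    """Score each frame independently, then average the frame scores
    into one utterance-level MOS prediction (MOSNet-style aggregation).
    `frame_scorer` is a hypothetical stand-in for a trained model.
    """
    frame_scores = np.array([frame_scorer(f) for f in frame_features])
    return float(frame_scores.mean())

# Toy scorer: a random linear projection clipped into the 1-5 MOS range.
rng = np.random.default_rng(0)
w = rng.normal(size=257)  # e.g. one weight per spectral bin
scorer = lambda f: float(np.clip(f @ w * 0.01 + 3.0, 1.0, 5.0))
```

Averaging frame scores (rather than predicting one number per utterance directly) lets a model be supervised at both the frame and utterance level, which the paper reports improves correlation with human ratings.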

Voice Conversion
