Search Results for author: Tyler Lee

Found 5 papers, 0 papers with code

When and how epochwise double descent happens

no code implementations • 26 Aug 2021 • Cory Stephenson, Tyler Lee

This model is based on the hypothesis that the training data contains features that are slow to learn but informative.

Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks

no code implementations • 26 Aug 2021 • Landan Seguin, Anthony Ndirango, Neeli Mishra, SueYeon Chung, Tyler Lee

Motivated by a recent study on learning robustness without input perturbations by distilling an AT model, we explore what is learned during adversarial training by analyzing the distribution of logits in AT models.

Tasks: Adversarial Robustness

Generalization in multitask deep neural classifiers: a statistical physics approach

no code implementations • NeurIPS 2019 • Tyler Lee, Anthony Ndirango

There has also been recent interest in extending these analyses to understand how multitask learning can further improve the generalization capacity of deep neural nets.

Label-efficient audio classification through multitask learning and self-supervision

no code implementations • ICLR Workshop LLD 2019 • Tyler Lee, Ting Gong, Suchismita Padhy, Andrew Rouditchenko, Anthony Ndirango

We demonstrate that, in scenarios with limited labeled training data, one can significantly improve the performance of three different supervised classification tasks individually by up to 6% through simultaneous training with these additional self-supervised tasks.

Tasks: Audio Classification • Classification • +3

Many-to-Many Voice Conversion with Out-of-Dataset Speaker Support

no code implementations • 30 Apr 2019 • Gokce Keskin, Tyler Lee, Cory Stephenson, Oguz H. Elibol

We present a Cycle-GAN based many-to-many voice conversion method that can convert between speakers that are not in the training set.

Tasks: Speaker Identification • Voice Conversion
