Search Results for author: Yangjun Ruan

Found 7 papers, 5 papers with code

Weighted Ensemble Self-Supervised Learning

no code implementations • 18 Nov 2022 • Yangjun Ruan, Saurabh Singh, Warren Morningstar, Alexander A. Alemi, Sergey Ioffe, Ian Fischer, Joshua V. Dillon

Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised learning.

Self-Supervised Learning

Augment with Care: Contrastive Learning for Combinatorial Problems

no code implementations • 17 Feb 2022 • Haonan Duan, Pashootan Vaezipoor, Max B. Paulus, Yangjun Ruan, Chris J. Maddison

While typical graph contrastive pre-training uses label-agnostic augmentations, our key insight is that many combinatorial problems have well-studied invariances, which allow for the design of label-preserving augmentations.
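As a concrete illustration of such an invariance (a minimal sketch, not code from the paper): renaming the variables of a CNF formula cannot change its satisfiability, so a randomly permuted formula is a label-preserving augmentation of the original.

```python
import random

def permute_variables(cnf, num_vars, rng=random):
    """Rename variables by a random permutation. Renaming cannot change
    satisfiability, so the label (SAT/UNSAT) is preserved.

    `cnf` is a list of clauses; each clause is a list of non-zero ints
    in DIMACS style (sign encodes polarity)."""
    perm = list(range(1, num_vars + 1))
    rng.shuffle(perm)  # perm[v - 1] is the new name of variable v
    return [[(1 if lit > 0 else -1) * perm[abs(lit) - 1] for lit in clause]
            for clause in cnf]

# (x1 or not x2) and (x2 or x3) -- satisfiable before and after.
formula = [[1, -2], [2, 3]]
print(permute_variables(formula, num_vars=3))
```

Any satisfying assignment of the original maps through the same permutation to a satisfying assignment of the augmented formula, which is why the label survives.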

Contrastive Learning

Optimal Representations for Covariate Shift

2 code implementations • ICLR 2022 • Yangjun Ruan, Yann Dubois, Chris J. Maddison

Machine learning systems often experience a distribution shift between training and testing.

Ranked #35 on Image Classification on ObjectNet (using extra training data)

Domain Generalization • Image Classification +1

Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding

1 code implementation • ICLR Workshop Neural Compression 2021 • Yangjun Ruan, Karen Ullrich, Daniel Severo, James Townsend, Ashish Khisti, Arnaud Doucet, Alireza Makhzani, Chris J. Maddison

Naively applied, our schemes would require more initial bits than the standard bits-back coder, but we show how to drastically reduce this additional cost with couplings in the latent space.
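For orientation, here is a toy sketch of the rate accounting in standard bits-back coding, where the initial-bits issue above comes from (textbook background, not the paper's Monte Carlo scheme): the coder decodes z from bits already in the stream using the approximate posterior q(z|x), then encodes x with p(x|z) and z with p(z), for a net cost of E_q[log q(z|x) - log p(x, z)] bits per symbol; the bits consumed to decode the very first z are the initial bits.

```python
import math

# Toy latent-variable model: z in {0, 1}, x in {0, 1}.
p_z = {0: 0.5, 1: 0.5}
p_x_given_z = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # p(x | z)
q_z_given_x = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}   # approx. posterior

def bits_back_rate(x):
    """Expected net bits to send x: E_q[-log2 p(x|z) - log2 p(z) + log2 q(z|x)].
    The +log2 q(z|x) term is the refund from decoding z out of the stream;
    it is also why some bits must already be there (the initial bits)."""
    return sum(q * (-math.log2(p_x_given_z[z][x])
                    - math.log2(p_z[z])
                    + math.log2(q))
               for z, q in q_z_given_x[x].items())

for x in (0, 1):
    p_x = sum(p_z[z] * p_x_given_z[z][x] for z in p_z)
    print(f"x={x}: {bits_back_rate(x):.3f} bits vs. -log2 p(x) = {-math.log2(p_x):.3f}")
```

The printed rate sits slightly above -log2 p(x); the gap is exactly KL(q(z|x) || p(z|x)), so a better posterior closes it.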

Data Compression

Learning to Learn by Zeroth-Order Oracle

1 code implementation • ICLR 2020 • Yangjun Ruan, Yuanhao Xiong, Sashank Reddi, Sanjiv Kumar, Cho-Jui Hsieh

In the learning to learn (L2L) framework, we cast the design of optimization algorithms as a machine learning problem and use deep neural networks to learn the update rules.
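Because a zeroth-order oracle returns only function values, the learned optimizer must work from gradient estimates rather than true gradients. A minimal sketch of the classical two-point Gaussian-smoothing estimator that such methods build on (standard background, not the paper's learned update rule):

```python
import numpy as np

def zeroth_order_grad(f, x, mu=1e-3, num_samples=100, rng=None):
    """Two-point gradient estimate from function values only:
    average (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random
    directions u ~ N(0, I)."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_samples

# Sanity check on f(x) = ||x||^2, whose true gradient is 2x.
x = np.array([1.0, -2.0, 0.5])
print(zeroth_order_grad(lambda v: float(v @ v), x))  # roughly [2, -4, 1]
```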

Adversarial Attack

FastSpeech: Fast, Robust and Controllable Text to Speech

10 code implementations • 22 May 2019 • Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, Tie-Yan Liu

Compared with traditional concatenative and statistical parametric approaches, neural network-based end-to-end models suffer from slow inference speed, and the synthesized speech is usually not robust (i.e., some words are skipped or repeated) and lacks controllability (voice speed or prosody control).
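The speed control mentioned above comes from FastSpeech's length regulator: each phoneme's hidden state is repeated according to a predicted duration before the decoder generates all mel frames in parallel. A minimal sketch (array shapes and names here are illustrative, not the paper's code):

```python
import numpy as np

def length_regulator(phoneme_hidden, durations, alpha=1.0):
    """Expand phoneme-level hidden states to mel-frame length.

    phoneme_hidden: (num_phonemes, hidden_dim) array.
    durations: predicted mel frames per phoneme.
    alpha: < 1 speeds speech up, > 1 slows it down."""
    scaled = np.maximum(np.round(np.asarray(durations) * alpha), 0).astype(int)
    # Repeat phoneme i's state scaled[i] times along the time axis.
    return np.repeat(phoneme_hidden, scaled, axis=0)

h = np.random.randn(3, 4)                                # 3 phonemes
print(length_regulator(h, [2, 3, 1]).shape)              # (6, 4)
print(length_regulator(h, [2, 3, 1], alpha=0.5).shape)   # (3, 4): faster speech
```

Scaling every duration by alpha < 1 shortens each phoneme and thus speeds the voice up uniformly; alpha > 1 slows it down.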

Text-To-Speech Synthesis
