Search Results for author: Junwen Bai

Found 15 papers, 7 papers with code

Xtal2DoS: Attention-based Crystal to Sequence Learning for Density of States Prediction

no code implementations · 3 Feb 2023 · Junwen Bai, Yuanqi Du, Yingheng Wang, Shufeng Kong, John Gregoire, Carla Gomes

Modern machine learning techniques have been extensively applied to materials science, especially for property prediction tasks.

Gaussian Mixture Variational Autoencoder with Contrastive Learning for Multi-Label Classification

1 code implementation · 2 Dec 2021 · Junwen Bai, Shufeng Kong, Carla P. Gomes

We find that by using contrastive learning in the supervised setting, we can exploit label information effectively in a data-driven manner, and learn meaningful feature and label embeddings which capture the label correlations and enhance the predictive power.

Contrastive Learning · Multi-Label Classification
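The abstract above describes exploiting label information contrastively. As a rough illustration only (not the paper's actual objective, which operates on multi-label VAE embeddings), here is a minimal sketch of a generic supervised contrastive loss, where each anchor pulls same-label embeddings together and pushes the rest apart; all names and the single-class labels are simplifying assumptions:

```python
import numpy as np

def supcon_loss(feats, labels, temp=0.5):
    """Generic supervised contrastive loss sketch: each anchor attracts
    embeddings that share its label and repels all other samples."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # unit vectors
    sim = f @ f.T / temp                                      # cosine / temperature
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    pos = (labels[:, None] == labels[None, :]) & ~eye         # same-label pairs
    logits = np.where(eye, -np.inf, sim)                      # exclude self-pairs
    logits = logits - logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / pos.sum(axis=1)
    return float(per_anchor.mean())

feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
aligned = supcon_loss(feats, np.array([0, 0, 1, 1]))     # positives are similar
mismatched = supcon_loss(feats, np.array([0, 1, 0, 1]))  # positives are dissimilar
```

When embeddings of same-label samples are close (`aligned`), the loss is lower than when they are far apart (`mismatched`), which is the sense in which such a loss shapes embeddings to "capture the label correlations".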

A GNN-RNN Approach for Harnessing Geospatial and Temporal Information: Application to Crop Yield Prediction

no code implementations · 17 Nov 2021 · Joshua Fan, Junwen Bai, Zhiyun Li, Ariel Ortiz-Bobea, Carla P. Gomes

To the best of our knowledge, this is the first machine learning method that embeds geographical knowledge in crop yield prediction and predicts crop yields at the county level nationwide.

BIG-bench Machine Learning · Crop Yield Prediction

Joint Unsupervised and Supervised Training for Multilingual ASR

no code implementations · 15 Nov 2021 · Junwen Bai, Bo Li, Yu Zhang, Ankur Bapna, Nikhil Siddhartha, Khe Chai Sim, Tara N. Sainath

Our average WER across all languages outperforms the average monolingual baseline by 33.3%, and the state-of-the-art two-stage XLSR by 32%.

Language Modelling · Masked Language Modeling · +3

Contrastively Disentangled Sequential Variational Autoencoder

1 code implementation · NeurIPS 2021 · Junwen Bai, Weiran Wang, Carla Gomes

We propose a novel sequence representation learning method, named Contrastively Disentangled Sequential Variational Autoencoder (C-DSVAE), to extract and separate the static (time-invariant) and dynamic (time-variant) factors in the latent space.

Representation Learning

Scaling End-to-End Models for Large-Scale Multilingual ASR

no code implementations · 30 Apr 2021 · Bo Li, Ruoming Pang, Tara N. Sainath, Anmol Gulati, Yu Zhang, James Qin, Parisa Haghani, W. Ronny Huang, Min Ma, Junwen Bai

Building ASR models across many languages is a challenging multi-task learning problem due to large variations and heavily unbalanced data.

Multi-Task Learning

HOT-VAE: Learning High-Order Label Correlation for Multi-Label Classification via Attention-Based Variational Autoencoders

no code implementations · 9 Mar 2021 · Wenting Zhao, Shufeng Kong, Junwen Bai, Daniel Fink, Carla Gomes

This in turn leads to a challenging and long-standing problem in computer science: how to perform accurate multi-label classification with hundreds of labels?

Multi-Label Classification

Representation Learning for Sequence Data with Deep Autoencoding Predictive Components

2 code implementations · ICLR 2021 · Junwen Bai, Weiran Wang, Yingbo Zhou, Caiming Xiong

We propose Deep Autoencoding Predictive Components (DAPC) -- a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) · +3

Disentangled Variational Autoencoder based Multi-Label Classification with Covariance-Aware Multivariate Probit Model

1 code implementation · 12 Jul 2020 · Junwen Bai, Shufeng Kong, Carla Gomes

The decoder of MPVAE takes in the samples from the embedding spaces and models the joint distribution of output targets under a Multivariate Probit model by learning a shared covariance matrix.

General Classification · Multi-Label Classification · +1
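The Multivariate Probit step described in the abstract can be illustrated with a small Monte Carlo sketch: each label fires when a correlated latent Gaussian sample is positive, and a shared covariance matrix couples the labels. The mean and covariance below are fixed, hypothetical stand-ins; in MPVAE the mean comes from decoded embeddings and the covariance matrix is learned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only: MPVAE learns the shared covariance and
# produces the mean from the sampled embedding-space representations.
n_labels = 4
A = rng.normal(size=(n_labels, n_labels))
shared_cov = A @ A.T + n_labels * np.eye(n_labels)  # symmetric positive definite

def probit_label_probs(mean, cov, n_samples=20000):
    """Monte Carlo estimate of P(label_j = 1) = P(z_j > 0), z ~ N(mean, cov).
    Off-diagonal covariance entries make the label indicators dependent."""
    z = rng.multivariate_normal(mean, cov, size=n_samples)
    return (z > 0).mean(axis=0)

mean = np.array([1.5, -1.5, 0.0, 0.5])
probs = probit_label_probs(mean, shared_cov)
```

A positive mean component pushes the corresponding label probability above 0.5 and a negative one below it, while the shared covariance lets co-occurring labels be predicted jointly rather than independently.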

SWALP: Stochastic Weight Averaging in Low-Precision Training

2 code implementations · 26 Apr 2019 · Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, Andrew Gordon Wilson, Christopher De Sa

Low precision operations can provide scalability, memory savings, portability, and energy efficiency.
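The idea of combining low-precision storage with weight averaging can be sketched on a toy problem. This is only a minimal illustration under assumed details (8-bit fixed point, stochastic rounding, a scalar quadratic objective), not the paper's deep-learning training setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(x, bits=8, scale=2.0 ** -4):
    """Stochastic rounding onto a fixed-point grid (low-precision storage)."""
    q = x / scale
    floor = np.floor(q)
    rounded = floor + (rng.random(x.shape) < (q - floor))  # unbiased rounding
    lim = 2.0 ** (bits - 1)
    return np.clip(rounded, -lim, lim - 1) * scale

# Toy low-precision SGD on f(w) = (w - 3)^2: the weights never leave the
# 8-bit grid, while the returned solution is a full-precision running
# average of the late iterates (the weight-averaging part).
w = np.array([0.0])
avg, n_avg = np.zeros(1), 0
lr = 0.1
for step in range(300):
    grad = 2.0 * (w - 3.0)           # exact gradient of the quadratic
    w = quantize(w - lr * grad)      # store the update in low precision
    if step >= 150:                  # average only the late, noisy iterates
        n_avg += 1
        avg += (w - avg) / n_avg
```

The quantized iterates keep bouncing around the optimum at the grid's resolution, but because stochastic rounding is unbiased, their running average lands much closer to the true minimizer, which is the intuition behind averaging in low-precision training.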

End-to-End Refinement Guided by Pre-trained Prototypical Classifier

1 code implementation · 7 May 2018 · Junwen Bai, Zihang Lai, Runzhe Yang, Yexiang Xue, John Gregoire, Carla Gomes

We propose imitation refinement, a novel approach to refine imperfect input patterns, guided by a pre-trained classifier incorporating prior knowledge from simulated theoretical data, such that the refined patterns imitate the ideal data.

Phase-Mapper: An AI Platform to Accelerate High Throughput Materials Discovery

1 code implementation · 3 Oct 2016 · Yexiang Xue, Junwen Bai, Ronan Le Bras, Brendan Rappazzo, Richard Bernstein, Johan Bjorck, Liane Longpre, Santosh K. Suram, Robert B. van Dover, John Gregoire, Carla P. Gomes

A key problem in materials discovery, the phase map identification problem, involves the determination of the crystal phase diagram from the materials' composition and structural characterization data.

