no code implementations • 3 Mar 2023 • Vikramjit Mitra, Vasudha Kowtha, Hsiang-Yun Sherry Chien, Erdrin Azemi, Carlos Avendano
We investigated the use of pre-trained model representations for estimating dimensional emotions, such as activation, valence, and dominance, from speech.
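The general recipe can be sketched as follows. This is a hedged illustration, not the paper's exact pipeline: the embeddings are synthetic stand-ins for a pretrained speech model's frame-level outputs, and the pooling and ridge-regression choices are assumptions.

```python
import numpy as np

# Hedged sketch: mean-pool frame-level embeddings from a (here, simulated)
# pretrained speech model into one vector per utterance, then fit a linear
# regressor per emotion dimension (e.g. valence). Ridge regression and
# mean pooling are illustrative assumptions, not the paper's method.

rng = np.random.default_rng(0)

def pool_embeddings(frames):
    """Mean-pool frame-level embeddings (T, D) into one utterance vector (D,)."""
    return frames.mean(axis=0)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic stand-ins for pretrained embeddings of 100 utterances,
# each with a variable number of frames and 16-dim features.
utts = [rng.standard_normal((rng.integers(50, 150), 16)) for _ in range(100)]
X = np.stack([pool_embeddings(u) for u in utts])
true_w = rng.standard_normal(16)
y = X @ true_w + 0.01 * rng.standard_normal(100)  # e.g. valence labels
w = ridge_fit(X, y, lam=0.1)
```

In practice the linear head would be replaced by a small trained network, and the pooling step is often attention-weighted rather than a plain mean.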
no code implementations • 27 Oct 2022 • Hsiang-Yun Sherry Chien, Hanlin Goh, Christopher M. Sandino, Joseph Y. Cheng
We propose a reconstruction-based self-supervised learning model, the masked auto-encoder for EEG (MAEEG), which learns EEG representations by reconstructing masked EEG features with a transformer architecture.
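The masked-reconstruction objective can be sketched minimally as follows. This is an illustration, not the paper's implementation: a plain linear identity stands in for the transformer encoder, and the segment-masking scheme is an assumption.

```python
import numpy as np

# Hedged sketch of a MAEEG-style objective: zero out random contiguous
# time segments of the EEG features, then score a model only on how well
# it reconstructs the masked portions. The "model" here is a trivial
# placeholder for the transformer described in the paper.

rng = np.random.default_rng(0)

def mask_segments(x, mask_ratio=0.5, seg_len=10):
    """Zero out random contiguous time segments; return masked signal and mask."""
    t, c = x.shape
    mask = np.zeros(t, dtype=bool)
    n_masked = int(t * mask_ratio)
    while mask.sum() < n_masked:
        start = rng.integers(0, t - seg_len)
        mask[start:start + seg_len] = True
    x_masked = x.copy()
    x_masked[mask] = 0.0
    return x_masked, mask

def reconstruction_loss(x, x_hat, mask):
    """MSE computed only over the masked time steps."""
    return float(np.mean((x[mask] - x_hat[mask]) ** 2))

eeg = rng.standard_normal((200, 8))   # 200 time steps, 8 channels
masked, m = mask_segments(eeg)
x_hat = masked                        # placeholder model: predicts zeros where masked
loss = reconstruction_loss(eeg, x_hat, m)
```

Restricting the loss to masked positions is what makes the task non-trivial: the model must infer the missing signal from surrounding context rather than copy its input.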
no code implementations • 2 Jul 2022 • Vikramjit Mitra, Hsiang-Yun Sherry Chien, Vasudha Kowtha, Joseph Yitan Cheng, Erdrin Azemi
We investigate the use of pre-trained model representations to improve valence estimation from the acoustic speech signal.
no code implementations • 12 May 2021 • Hsiang-Yun Sherry Chien, Javier S. Turek, Nicole Beckage, Vy A. Vo, Christopher J. Honey, Ted L. Willke
Altogether, we found that an LSTM with the proposed forget gate can learn long-term dependencies, outperforming other recurrent networks in multiple domains; this gating mechanism can be integrated into other architectures to improve the learning of long-timescale information in recurrent neural networks.
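One common way to bias an LSTM's forget gate toward a spectrum of timescales can be sketched as below. This is a hedged illustration of the idea (chrono-style bias initialization), not necessarily the exact gate proposed in the paper.

```python
import numpy as np

# Illustrative sketch (an assumption, not the paper's exact mechanism):
# with a constant forget-gate value f, a unit's memory decays as f**t,
# so its effective timescale is roughly tau = -1 / log(f). Chrono-style
# initialization inverts this relation, choosing per-unit biases so that
# sigmoid(bias) = 1 - 1/tau for a target spread of timescales.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def chrono_forget_bias(timescales):
    """Bias b such that sigmoid(b) = 1 - 1/tau for each target timescale tau."""
    taus = np.asarray(timescales, dtype=float)
    return np.log(taus - 1.0)

taus = [2.0, 10.0, 100.0]
biases = chrono_forget_bias(taus)
gates = sigmoid(biases)   # -> 0.5, 0.9, 0.99: longer timescales keep gates near 1
```

Units with gates near 1 retain information across many steps, while units with gates near 0.5 forget quickly, giving the network a built-in range of memory timescales.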
no code implementations • ICLR 2021 • Hsiang-Yun Sherry Chien, Jinhan Zhang, Christopher J. Honey
In summary, we demonstrated a model-free technique for mapping the timescale organization in recurrent neural networks, and we applied this method to reveal the timescale and functional organization of neural language models.
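One model-free way to characterize unit timescales can be sketched as follows. This is an illustration of the general idea, not necessarily the paper's exact procedure: each unit's timescale is estimated from how quickly its activation autocorrelation decays across lags, demonstrated here on synthetic AR(1) "units" with known fast and slow dynamics.

```python
import numpy as np

# Hedged sketch: estimate a per-unit timescale as the first time lag at
# which the unit's activation autocorrelation drops below 1/e. Applied
# to hidden-state trajectories, this orders units from fast to slow.

def unit_timescales(hidden, max_lag=50):
    """Per unit, return the first lag where autocorrelation falls below 1/e."""
    t, n = hidden.shape
    h = hidden - hidden.mean(axis=0)
    var = (h ** 2).mean(axis=0)
    taus = np.full(n, max_lag, dtype=float)
    for lag in range(1, max_lag):
        ac = (h[:-lag] * h[lag:]).mean(axis=0) / var
        newly = (ac < np.exp(-1)) & (taus == max_lag)
        taus[newly] = lag
    return taus

rng = np.random.default_rng(1)

def ar1(phi, t=2000):
    """Synthetic unit: AR(1) process with autocorrelation phi**lag."""
    x = np.zeros(t)
    for i in range(1, t):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    return x

# A fast unit (phi=0.5) and a slow unit (phi=0.95)
hidden = np.stack([ar1(0.5), ar1(0.95)], axis=1)
taus = unit_timescales(hidden)   # slow unit gets a much larger timescale
```

The same estimator can be run on the hidden states of a trained language model to reveal which units track short-range versus long-range structure.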