1 code implementation • 23 Dec 2024 • Francis R. Willett, Jingyuan Li, Trung Le, Chaofei Fan, Mingfei Chen, Eli Shlizerman, Yue Chen, Xin Zheng, Tatsuo S. Okubo, Tyler Benster, Hyun Dong Lee, Maxwell Kounga, E. Kelly Buchanan, David Zoltowski, Scott W. Linderman, Jaimie M. Henderson
Speech brain-computer interfaces aim to decipher what a person is trying to say from neural activity alone, restoring communication to people with paralysis who have lost the ability to speak intelligibly.
1 code implementation • NeurIPS 2023 • Hyun Dong Lee, Andrew Warrington, Joshua I. Glaser, Scott W. Linderman
In contrast, SLDSs can capture long-range dependencies in a parameter-efficient way through Markovian latent dynamics, but they have an intractable likelihood and pose a challenging parameter-estimation task.
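The Markovian latent dynamics of a switching linear dynamical system (SLDS) can be sketched as a generative process: a discrete state evolves as a Markov chain, and each state selects a different linear dynamics matrix for the continuous latents. The parameters below (two states, 2-D latents, specific transition and dynamics matrices) are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D, T = 2, 2, 100          # discrete states, latent dimension, time steps
# Illustrative parameters (hypothetical, for the sketch only):
P = np.array([[0.95, 0.05],  # Markov transition matrix over discrete states
              [0.05, 0.95]])
A = np.stack([0.99 * np.eye(D),                        # state-0 dynamics: slow decay
              np.array([[0.9, -0.3], [0.3, 0.9]])])    # state-1 dynamics: damped rotation

z = np.zeros(T, dtype=int)   # discrete state sequence
x = np.zeros((T, D))         # continuous latent trajectory
x[0] = rng.normal(size=D)
for t in range(1, T):
    z[t] = rng.choice(K, p=P[z[t - 1]])                  # Markovian switching
    x[t] = A[z[t]] @ x[t - 1] + 0.1 * rng.normal(size=D)  # linear dynamics + noise
```

Because the discrete state is marginalized over exponentially many switching paths, the exact likelihood of observations generated from `x` is intractable, which is the estimation difficulty the snippet above alludes to.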
no code implementations • NeurIPS 2021 • Julien Boussard, Erdem Varol, Hyun Dong Lee, Nishchal Dethe, Liam Paninski
Neuropixels (NP) probes are dense linear multi-electrode arrays that have rapidly become essential tools for studying the electrophysiology of large neural populations.
no code implementations • 15 Oct 2021 • Hiun Kim, Jisu Jeong, Kyung-Min Kim, Dongjun Lee, Hyun Dong Lee, Dongpil Seo, Jeeseung Han, Dong Wook Park, Ji Ae Heo, Rak Yeong Kim
In this paper, we use a pretrained language model (PLM) that leverages textual attributes of web-scale products to make intent-based product collections.
no code implementations • 30 Sep 2020 • Hyun Dong Lee, Seongmin Lee, U Kang
How can we effectively regularize BERT?
no code implementations • 25 Sep 2019 • Chun Quan, Jun-Gi Jang, Hyun Dong Lee, U Kang
A promising direction is based on depthwise separable convolution which replaces a standard convolution with a depthwise convolution and a pointwise convolution.
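The replacement described above can be sketched in plain NumPy: a depthwise step convolves each input channel independently with its own k×k filter, and a pointwise (1×1) step mixes channels. This is a naive loop implementation for illustration, not the paper's code; a standard convolution would need C_out·C·k·k parameters, versus C·k·k + C_out·C here.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """x: (C, H, W) input; dw_kernels: (C, k, k), one filter per input channel;
    pw_weights: (C_out, C), a 1x1 convolution that mixes channels."""
    C, H, W = x.shape
    k = dw_kernels.shape[1]
    Ho, Wo = H - k + 1, W - k + 1
    # Depthwise: each channel is convolved independently (valid padding, stride 1).
    dw = np.zeros((C, Ho, Wo))
    for c in range(C):
        for i in range(Ho):
            for j in range(Wo):
                dw[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * dw_kernels[c])
    # Pointwise: 1x1 convolution combines channels at each spatial position.
    return np.einsum('oc,chw->ohw', pw_weights, dw)

# Tiny worked example: 2 channels of ones, 2x2 filters of ones, one output channel.
out = depthwise_separable_conv(np.ones((2, 3, 3)),
                               np.ones((2, 2, 2)),
                               np.ones((1, 2)))
# Each depthwise output entry is 4 (sum of a 2x2 patch of ones);
# the pointwise step sums the 2 channels, giving 8 everywhere.
```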
no code implementations • 25 Sep 2019 • Jun-Gi Jang, Chun Quan, Hyun Dong Lee, U Kang
By exploiting the knowledge of a trained standard model and carefully determining the order of depthwise separable convolution via GEP, FALCON achieves accuracy close to that of the trained standard model.