no code implementations • 12 Sep 2024 • WooJin Chung, Jiwoo Hong, Na Min An, James Thorne, Se-Young Yun
Stable pre-training is essential for achieving better-performing language models.
2 code implementations • CoNLL 2018 • WooJin Chung, Sheng-Fu Wang, Samuel R. Bowman
Tree-structured neural network architectures for sentence encoding draw inspiration from the approach to semantic composition generally seen in formal linguistics and, in doing so, have shown empirical improvements over comparable sequence models.
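As a rough illustration of the idea in this abstract, the sketch below shows a generic TreeRNN-style encoder that builds a sentence vector by recursively composing constituent vectors over a binary parse tree. This is a minimal sketch of the general approach, not the model proposed in the paper; the embedding size, the tanh composition function, and the toy vocabulary are all assumptions.

```python
# Hedged sketch: generic tree-structured (TreeRNN-style) sentence encoding.
# Illustrates recursive semantic composition over a parse tree; this is NOT
# the paper's specific model. Dimensions and the tanh merge are assumptions.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # hypothetical embedding size

# Toy word embeddings (in practice these would be learned parameters).
vocab = {w: rng.normal(size=DIM) for w in ["the", "cat", "sat"]}

# One shared composition layer: merge two child vectors into a parent vector.
W = rng.normal(size=(DIM, 2 * DIM)) * 0.1
b = np.zeros(DIM)

def compose(left, right):
    """Combine two constituent vectors, mirroring semantic composition."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

def encode(tree):
    """Recursively encode a binary parse tree given as nested tuples."""
    if isinstance(tree, str):   # leaf: look up the word vector
        return vocab[tree]
    left, right = tree          # internal node: compose the two children
    return compose(encode(left), encode(right))

# Binary parse of "the cat sat": ((the cat) sat)
sentence_vec = encode((("the", "cat"), "sat"))
print(sentence_vec.shape)  # (8,)
```

Because the same composition function is applied at every internal node, the encoder follows the parse structure of the sentence rather than its left-to-right order, which is the contrast with sequence models that the abstract draws.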