no code implementations • COLING 2022 • Sanghee J. Kim, Lang Yu, Allyson Ettinger
A critical component of linguistic competence is the ability to identify the relevant parts of an utterance and respond appropriately.
1 code implementation • 19 Dec 2023 • Lang Yu, Qin Chen, Jie Zhou, Liang He
Large language models (LLMs) have shown great success on various Natural Language Processing (NLP) tasks, yet they still need updates after deployment to fix errors or keep pace with changing knowledge in the world.
1 code implementation • 26 May 2023 • Jiaxuan Li, Lang Yu, Allyson Ettinger
Current pre-trained language models have enabled remarkable improvements in downstream tasks, but it remains difficult to distinguish the effects of statistical correlation from more systematic logical reasoning grounded in an understanding of the real world.
2 code implementations • Findings (ACL) 2021 • Lang Yu, Allyson Ettinger
Here we investigate the impact of fine-tuning on the capacity of contextualized embeddings to capture phrase meaning information beyond lexical content.
1 code implementation • EMNLP 2020 • Lang Yu, Allyson Ettinger
Deep transformer models have pushed performance on NLP tasks to new limits, suggesting sophisticated treatment of complex linguistic inputs, such as phrases.