1 code implementation • EMNLP 2021 • Taelin Karidi, Yichu Zhou, Nathan Schneider, Omri Abend, Vivek Srikumar
We present a method for exploring regions around individual points in a contextualized vector space (particularly, BERT space), as a way to investigate how these regions correspond to word senses.
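The kind of region probe this abstract describes can be illustrated with a minimal sketch: take a point in a contextualized vector space and inspect its nearest neighbors by cosine similarity. The toy random matrix below stands in for real BERT token vectors (which would be 768-dimensional and produced by a pretrained model); the function and data here are purely illustrative, not the paper's actual implementation.

```python
import numpy as np

def nearest_neighbors(vectors, query_idx, k=3):
    """Return indices of the k vectors closest (by cosine similarity)
    to vectors[query_idx], excluding the query point itself."""
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed[query_idx]        # cosine similarity to the query
    sims[query_idx] = -np.inf                # exclude the point itself
    return np.argsort(-sims)[:k]

# Toy stand-in for contextualized token vectors.
rng = np.random.default_rng(0)
space = rng.normal(size=(100, 16))
print(nearest_neighbors(space, query_idx=0, k=3))
```

In the paper's setting, the rows of `space` would instead be BERT embeddings of a word in different sentential contexts, and the neighborhood structure is examined for correspondence with word senses.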
1 code implementation • 2 Jun 2023 • Srikar Appalaraju, Peng Tang, Qi Dong, Nishant Sankaran, Yichu Zhou, R. Manmatha
We propose DocFormerv2, a multi-modal transformer for Visual Document Understanding (VDU).
Ranked #9 on Visual Question Answering (VQA) on DocVQA test (using extra training data)
no code implementations • 27 Oct 2021 • Zeyu You, Yichu Zhou, Tao Yang, Wei Fan
Anomaly detection or outlier detection is a common task in various domains, which has attracted significant research efforts in recent years.
1 code implementation • ACL 2022 • Yichu Zhou, Vivek Srikumar
Given the prevalence of pre-trained contextualized representations in today's NLP, there have been many efforts to understand what information they contain, and why they seem to be universally successful.
1 code implementation • NAACL 2021 • Yichu Zhou, Vivek Srikumar
Understanding how linguistic structures are encoded in contextualized embeddings could help explain their impressive performance across NLP.
no code implementations • 2 Sep 2020 • Yichu Zhou, Omri Koshorek, Vivek Srikumar, Jonathan Berant
Discourse parsing is largely dominated by greedy parsers with manually-designed features, while global parsing is rare due to its computational expense.
no code implementations • CoNLL 2019 • Omri Koshorek, Gabriel Stanovsky, Yichu Zhou, Vivek Srikumar, Jonathan Berant
We conclude that the current applicability of LTAL for improving data efficiency in learning semantic meaning representations is limited.
Learning Semantic Representations • Natural Language Understanding
no code implementations • SEMEVAL 2019 • Yichu Zhou, Vivek Srikumar
We define a new modeling framework for training word embeddings that captures this intuition.