1 code implementation • Findings (EMNLP) 2021 • Jifan Chen, Eunsol Choi, Greg Durrett
To build robust question answering systems, we need the ability to verify whether answers to questions are truly correct, not just “good enough” in the context of imperfect QA datasets.
no code implementations • NAACL (DADC) 2022 • Venelin Kovatchev, Trina Chatterjee, Venkata S Govindarajan, Jifan Chen, Eunsol Choi, Gabriella Chronis, Anubrata Das, Katrin Erk, Matthew Lease, Junyi Jessy Li, Yating Wu, Kyle Mahowald
Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability.
no code implementations • NAACL (TeachingNLP) 2021 • Greg Durrett, Jifan Chen, Shrey Desai, Tanya Goyal, Lucas Kabela, Yasumasa Onoe, Jiacheng Xu
We present a series of programming assignments, adaptable to a range of experience levels from advanced undergraduate to PhD, to teach students design and implementation of modern NLP systems.
1 code implementation • 31 Jul 2024 • Zhengxuan Wu, Yuhao Zhang, Peng Qi, Yumo Xu, Rujun Han, Yian Zhang, Jifan Chen, Bonan Min, Zhiheng Huang
Surprisingly, we find that less is more, as training ReSet with high-quality, yet substantially smaller data (three-fold less) yields superior results.
1 code implementation • 24 May 2023 • Manya Wadhwa, Jifan Chen, Junyi Jessy Li, Greg Durrett
These scores should reflect the annotators' underlying assessments of the example.
1 code implementation • 19 May 2023 • Jifan Chen, Grace Kim, Aniruddh Sriram, Greg Durrett, Eunsol Choi
Evidence retrieval is a core part of automatic fact-checking.
no code implementations • 17 Dec 2022 • Jifan Chen, Yuhao Zhang, Lan Liu, Rui Dong, Xinchi Chen, Patrick Ng, William Yang Wang, Zhiheng Huang
There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022).
no code implementations • 14 May 2022 • Jifan Chen, Aniruddh Sriram, Eunsol Choi, Greg Durrett
Verifying complex political claims is a challenging task, especially when politicians use various tactics to subtly misrepresent the facts.
no code implementations • NAACL 2021 • Jifan Chen, Greg Durrett
Current textual question answering models achieve strong performance on in-domain test sets, but often do so by fitting surface-level patterns in the data, so they fail to generalize to out-of-distribution settings.
3 code implementations • 7 Oct 2019 • Jifan Chen, Shih-ting Lin, Greg Durrett
Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way.
Ranked #4 on Question Answering on WikiHop
no code implementations • NAACL 2019 • Jifan Chen, Greg Durrett
First, we explore sentence-factored models for these tasks; by design, these models cannot do multi-hop reasoning, but they are still able to solve a large number of examples in both datasets.
no code implementations • 20 Aug 2016 • Jifan Chen, Kan Chen, Xipeng Qiu, Qi Zhang, Xuanjing Huang, Zheng Zhang
To prove the effectiveness of our model, we evaluate it on four tasks, including word similarity, reverse dictionaries, Wiki link prediction, and document classification.