Search Results for author: Jifan Chen

Found 15 papers, 5 papers with code

Contemporary NLP Modeling in Six Comprehensive Programming Assignments

no code implementations NAACL (TeachingNLP) 2021 Greg Durrett, Jifan Chen, Shrey Desai, Tanya Goyal, Lucas Kabela, Yasumasa Onoe, Jiacheng Xu

We present a series of programming assignments, adaptable to a range of experience levels from advanced undergraduate to PhD, to teach students design and implementation of modern NLP systems.

Can NLI Models Verify QA Systems’ Predictions?

1 code implementation Findings (EMNLP) 2021 Jifan Chen, Eunsol Choi, Greg Durrett

To build robust question answering systems, we need the ability to verify whether answers to questions are truly correct, not just “good enough” in the context of imperfect QA datasets.

Natural Language Inference, Question Answering, +1

Using Natural Language Explanations to Rescale Human Judgments

1 code implementation, 24 May 2023 Manya Wadhwa, Jifan Chen, Junyi Jessy Li, Greg Durrett

The rise of large language models (LLMs) has brought a critical need for high-quality human-labeled data, particularly for processes like human feedback and evaluation.

Question Answering

Improving Cross-task Generalization of Unified Table-to-text Models with Compositional Task Configurations

no code implementations, 17 Dec 2022 Jifan Chen, Yuhao Zhang, Lan Liu, Rui Dong, Xinchi Chen, Patrick Ng, William Yang Wang, Zhiheng Huang

There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022).

Multi-Task Learning

Generating Literal and Implied Subquestions to Fact-check Complex Claims

no code implementations, 14 May 2022 Jifan Chen, Aniruddh Sriram, Eunsol Choi, Greg Durrett

Verifying complex political claims is a challenging task, especially when politicians use various tactics to subtly misrepresent the facts.

Fact Checking

Robust Question Answering Through Sub-part Alignment

no code implementations NAACL 2021 Jifan Chen, Greg Durrett

Current textual question answering models achieve strong performance on in-domain test sets, but often do so by fitting surface-level patterns in the data, so they fail to generalize to out-of-distribution settings.

Question Answering

Multi-hop Question Answering via Reasoning Chains

3 code implementations, 7 Oct 2019 Jifan Chen, Shih-ting Lin, Greg Durrett

Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way.

Multi-hop Question Answering, Named Entity Recognition, +3

Understanding Dataset Design Choices for Multi-hop Reasoning

no code implementations NAACL 2019 Jifan Chen, Greg Durrett

First, we explore sentence-factored models for these tasks; by design, these models cannot do multi-hop reasoning, but they are still able to solve a large number of examples in both datasets.

Multi-hop Question Answering, Multiple-choice, +3

Learning Word Embeddings from Intrinsic and Extrinsic Views

no code implementations20 Aug 2016 Jifan Chen, Kan Chen, Xipeng Qiu, Qi Zhang, Xuanjing Huang, Zheng Zhang

To prove the effectiveness of our model, we evaluate it on four tasks, including word similarity, reverse dictionaries, Wiki link prediction, and document classification.

Descriptive, Document Classification, +4