Search Results for author: Lan Liu

Found 3 papers, 1 paper with code

Tokenization Consistency Matters for Generative Models on Extractive NLP Tasks

1 code implementation • 19 Dec 2022 • Kaiser Sun, Peng Qi, Yuhao Zhang, Lan Liu, William Yang Wang, Zhiheng Huang

We show that, with consistent tokenization, the model performs better on both in-domain and out-of-domain datasets, with a notable average of +1.7 F2 gain when a BART model is trained on SQuAD and evaluated on 8 QA datasets.

Extractive Question-Answering • Hallucination • +1
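The inconsistency this paper targets can be illustrated with a toy stand-in for a BPE tokenizer (this is not the paper's code; the vocabulary and the "Ġ" leading-space marker merely mimic GPT-2/BART-style tokenization): tokenizing an extracted answer on its own drops the leading space it had inside the context, yielding different tokens than the same span gets when the full input is tokenized. Slicing the answer's tokens out of the context's tokenization keeps the supervision consistent.

```python
def tokenize(text):
    """Toy word-level stand-in for a BPE tokenizer: every word except the
    first gets a 'Ġ' leading-space marker, mimicking how GPT-2/BART BPE
    encodes the space before a word into the token itself."""
    return [w if k == 0 else "Ġ" + w for k, w in enumerate(text.split())]

context = "i live in new york"
answer = "new york"

# Inconsistent: tokenizing the answer standalone loses its leading space,
# so its first token ('new') differs from the in-context token ('Ġnew').
standalone = tokenize(answer)            # ['new', 'Ġyork']

# Consistent: tokenize the full context once, then slice out the answer span.
ctx_tokens = tokenize(context)           # ['i', 'Ġlive', 'Ġin', 'Ġnew', 'Ġyork']
start = ctx_tokens.index("Ġnew")
consistent = ctx_tokens[start:start + 2] # ['Ġnew', 'Ġyork']

assert standalone != consistent          # the mismatch the paper fixes
```

Training a generative extractive-QA model on the `standalone` targets asks it to produce token sequences it never sees in its input; the `consistent` slice avoids that mismatch.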

Improving Cross-task Generalization of Unified Table-to-text Models with Compositional Task Configurations

no code implementations • 17 Dec 2022 • Jifan Chen, Yuhao Zhang, Lan Liu, Rui Dong, Xinchi Chen, Patrick Ng, William Yang Wang, Zhiheng Huang

There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022).

Multi-Task Learning
