no code implementations • 22 May 2023 • Bohong Wu, Fei Yuan, Hai Zhao, Lei LI, Jingjing Xu
Considering that encoder-based models have the advantages of efficient generation and self-correction, this paper explores methods to endow multilingual understanding models with generation abilities, yielding a unified model.
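To make the "efficient generation and self-correction" claim concrete, below is a minimal mask-predict-style decoding sketch with an off-the-shelf masked language model: all positions are predicted in parallel, then the least-confident tokens are re-masked and revised over a few iterations. The model choice, the re-masking schedule, and the `mask_predict` helper are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: parallel generation + iterative self-correction
# with an understanding (MLM) model, in the spirit of mask-predict.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base").eval()

def mask_predict(length: int = 8, iterations: int = 4) -> str:
    # Start from an all-mask sequence and fill every slot in parallel.
    ids = torch.full((1, length), tok.mask_token_id)
    for t in range(iterations):
        with torch.no_grad():
            logits = model(input_ids=ids).logits
        probs, preds = logits.softmax(-1).max(-1)
        ids = preds
        # Self-correction: re-mask the least-confident tokens,
        # keeping fewer masks on each successive iteration.
        n_mask = int(length * (iterations - t - 1) / iterations)
        if n_mask > 0:
            worst = probs[0].argsort()[:n_mask]
            ids[0, worst] = tok.mask_token_id
    return tok.decode(ids[0])
```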
1 code implementation • 16 Oct 2022 • Bohong Wu, Hai Zhao
Though offering impressive contextualized token-level representations, current pre-trained language models pay less attention to accurately acquiring sentence-level representations during self-supervised pre-training.
no code implementations • 20 Apr 2022 • Bohong Wu, Hai Zhao
If self-supervised learning is divided into two subcategories, generative and contrastive, then most existing studies show that sentence representation learning benefits more from contrastive methods than from generative ones.
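As a one-glance illustration of the contrastive side of that dichotomy, here is a minimal InfoNCE-style objective over paired sentence embeddings, using in-batch negatives; this is a generic sketch of contrastive sentence representation learning, not this paper's proposed method.

```python
# Minimal contrastive (InfoNCE) loss for sentence embeddings; generic sketch.
import torch
import torch.nn.functional as F

def info_nce_loss(anchors: torch.Tensor, positives: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """anchors, positives: (batch, dim) embeddings of paired sentences.

    Each anchor's positive is its paired row; all other rows in the
    batch serve as in-batch negatives.
    """
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    # Cosine similarity between every anchor and every candidate.
    logits = anchors @ positives.t() / temperature  # (batch, batch)
    # The matching index on the diagonal is the positive pair.
    labels = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(logits, labels)
```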
no code implementations • ACL 2022 • Bohong Wu, Zhuosheng Zhang, JinYuan Wang, Hai Zhao
Specifically, we introduce an in-passage negative sampling strategy that encourages sentence representations from the same passage to remain diverse.
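A hedged sketch of what such a strategy might look like: each sentence paired with a second noisy encoding of itself forms the positive, while the other sentences of the same passage supply the negatives, pushing within-passage representations apart. The `encoder` interface and function name here are hypothetical.

```python
# Hypothetical sketch of in-passage negative sampling for contrastive
# sentence representation learning.
import torch
import torch.nn.functional as F

def in_passage_contrastive_loss(encoder, passage_sentences,
                                temperature: float = 0.05) -> torch.Tensor:
    """passage_sentences: list of raw sentence strings from one passage.

    Assumed interface: encoder(list_of_str) -> (n, dim) embeddings.
    Encoding twice (e.g., under dropout noise) yields two views of each
    sentence; the matching view is the positive.
    """
    z1 = F.normalize(encoder(passage_sentences), dim=-1)  # first view
    z2 = F.normalize(encoder(passage_sentences), dim=-1)  # second view
    logits = z1 @ z2.t() / temperature     # (n, n) similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # diagonal = positive
    # Off-diagonal entries are negatives drawn from the *same passage*,
    # rather than the usual in-batch negatives from unrelated passages.
    return F.cross_entropy(logits, labels)
```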
no code implementations • 25 Jul 2021 • Bohong Wu, Zhuosheng Zhang, Hai Zhao
Multi-hop reading comprehension (MHRC) requires not only predicting the correct answer span in the given passage, but also providing a chain of supporting evidence for reasoning interpretability.