1 code implementation • ACL 2021 • Bill Yuchen Lin, Seyeon Lee, Xiaoyang Qiao, Xiang Ren
In addition, we create two new datasets, X-CSQA and X-CODAH, by translating their English versions into 15 other languages, so that we can evaluate popular ML-LMs for cross-lingual commonsense reasoning.
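As a rough illustration of this cross-lingual evaluation setup, the sketch below scores one translated multiple-choice item with a multilingual LM. The checkpoint (`xlm-roberta-base`) and the example item are assumptions for illustration, not the paper's released data or models; in practice the model would be fine-tuned on the multiple-choice task first.

```python
# Hypothetical sketch: scoring a multiple-choice commonsense question with a
# multilingual LM. The checkpoint and example are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
# NOTE: loading the base checkpoint initializes an untrained choice head;
# a real evaluation would use a model fine-tuned on the training split.
model = AutoModelForMultipleChoice.from_pretrained("xlm-roberta-base")
model.eval()

def predict(question: str, choices: list) -> int:
    # Encode the question paired with each candidate answer.
    enc = tokenizer([question] * len(choices), choices,
                    return_tensors="pt", padding=True, truncation=True)
    # The multiple-choice head expects shape (batch, num_choices, seq_len).
    inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
    with torch.no_grad():
        logits = model(**inputs).logits  # (1, num_choices)
    return int(logits.argmax(dim=-1))

# Example: a French item translated from an English question (illustrative).
print(predict("Où trouve-t-on généralement un lit ?",
              ["cuisine", "chambre", "garage"]))
```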
no code implementations • ICLR 2021 • Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Xiang Ren
To augment PTLMs with common sense, we propose generative and contrastive objectives as intermediate self-supervised pre-training tasks between general pre-training and downstream task-specific fine-tuning.
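A minimal sketch of what one such contrastive intermediate objective could look like, assuming a simple word-shuffling perturbation and a margin loss; the paper's actual generative and contrastive tasks are defined over concepts and differ in detail.

```python
# Illustrative sketch only: a contrastive intermediate pre-training step in
# which the model must prefer an original sentence over a corrupted version.
# The perturbation and loss are simplifications, not the paper's objectives.
import random
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Single-logit head used as a plausibility scorer (randomly initialized here).
scorer = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)

def shuffle_words(sentence: str) -> str:
    # Crude distractor: permute the tokens so the sentence is implausible.
    words = sentence.split()
    random.shuffle(words)
    return " ".join(words)

def contrastive_loss(sentence: str) -> torch.Tensor:
    pair = [sentence, shuffle_words(sentence)]
    enc = tokenizer(pair, return_tensors="pt", padding=True, truncation=True)
    scores = scorer(**enc).logits.squeeze(-1)  # (2,) plausibility scores
    # Margin loss: the original sentence should outscore its corrupted twin.
    return F.relu(1.0 - (scores[0] - scores[1]))

loss = contrastive_loss("The chef sliced the onions before heating the pan.")
loss.backward()  # an optimizer step would follow in a real training loop
```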
1 code implementation • 24 Oct 2020 • Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Bill Yuchen Lin, Xiang Ren
Pre-trained language models (PTLMs) have achieved impressive results in a range of natural language understanding (NLU) and generation (NLG) tasks.
no code implementations • EMNLP 2020 • Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, Xiang Ren
Recent works show that pre-trained language models (PTLMs), such as BERT, possess certain commonsense and factual knowledge.
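A quick way to see this kind of knowledge probing in practice is to query a masked-LM head directly, as sketched below; the prompts are illustrative, not the paper's probe set.

```python
# Minimal probe: ask BERT's masked-LM head for commonsense/factual completions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for prompt in ["A bird usually has [MASK] legs.",
               "The capital of France is [MASK]."]:
    top = fill(prompt, top_k=3)
    print(prompt, "->", [(p["token_str"], round(p["score"], 3)) for p in top])
```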
no code implementations • EMNLP 2021 • Pei Zhou, Rahul Khanna, Seyeon Lee, Bill Yuchen Lin, Daniel Ho, Jay Pujara, Xiang Ren
Pre-trained language models (PTLMs) have achieved impressive performance on commonsense inference benchmarks, but their ability to employ commonsense for robust inference, which is crucial for effective communication with humans, is debated.
no code implementations • ACL 2020 • Dong-Ho Lee, Rahul Khanna, Bill Yuchen Lin, Jamin Chen, Seyeon Lee, Qinyuan Ye, Elizabeth Boschee, Leonardo Neves, Xiang Ren
Successfully training a deep neural network typically demands a large corpus of labeled data.