BURT: BERT-inspired Universal Representation from Twin Structure

29 Apr 2020 · Yian Li, Hai Zhao

Pre-trained contextualized language models such as BERT have shown great effectiveness in a wide range of downstream Natural Language Processing (NLP) tasks. However, the effective representations these models offer target each token inside a sequence rather than the sequence as a whole, and the fine-tuning step feeds both sequences in as a single joint input, leading to unsatisfying representations of sequences at different granularities...
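The title refers to a twin (Siamese) structure in which each sequence is encoded independently into a fixed-size vector, rather than being fed jointly into a single encoder. As an illustration only, the sketch below shows one common way such a twin encoder can be built on a BERT-style backbone; the checkpoint name (bert-base-uncased), the mean-pooling step, and the use of the Hugging Face transformers API are assumptions for the example, not details taken from the paper.

```python
# Minimal sketch of a twin ("Siamese") sentence encoder, assuming a
# BERT-style backbone from the Hugging Face `transformers` library.
# This illustrates the twin-structure idea in the title; it is not the
# authors' implementation (no official code is available).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Encode each sequence independently, then mean-pool token vectors
    into a single fixed-size sequence embedding (pooling is an assumption)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)            # (B, H)

# Both sides of the "twin" share the same encoder weights; similarity is
# computed between the pooled embeddings rather than on a joint encoding.
a = embed(["A man is playing a guitar."])
b = embed(["Someone plays an instrument."])
cosine = torch.nn.functional.cosine_similarity(a, b)
```

Encoding each sequence separately like this yields a standalone vector per sequence, which is the property the abstract contrasts with BERT's joint-input fine-tuning.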
