This technical report presents an overview of our solution submitted to the 2021 HACS Temporal Action Localization Challenge, covering both the Supervised Learning and Weakly-Supervised Learning tracks.
More importantly, we present a practical system workflow for deploying the model in web-scale retrieval.
Current work on pre-training for natural language generation pays little attention to the problem of exposure bias on downstream tasks.
Recently, pre-trained models have achieved state-of-the-art results on various language understanding tasks, indicating that pre-training on large-scale corpora may play a crucial role in natural language processing.
We present ERNIE (Enhanced Representation through kNowledge IntEgration), a novel language representation model enhanced by knowledge.