In this work, we first present a controlled experiment to investigate the effect of receptive fields for scale variation in object detection.
#2 best model for Object Detection on COCO test-dev
Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.
SOTA for Question Answering on SQuAD2.0
In this paper we present a framework that allows for a quick and flexible design of semi-automatic annotation pipelines.
This paper presents a study of semi-supervised learning with large convolutional networks.
The goal of RLCard is to bridge reinforcement learning and imperfect-information games, and to push forward research on reinforcement learning in domains with multiple agents, large state and action spaces, and sparse rewards.
We evaluate CharNet on three standard benchmarks, where it consistently outperforms the state-of-the-art approaches [25, 24] by a large margin, e.g., with improvements of 65.33%->71.08% (with generic lexicon) on ICDAR 2015, and 54.0%->69.23% on Total-Text, on end-to-end text recognition.
SOTA for Scene Text Detection on ICDAR 2015
Ancient history relies on disciplines such as epigraphy, the study of ancient inscribed texts, for evidence of the recorded past.
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks.
SOTA for Text Summarization using textRank on GigaWord (using extra training data)
We provide an open-source C++ library for real-time metric-semantic visual-inertial Simultaneous Localization And Mapping (SLAM).