Unlike the well-developed horizontal object detection area, where the computing-friendly IoU-based loss is readily adopted and fits well with the detection metrics, rotation detection still lacks a regression loss that is both easy to compute and consistent with the metric.
Ranked #1 on Object Detection In Aerial Images on DOTA
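Since this snippet contrasts rotated detection with the IoU-based loss used for horizontal boxes, the following is a minimal sketch of that loss for axis-aligned boxes. The (x1, y1, x2, y2) box format and the epsilon value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def iou_loss(pred, target, eps=1e-7):
    """1 - IoU for axis-aligned boxes in (x1, y1, x2, y2) format.

    Because the loss is a direct function of the overlap, optimising it
    aligns training with the IoU-based detection metric.
    """
    ix1, iy1 = np.maximum(pred[0], target[0]), np.maximum(pred[1], target[1])
    ix2, iy2 = np.minimum(pred[2], target[2]), np.minimum(pred[3], target[3])
    inter = max(ix2 - ix1, 0.0) * max(iy2 - iy1, 0.0)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_t = (target[2] - target[0]) * (target[3] - target[1])
    union = area_p + area_t - inter
    return 1.0 - inter / (union + eps)

print(iou_loss((0, 0, 4, 4), (2, 0, 6, 4)))  # overlap 8, union 24 -> loss ~0.667
```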
Adversarial training has been empirically proven to be one of the most effective and reliable defense methods against adversarial attacks.
In this work, we study the effect of memorization in adversarially trained DNNs and disclose two important findings: (a) memorizing atypical samples only improves the DNN's accuracy on clean atypical samples, but hardly improves their adversarial robustness, and (b) memorizing certain atypical samples can even hurt the DNN's performance on typical samples.
Taking the perspective that horizontal detection is a special case of rotated object detection, in this paper we are motivated to change the design of the rotation regression loss from an induction paradigm to a deduction methodology, in terms of the relation between rotated and horizontal detection.
Ranked #3 on Object Detection In Aerial Images on DOTA
Boundary discontinuity and its inconsistency with the final detection metric have been the bottleneck in designing regression losses for rotation detection.
Ranked #5 on Object Detection In Aerial Images on DOTA
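One deduction-style way to sidestep the angle boundary discontinuity mentioned above, used in this line of work, is to model a rotated box as a 2-D Gaussian and measure a distance between Gaussians. The sketch below uses a Wasserstein-style distance as the loss surrogate; the Gaussian conversion is standard, but this particular distance is an assumption for illustration and not necessarily the exact loss proposed in these papers.

```python
import numpy as np

def box_to_gaussian(box):
    """Convert a rotated box (cx, cy, w, h, angle_rad) to a 2-D Gaussian (mu, Sigma).

    The mean is the box centre and the covariance encodes extent and orientation,
    so boxes that differ only by the angle-periodicity ambiguity map to the same
    Gaussian -- this is what removes the boundary discontinuity.
    """
    cx, cy, w, h, theta = box
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = np.diag([w * w / 4.0, h * h / 4.0])
    return np.array([cx, cy]), R @ S @ R.T

def _sqrtm_2x2(M):
    """Matrix square root of a symmetric PSD 2x2 matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def gaussian_wasserstein_loss(pred_box, target_box):
    """Squared 2-Wasserstein distance between the two box Gaussians,
    usable as a regression-loss surrogate for rotated boxes."""
    mu_p, S_p = box_to_gaussian(pred_box)
    mu_t, S_t = box_to_gaussian(target_box)
    root_p = _sqrtm_2x2(S_p)
    cross = _sqrtm_2x2(root_p @ S_t @ root_p)
    return float(np.sum((mu_p - mu_t) ** 2) + np.trace(S_p + S_t - 2.0 * cross))

# A box and the same box with w/h swapped and the angle shifted by 90 degrees
# describe the same rectangle; the Gaussian form gives them (near-)zero distance.
b1 = (10.0, 10.0, 6.0, 2.0, 0.0)
b2 = (10.0, 10.0, 2.0, 6.0, np.pi / 2)
print(gaussian_wasserstein_loss(b1, b2))  # ~0
```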
Conventional deep image inpainting methods are based on an auto-encoder architecture, in which spatial details of the image are lost during down-sampling, degrading the generated results.
Rotation detection serves as a fundamental building block in many visual applications involving aerial images, scene text, faces, etc.
Ranked #14 on Object Detection In Aerial Images on DOTA
Extensive experiments on two real-world conversation datasets show that our framework significantly reduces gender bias in dialogue models while maintaining the response quality.
Extensive experiments conducted on three real-world datasets demonstrate the superiority of our framework in learning representations from limited data with crowdsourced labels, compared with various state-of-the-art baselines.
That is, the model learns to imitate the writing style of any given exemplar sentence, with automatic adaptations to faithfully describe the content record.
Neural text generation models such as recurrent networks are typically trained by maximizing the data log-likelihood, i.e., minimizing a cross-entropy loss.
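As a concrete illustration of this training objective, here is a minimal sketch of maximum-likelihood (cross-entropy) training for a recurrent text generation model in PyTorch; the vocabulary size, dimensions, and toy batch are illustrative assumptions rather than settings from the paper.

```python
import torch
import torch.nn as nn

# Toy model: embed tokens, run a GRU, project to vocabulary logits.
vocab_size, embed_dim, hidden_dim = 1000, 64, 128
embed = nn.Embedding(vocab_size, embed_dim)
rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
proj = nn.Linear(hidden_dim, vocab_size)
loss_fn = nn.CrossEntropyLoss()  # negative log-likelihood over the vocabulary
params = list(embed.parameters()) + list(rnn.parameters()) + list(proj.parameters())
optim = torch.optim.Adam(params, lr=1e-3)

tokens = torch.randint(0, vocab_size, (8, 21))   # toy batch of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # teacher forcing: predict next token

hidden_states, _ = rnn(embed(inputs))            # (batch, seq, hidden)
logits = proj(hidden_states)                     # (batch, seq, vocab)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optim.step()
```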
3 code implementations • Zhiting Hu, Haoran Shi, Bowen Tan, Wentao Wang, Zichao Yang, Tiancheng Zhao, Junxian He, Lianhui Qin, Di Wang, Xuezhe Ma, Zhengzhong Liu, Xiaodan Liang, Wangrong Zhu, Devendra Singh Sachan, Eric P. Xing
The versatile toolkit also fosters technique sharing across different text generation tasks.