To address this problem, we propose MTTrans, an end-to-end cross-domain detection Transformer based on the mean teacher framework, which fully exploits unlabeled target-domain data during object detection training and transfers knowledge between domains via pseudo labels.
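A minimal sketch of the mean teacher mechanism this builds on: the teacher's weights track an exponential moving average (EMA) of the student's, and the teacher's predictions on unlabeled target images act as pseudo labels. The function name, scalar "weights", and momentum value below are illustrative, not the paper's implementation.

```python
def ema_update(teacher_weights, student_weights, momentum=0.999):
    """Return new teacher weights as an EMA of the student weights."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_weights, student_weights)]

# Toy example with scalar "weights": the teacher drifts slowly
# toward the student, smoothing out noisy updates.
teacher = [0.0, 0.0]
student = [1.0, 2.0]
teacher = ema_update(teacher, student, momentum=0.9)
print(teacher)  # approximately [0.1, 0.2]
```

With a momentum close to 1, the teacher changes slowly, which is what makes its pseudo labels more stable than the student's own predictions.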
In this paper, we propose a novel self-supervised learning framework that combines contrastive learning with neural processes.
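A hedged sketch of the standard contrastive (InfoNCE) objective such frameworks typically build on; the neural-process component is not shown, and the function name and temperature value are illustrative assumptions, not the paper's code.

```python
import math

def info_nce(sim_pos, sim_negs, temperature=0.1):
    """InfoNCE loss for one anchor: negative log-softmax of the
    positive-pair similarity against the negative similarities."""
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    # Numerically stable log-sum-exp.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_sum)

# A well-aligned positive pair yields a lower loss than a poorly aligned one:
low = info_nce(0.9, [0.1, -0.2])
high = info_nce(0.1, [0.9, 0.8])
print(low < high)  # True
```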
Our approach yields a computationally and memory-efficient model: CFLOW-AD is 10x faster and smaller than the prior state of the art under the same input setting.
To overcome these limitations, we reformulate AutoAugment as a generalized automated dataset optimization (AutoDO) task that minimizes the distribution shift between the test data and the distorted training dataset.
Current home appliances can execute only a limited number of voice commands, such as turning devices on or off, adjusting the music volume, or changing the lighting.
We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting.
We show, both qualitatively and quantitatively, that the proposed explanation method can identify image features that cause failures in DNN-based object detection.