Pre-training vision-language models with contrastive objectives has shown promising results: the approach is both scalable to large uncurated datasets and transferable to many downstream applications.
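A contrastive vision-language objective scores matched image-text pairs higher than mismatched ones within a batch. A minimal sketch of a symmetric (image-to-text and text-to-image) contrastive loss, assuming pre-computed embeddings; the function name and temperature value are illustrative, not from the source:

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings."""
    # L2-normalize so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N); matched pairs on the diagonal
    n = logits.shape[0]

    def xent_diag(l):
        # Cross-entropy with the diagonal as the target class per row.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # Average the image->text and text->image directions.
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

With perfectly aligned, mutually orthogonal embeddings the loss approaches zero; shuffling the text side raises it.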
We show that this allows us to design a linear model in which the quadratic parameter regularization method emerges as the optimal continual learning policy, while at the same time enjoying the high performance of neural networks.
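Quadratic parameter regularization penalizes squared drift of the current parameters from those learned on earlier tasks, weighted by a per-parameter importance (as in EWC-style methods). A minimal sketch under that assumption; the function and argument names are illustrative:

```python
import numpy as np

def quadratic_penalty_loss(task_loss, params, anchors, importance, lam=1.0):
    """Current-task loss plus a quadratic penalty on parameter drift.

    params, anchors, importance: matching lists of arrays, where
    anchors hold the parameters learned on previous tasks and
    importance holds per-parameter weights.
    """
    penalty = sum(np.sum(w * (p - a) ** 2)
                  for p, a, w in zip(params, anchors, importance))
    return task_loss + 0.5 * lam * penalty
```

When the parameters have not moved from their anchors, the penalty vanishes and the objective reduces to the task loss alone.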
We present a novel approach for oriented object detection, named TricubeNet, which localizes oriented objects using visual cues ($i.e.$, a heatmap) instead of oriented box offset regression.
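Heatmap-based localization typically reads object candidates off the predicted map as thresholded local maxima rather than regressing box offsets. A minimal peak-extraction sketch, assuming a single-channel heatmap; the function name and threshold are illustrative, not from the source:

```python
import numpy as np

def heatmap_peaks(heatmap, threshold=0.5):
    """Return (row, col) positions of local maxima above a threshold."""
    h, w = heatmap.shape
    # Pad with -inf so border cells compare correctly against "outside".
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    # Stack the 8 neighbour shifts of every cell.
    neigh = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)
                      if (dy, dx) != (1, 1)])
    # A cell is a peak if it dominates all neighbours and clears the threshold.
    is_peak = (heatmap >= neigh.max(axis=0)) & (heatmap > threshold)
    ys, xs = np.nonzero(is_peak)
    return list(zip(ys.tolist(), xs.tolist()))
```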
We propose a quadratic penalty method for continual learning of neural networks that contain batch normalization (BN) layers.
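One way such a penalty can interact with batch normalization: the BN affine parameters are trainable and can be regularized like any other weight, while the running statistics are tracked rather than optimized and fall outside the penalty. A hypothetical sketch of that split; the parameter-naming convention and function name are assumptions, not the paper's method:

```python
import numpy as np

def penalized_loss(task_loss, named_params, anchors, lam=1.0):
    """Quadratic penalty over trainable parameters, skipping BN statistics.

    named_params, anchors: dicts mapping parameter names to arrays;
    anchors hold the values learned on previous tasks.
    """
    penalty = 0.0
    for name, p in named_params.items():
        if name.endswith(("running_mean", "running_var")):
            continue  # BN running statistics are not gradient-trained
        penalty += np.sum((p - anchors[name]) ** 2)
    return task_loss + 0.5 * lam * penalty
```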