Despite the outstanding performance of convolutional neural networks (CNNs) for many vision tasks, the required computational cost during inference is problematic when resources are limited.
Recent works show that deep neural networks trained on image classification datasets are biased towards textures.
The classical method of training CNNs is to label images in a supervised manner, as in "this input image belongs to this label" (Positive Learning; PL), which is fast and accurate if the labels are assigned correctly to all images; a minimal sketch of this setup follows below.
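To make the PL setup concrete, here is a minimal sketch of standard positive-learning training with a cross-entropy loss in PyTorch. The model, class count, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of Positive Learning (PL): standard supervised training
# that pushes each image towards its assigned label via cross-entropy.
# The model, data shapes, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(num_classes=10)
criterion = nn.CrossEntropyLoss()  # "input image belongs to this label"
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

def train_step(images, labels):
    """One PL update: images (B,3,H,W) float, labels (B,) long."""
    optimizer.zero_grad()
    logits = model(images)            # class scores for each image
    loss = criterion(logits, labels)  # raise probability of the given label
    loss.backward()
    optimizer.step()
    return loss.item()
```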
In this paper, we show how to combine recent neural network certification tools (which are mainly used in static settings such as image classification) with robust control theory to certify a neural network policy in a control loop.
In some computer vision domains, such as medical or hyperspectral imaging, we care about the classification of tiny objects in large images.
A large amount of image data on the web contains inaccurate annotations, and training on such datasets can make networks over-fit the noisy labels, causing performance degradation.
Nowadays, deep learning techniques show dramatic performance in computer vision, sometimes even outperforming humans.
While there are recent robustness studies for full-image classification, we are the first to present an exhaustive study for semantic segmentation, based on the state-of-the-art model DeepLabv3+.
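As a rough illustration of what such a robustness study measures, the sketch below sweeps a Gaussian-noise corruption over a segmentation model and records the accuracy drop. Note the assumptions: torchvision ships DeepLabv3 (not the paper's DeepLabv3+), used here only as a stand-in; the input normalization expected by the pretrained weights is elided; the (image, target) pair and metric are placeholders, not the paper's protocol.

```python
# Sketch of a segmentation robustness probe: how does pixel accuracy
# degrade as Gaussian-noise severity grows? DeepLabv3 stands in for
# DeepLabv3+, and proper input normalization is elided for brevity.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

@torch.no_grad()
def pixel_accuracy_under_noise(image, target, sigma):
    """image: (3,H,W) float in [0,1]; target: (H,W) long class map."""
    noisy = (image + sigma * torch.randn_like(image)).clamp(0, 1)
    pred = model(noisy.unsqueeze(0))["out"].argmax(dim=1).squeeze(0)
    return (pred == target).float().mean().item()

# Sweep severities on one (image, target) pair to chart the degradation:
# for sigma in [0.0, 0.02, 0.05, 0.1]:
#     print(sigma, pixel_accuracy_under_noise(image, target, sigma))
```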
The main result of this article provides space-time error estimates for DNN approximations of Euler approximations of certain perturbed differential equations.
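For context, the generic explicit Euler scheme referenced here has the following form; the concrete perturbed equation and error norm studied in the paper may differ from this illustrative ODE setting.

```latex
% Generic explicit Euler scheme for dX_t/dt = \mu(X_t) on [0,T],
% with N steps of size h = T/N; an illustration, not the paper's setting.
\[
  Y_0 = \xi, \qquad
  Y_{n+1} = Y_n + h\,\mu(Y_n), \qquad n = 0, 1, \dots, N-1,
\]
% so that Y_n approximates X_{nh}, and an error estimate bounds
% \max_{0 \le n \le N} \lVert X_{nh} - Y_n \rVert in terms of h.
```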