Beyond achieving high performance across many vision tasks, multimodal models are expected to be robust to single-source faults, thanks to the redundant information available across modalities.
This paper proposes a certifiable defense against adversarial patch attacks on image classification.
Most pre-trained classifiers, though they may perform extremely well on the domain they were trained on, are not trained robustly and are therefore sensitive to adversarial attacks.
Our method improves upon the current state of the art in defending against patch attacks on CIFAR10 and ImageNet, both in terms of certified accuracy and inference time.
We show that our knowledge graph approach can reduce the sign search space by 98.9%.
Its major difference from the traditional image style transfer problem is that the style information is provided by music rather than images.
We study the problem of incrementally training machine learning models via active learning, using batches of samples annotated by imperfect or noisy oracles.
Autonomous driving requires various computer vision algorithms, such as object detection and tracking. Precisely labeled datasets (i.e., ones in which objects are fully contained in bounding boxes with only a few extra pixels) are preferred for training such algorithms, so that they can detect the exact locations of objects.