Search Results for author: Michael Spratling

Found 13 papers, 8 papers with code

One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models

1 code implementation · 4 Mar 2024 · Lin Li, Haoyan Guan, Jianing Qiu, Michael Spratling

This work studies the adversarial robustness of VLMs from the novel perspective of the text prompt instead of the extensively studied model weights (frozen in this work).

Adversarial Attack · Adversarial Robustness

The Importance of Anti-Aliasing in Tiny Object Detection

1 code implementation · 22 Oct 2023 · Jinlai Ning, Michael Spratling

Additionally, we propose a bottom-heavy version of the backbone, which further improves tiny object detection performance while reducing the required number of parameters by almost half.

Object · object-detection +1
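The anti-aliasing idea behind the paper above can be illustrated with a toy 1-D example: low-pass filter a signal before subsampling so that high frequencies are not aliased into the downsampled output. The function name and the `[1, 2, 1]/4` binomial filter below are illustrative assumptions, not the paper's actual backbone code.

```python
import numpy as np

def downsample(x, antialias=True):
    """Stride-2 subsampling, optionally preceded by a low-pass filter.

    This is a generic "blur-then-stride" sketch of anti-aliased
    downsampling, not the authors' implementation.
    """
    if antialias:
        kernel = np.array([1.0, 2.0, 1.0]) / 4.0  # simple binomial low-pass filter
        x = np.convolve(x, kernel, mode="same")
    return x[::2]  # stride-2 subsampling

# A signal at the Nyquist frequency: alternating +1, -1.
t = np.arange(32)
x = np.cos(np.pi * t)

# Plain striding keeps only the +1 samples, so the oscillation aliases
# into a constant signal; the blurred version is near zero (the true
# content at that frequency is removed rather than misrepresented).
print(downsample(x, antialias=False)[:4])  # → [1. 1. 1. 1.]
print(downsample(x, antialias=True)[1:5])
```

Aliasing like this is especially damaging for tiny objects, whose features occupy only a few pixels and can vanish or shift under naive striding.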

AROID: Improving Adversarial Robustness through Online Instance-wise Data Augmentation

no code implementations · 12 Jun 2023 · Lin Li, Jianing Qiu, Michael Spratling

This allows our method to efficiently explore a large search space for a more effective DA policy and evolve the policy as training progresses.

Adversarial Robustness · Data Augmentation

Improved Adversarial Training Through Adaptive Instance-wise Loss Smoothing

1 code implementation · 24 Mar 2023 · Lin Li, Michael Spratling

We find that during training, an overall reduction of adversarial loss is achieved by sacrificing a considerable proportion of training samples, leaving them more vulnerable to adversarial attack and resulting in an uneven distribution of adversarial vulnerability across the data.

Adversarial Attack · Adversarial Robustness +1

Rethinking the backbone architecture for tiny object detection

no code implementations · 20 Mar 2023 · Jinlai Ning, Haoyan Guan, Michael Spratling

Tiny object detection has become an active area of research because images with tiny targets are common in several important real-world scenarios.

Object · object-detection +1

Data Augmentation Alone Can Improve Adversarial Training

1 code implementation · 24 Jan 2023 · Lin Li, Michael Spratling

Data augmentation, which is effective at preventing overfitting in standard training, has been observed by many previous works to be ineffective in mitigating overfitting in adversarial training.

Data Augmentation

Understanding and Combating Robust Overfitting via Input Loss Landscape Analysis and Regularization

1 code implementation · 9 Dec 2022 · Lin Li, Michael Spratling

Adversarial training is widely used to improve the robustness of deep neural networks to adversarial attack.

Adversarial Attack
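The adversarial training mentioned in the snippet above follows a standard min-max recipe: craft a perturbation that increases the loss, then update the model on the perturbed input. A minimal sketch of one such step, using an FGSM-style attack on a toy logistic model (the model, names, and step sizes are illustrative assumptions, not this paper's setup):

```python
import numpy as np

def loss(w, x, y):
    # logistic loss for a label y in {-1, +1}
    return np.log1p(np.exp(-y * (w @ x)))

def grad_x(w, x, y):
    # gradient of the loss with respect to the input x
    return -y * w / (1.0 + np.exp(y * (w @ x)))

def grad_w(w, x, y):
    # gradient of the loss with respect to the weights w
    return -y * x / (1.0 + np.exp(y * (w @ x)))

w = np.array([0.8, -0.3])   # toy model weights
x = np.array([1.0, -0.5])   # a clean training input
y = 1.0                     # its label
eps, lr = 0.1, 0.05         # attack budget and learning rate

# 1. Attack: perturb the input in the direction that increases the loss
#    (one signed-gradient step, i.e. FGSM; PGD would iterate this).
x_adv = x + eps * np.sign(grad_x(w, x, y))
assert loss(w, x_adv, y) > loss(w, x, y)  # the adversarial example is harder

# 2. Defend: take the gradient step on the perturbed input, not the clean one.
w_new = w - lr * grad_w(w, x_adv, y)
```

Repeating these two steps over the training set is what makes the network robust, and also what makes robust overfitting possible: the model can fit the training-set perturbations without generalising to test-time attacks.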

On the biological plausibility of orthogonal initialisation for solving gradient instability in deep neural networks

1 code implementation · 27 Oct 2022 · Nikolay Manchev, Michael Spratling

Initialising the synaptic weights of artificial neural networks (ANNs) with orthogonal matrices is known to alleviate vanishing and exploding gradient problems.

Attribute
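The orthogonal initialisation studied above can be sketched in a few lines: take the orthogonal factor of the QR decomposition of a Gaussian matrix. This is the standard generic construction (the function name is illustrative), not necessarily the authors' exact code.

```python
import numpy as np

def orthogonal_init(rows, cols, seed=0):
    """Return a weight matrix with orthonormal columns via QR decomposition.

    A generic sketch of orthogonal initialisation; frameworks offer
    built-ins for this (e.g. torch.nn.init.orthogonal_).
    """
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((rows, cols))
    q, r = np.linalg.qr(a)
    # Fix the sign ambiguity of QR so Q is drawn uniformly from the
    # orthogonal group rather than biased by the factorisation.
    q *= np.sign(np.diag(r))
    return q

W = orthogonal_init(64, 64)
print(np.allclose(W.T @ W, np.eye(64)))  # → True
```

Because `W.T @ W` is the identity, multiplication by `W` preserves vector norms, so signals and gradients passed through many such layers neither vanish nor explode.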

CobNet: Cross Attention on Object and Background for Few-Shot Segmentation

no code implementations · 21 Oct 2022 · Haoyan Guan, Michael Spratling

To overcome this issue, we propose CobNet which utilises information about the background that is extracted from the query images without annotations of those images.

Segmentation

Query Semantic Reconstruction for Background in Few-Shot Segmentation

no code implementations · 21 Oct 2022 · Haoyan Guan, Michael Spratling

To successfully associate prototypes with class labels, and to extract a background prototype capable of predicting a mask for the background regions of the image, the machinery for extracting and using foreground prototypes is induced to become more discriminative between different classes.

Registration based Few-Shot Anomaly Detection

1 code implementation · 15 Jul 2022 · Chaoqin Huang, Haoyan Guan, Aofan Jiang, Ya Zhang, Michael Spratling, Yan-Feng Wang

Inspired by how humans detect anomalies, i.e., by comparing an image in question to normal images, we leverage registration, an image alignment task that is inherently generalizable across categories, as a proxy task to train a category-agnostic anomaly detection model.

Anomaly Detection
