In this work, we take a first step toward exploiting the pre-trained (not yet fine-tuned) weights to mitigate backdoors in fine-tuned language models.
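A minimal sketch of one way such a defense could operate, assuming PyTorch state dicts and a simple linear interpolation between the pre-trained and fine-tuned weights; the `interpolate_weights` helper and the `alpha` coefficient are illustrative assumptions, not the paper's stated method:

```python
import torch

def interpolate_weights(pretrained_state, finetuned_state, alpha=0.5):
    """Blend fine-tuned weights back toward the pre-trained ones.

    alpha = 0.0 keeps the fine-tuned model unchanged; alpha = 1.0
    reverts fully to the pre-trained weights. Intermediate values
    trade task accuracy against removal of behavior injected during
    fine-tuning, such as backdoors.
    """
    mixed = {}
    for name, ft_param in finetuned_state.items():
        pt_param = pretrained_state[name]
        mixed[name] = (1.0 - alpha) * ft_param + alpha * pt_param
    return mixed

# Hypothetical usage: load both checkpoints, then re-load the blend.
# pretrained = torch.load("pretrained.pt")  # un-fine-tuned weights
# finetuned = torch.load("finetuned.pt")    # possibly backdoored
# model.load_state_dict(interpolate_weights(pretrained, finetuned, alpha=0.3))
```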
The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model.
Ranked #2 on Domain Generalization on VizWiz-Classification (using extra training data)
ResNet and its variants have achieved remarkable success in various computer vision tasks.
Ranked #3 on Medical Image Classification on NCT-CRC-HE-100K
In this paper, we also turn the feed-forward layer in the DNN model into a mixture of additive and multiplicative feature interactions by proposing MaskBlock (sketched below).
Ranked #2 on Click-Through Rate Prediction on Criteo
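A minimal sketch of a MaskBlock-style layer in PyTorch, assuming an instance-guided mask produced by a small bottleneck MLP over the feature embedding; the class name, layer sizes, and `reduction` parameter are illustrative assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class MaskBlock(nn.Module):
    """Sketch of a MaskBlock-style layer: an instance-guided mask
    (multiplicative path) modulates a normalized hidden state before
    a feed-forward transform (additive path), so the block mixes
    both kinds of feature interaction."""

    def __init__(self, emb_dim: int, hidden_dim: int, reduction: int = 2):
        super().__init__()
        # Mask generator: a small bottleneck MLP over the raw embedding.
        self.mask_mlp = nn.Sequential(
            nn.Linear(emb_dim, emb_dim // reduction),
            nn.ReLU(),
            nn.Linear(emb_dim // reduction, emb_dim),
        )
        self.ln_in = nn.LayerNorm(emb_dim)
        self.ffn = nn.Linear(emb_dim, hidden_dim)
        self.ln_out = nn.LayerNorm(hidden_dim)

    def forward(self, emb: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        # Multiplicative interaction: element-wise product with the mask.
        masked = self.ln_in(hidden) * self.mask_mlp(emb)
        # Additive interaction: the usual feed-forward transform.
        return torch.relu(self.ln_out(self.ffn(masked)))

# Hypothetical usage on a batch of flattened feature embeddings:
# emb = torch.randn(32, 128)
# block = MaskBlock(emb_dim=128, hidden_dim=64)
# out = block(emb, emb)  # a first block can feed the embedding to both paths
```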
We worked to increase the classification accuracy of our baseline model, trained on the augmented benchmark database, and to mitigate its algorithmic biases.