Search Results for author: Shell Xu Hu

Found 12 papers, 5 papers with code

EditVal: Benchmarking Diffusion Based Text-Guided Image Editing Methods

no code implementations · 3 Oct 2023 · Samyadeep Basu, Mehrdad Saberi, Shweta Bhardwaj, Atoosa Malemir Chegini, Daniela Massiceti, Maziar Sanjabi, Shell Xu Hu, Soheil Feizi

From both the human study and the automated evaluation, we find that: (i) Instruct-Pix2Pix, Null-Text and SINE are the top-performing methods averaged across different edit types; however, only Instruct-Pix2Pix and Null-Text are able to preserve original image properties; (ii) most of the editing methods fail at edits involving spatial operations (e.g., changing the position of an object).

Benchmarking · Text-Guided Image Editing

Augmenting CLIP with Improved Visio-Linguistic Reasoning

no code implementations · 18 Jul 2023 · Samyadeep Basu, Maziar Sanjabi, Daniela Massiceti, Shell Xu Hu, Soheil Feizi

On the challenging Winoground compositional reasoning benchmark, our method improves the absolute visio-linguistic performance of different CLIP models by up to 7%, while on the ARO dataset it improves visio-linguistic performance by up to 3%.

Retrieval · Text Retrieval · +2

Strong Baselines for Parameter Efficient Few-Shot Fine-tuning

no code implementations · 4 Apr 2023 · Samyadeep Basu, Daniela Massiceti, Shell Xu Hu, Soheil Feizi

Through our controlled empirical study, we have two main findings: (i) fine-tuning just the LayerNorm parameters (which we call LN-Tune) during few-shot adaptation is an extremely strong baseline across ViTs pre-trained with both self-supervised and supervised objectives; (ii) for self-supervised ViTs, simply learning a set of scaling parameters for each attention matrix (which we call AttnScale), together with a domain-residual adapter (DRA) module, leads to state-of-the-art performance on Meta-Dataset (MD) while being roughly 9× more parameter-efficient.

Few-Shot Image Classification
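The LN-Tune recipe is simple to reproduce. Below is a minimal PyTorch sketch (not the authors' released code) of freezing a pre-trained ViT so that only its LayerNorm parameters and the new classification head remain trainable during few-shot adaptation; the timm model name, learning rate and step count are illustrative assumptions.

```python
# Minimal LN-Tune-style sketch (assumes PyTorch + timm; model name, lr and loop
# details are illustrative, not the paper's exact settings).
import torch
import torch.nn as nn
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=5)

# Freeze everything, then re-enable gradients only for LayerNorm parameters
# (and the freshly initialised classification head, which must be trained anyway).
for p in model.parameters():
    p.requires_grad = False
for m in model.modules():
    if isinstance(m, nn.LayerNorm):
        for p in m.parameters():
            p.requires_grad = True
for p in model.get_classifier().parameters():
    p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def adapt_on_support(images, labels, steps=50):
    """Few-shot adaptation on a support set: only LayerNorm + head are updated."""
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```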

Federated Learning for Inference at Anytime and Anywhere

no code implementations · 8 Dec 2022 · Zicheng Liu, Da Li, Javier Fernandez-Marques, Stefanos Laskaridis, Yan Gao, Łukasz Dudziak, Stan Z. Li, Shell Xu Hu, Timothy Hospedales

Federated learning has predominantly been concerned with the collaborative training of deep networks from scratch, and especially with the many challenges that arise in doing so, such as communication cost, robustness to heterogeneous data, and support for diverse device capabilities.

Federated Learning

Feed-Forward Latent Domain Adaptation

no code implementations · 15 Jul 2022 · Ondrej Bohdal, Da Li, Shell Xu Hu, Timothy Hospedales

Recognizing that a device's data are likely to come from multiple latent domains comprising a mixture of unlabelled domain-relevant and domain-irrelevant examples, we focus on the comparatively under-studied problem of latent domain adaptation.

Source-Free Domain Adaptation

Compressing Features for Learning with Noisy Labels

1 code implementation · 27 Jun 2022 · Yingyi Chen, Shell Xu Hu, Xi Shen, Chunrong Ai, Johan A. K. Suykens

This decomposition provides three insights: (i) it shows that over-fitting is indeed an issue for learning with noisy labels; (ii) through an information bottleneck formulation, it explains why the proposed feature compression helps in combating label noise; (iii) it explains the performance boost brought by incorporating compression regularization into Co-teaching.

Ranked #10 on Image Classification on Clothing1M (using extra training data)

Feature Compression · Feature Importance · +2

Fisher SAM: Information Geometry and Sharpness Aware Minimisation

no code implementations · 10 Jun 2022 · Minyoung Kim, Da Li, Shell Xu Hu, Timothy M. Hospedales

The recent sharpness-aware minimisation (SAM) method is known to find flat minima, which is beneficial for better generalisation and improved robustness.
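For context, here is a bare-bones PyTorch sketch of the vanilla SAM step that Fisher SAM generalises; Fisher SAM replaces the Euclidean perturbation ball with one induced by the Fisher information metric, which is not shown here, and the radius rho plus the training-loop details are illustrative assumptions.

```python
# Sketch of a vanilla SAM update: perturb weights toward the (approximate)
# worst-case point in a rho-ball, then descend from there. Illustrative only.
import torch

def sam_step(model, loss_fn, batch, base_optimizer, rho=0.05):
    x, y = batch

    # 1) Ascent step: move each parameter along its gradient, normalised globally.
    loss_fn(model(x), y).backward()
    grad_norm = torch.norm(
        torch.stack([p.grad.norm() for p in model.parameters() if p.grad is not None])
    )
    eps = {}
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps[p] = e
    model.zero_grad()

    # 2) Descent step: gradient at the perturbed point, applied to restored weights.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in eps.items():
            p.sub_(e)  # restore the original weights before the optimizer update
    base_optimizer.step()
    base_optimizer.zero_grad()
```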

Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference

1 code implementation · CVPR 2022 · Shell Xu Hu, Da Li, Jan Stühmer, Minyoung Kim, Timothy M. Hospedales

To this end, we explore few-shot learning from the perspective of neural network architecture, as well as a three-stage pipeline of network updates under different data supplies: unsupervised external data is used for pre-training, base categories are used to simulate few-shot tasks for meta-training, and the scarce labelled data of a novel task is used for fine-tuning.

Few-Shot Image Classification · Few-Shot Learning · +1
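To make the meta-training stage concrete, the sketch below shows one simulated few-shot episode with a prototypical-network-style classifier on top of the pre-trained backbone, which is one common choice for this kind of pipeline; the cosine-similarity temperature and episode shapes are illustrative assumptions rather than the paper's exact settings.

```python
# Illustrative sketch of one simulated few-shot episode during meta-training:
# embed support/query images, build class prototypes, classify queries by
# nearest prototype. Backbone and shapes are placeholder assumptions.
import torch
import torch.nn.functional as F

def episode_loss(backbone, support_x, support_y, query_x, query_y, n_way):
    """Prototype-based episode loss (one common meta-training objective)."""
    z_s = F.normalize(backbone(support_x), dim=-1)   # [n_support, d]
    z_q = F.normalize(backbone(query_x), dim=-1)     # [n_query, d]

    # Class prototypes: mean embedding of each class's support examples.
    prototypes = torch.stack(
        [z_s[support_y == c].mean(dim=0) for c in range(n_way)]
    )                                                 # [n_way, d]

    # Logits = scaled cosine similarity between queries and prototypes.
    logits = 10.0 * z_q @ prototypes.t()              # temperature is illustrative
    return F.cross_entropy(logits, query_y)
```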

Boosting Co-teaching with Compression Regularization for Label Noise

1 code implementation · 28 Apr 2021 · Yingyi Chen, Xi Shen, Shell Xu Hu, Johan A. K. Suykens

On Clothing1M, our approach obtains 74.9% accuracy, which is slightly better than that of DivideMix.

Ranked #12 on Image Classification on Clothing1M (using extra training data)

Data Compression · Learning with Noisy Labels · +1

Empirical Bayes Transductive Meta-Learning with Synthetic Gradients

2 code implementations · ICLR 2020 · Shell Xu Hu, Pablo G. Moreno, Yang Xiao, Xi Shen, Guillaume Obozinski, Neil D. Lawrence, Andreas Damianou

The evidence lower bound of the marginal log-likelihood of empirical Bayes decomposes as a sum of local KL divergences between the variational posterior and the true posterior on the query set of each task.

Few-Shot Image Classification · Meta-Learning · +3
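For readers who want that statement in symbols, one standard way to write the decomposition is sketched below; the notation (ψ for the empirical-Bayes prior/hyper-parameters, w_t for the task-specific latent, D_t^q for the query set of task t, q_t for the variational posterior) is illustrative rather than the paper's exact symbols.

```latex
% Sketch of the decomposition; notation is illustrative, not the paper's own.
\mathrm{ELBO}(q, \psi)
  = \sum_t \Big( \mathbb{E}_{q_t(w_t)}\!\big[\log p(\mathcal{D}_t^{q} \mid w_t)\big]
        - \mathrm{KL}\big(q_t(w_t)\,\|\,p(w_t \mid \psi)\big) \Big)
  = \sum_t \Big( \log p(\mathcal{D}_t^{q} \mid \psi)
        - \mathrm{KL}\big(q_t(w_t)\,\|\,p(w_t \mid \mathcal{D}_t^{q}, \psi)\big) \Big)
```

Since the per-task evidence terms log p(D_t^q | ψ) do not depend on q, maximising this bound over the variational posteriors is equivalent to minimising the sum of local KL divergences to each task's true query-set posterior.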

Variational Information Distillation for Knowledge Transfer

2 code implementations · CVPR 2019 · Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D. Lawrence, Zhenwen Dai

We further demonstrate the strength of our method on knowledge transfer across heterogeneous network architectures by transferring knowledge from a convolutional neural network (CNN) to a multi-layer perceptron (MLP) on CIFAR-10.

Knowledge Distillation · Transfer Learning
