Search Results for author: Fengwei Zhou

Found 20 papers, 6 papers with code

Open-Vocabulary Object Detection with Meta Prompt Representation and Instance Contrastive Optimization

no code implementations • 14 Mar 2024 • Zhao Wang, Aoxue Li, Fengwei Zhou, Zhenguo Li, Qi Dou

Without using knowledge distillation, model ensembling, or extra training data during detector training, our proposed MIC outperforms previous SOTA methods on LVIS, even though they rely on these complex techniques.

Contrastive Learning · Knowledge Distillation · +2

Explore and Exploit the Diverse Knowledge in Model Zoo for Domain Generalization

no code implementations • 5 Jun 2023 • Yimeng Chen, Tianyang Hu, Fengwei Zhou, Zhenguo Li, ZhiMing Ma

The proliferation of pretrained models, as a result of advancements in pretraining techniques, has led to the emergence of a vast zoo of publicly available models.

Domain Generalization · Out-of-Distribution Generalization

Heavy-Tailed Regularization of Weight Matrices in Deep Neural Networks

no code implementations • 6 Apr 2023 • Xuanzhe Xiao, Zeng Li, Chuanlong Xie, Fengwei Zhou

To capitalize on this discovery, we introduce Heavy-Tailed Regularization, a novel technique that explicitly promotes a more heavy-tailed spectrum in the weight matrices.
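
The excerpt does not spell out the regularizer's form. As one plausible reading, a differentiable penalty can reward a heavier-tailed singular-value spectrum by pushing down an estimated power-law exponent (a smaller exponent means a heavier tail). A minimal sketch, assuming a Hill-style tail-index estimate with an arbitrarily chosen cutoff `k`:

```python
import torch

def hill_tail_index(sigma: torch.Tensor, k: int = 20) -> torch.Tensor:
    """Hill estimator of the tail exponent of a singular-value spectrum.

    Smaller estimates mean heavier tails; k (the number of tail samples)
    is an illustrative choice, not taken from the paper.
    """
    s, _ = torch.sort(sigma, descending=True)
    k = min(k, s.numel() - 1)
    tail = s[:k]
    # Differentiable w.r.t. the weights through torch.linalg.svdvals below.
    return 1.0 / torch.log(tail / s[k].clamp_min(1e-12)).mean()

def heavy_tail_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Penalty that decreases as the weight matrix's spectrum gets heavier-tailed."""
    return hill_tail_index(torch.linalg.svdvals(weight))

# Hypothetical usage inside a training step:
# loss = task_loss + lam * sum(heavy_tail_penalty(p) for p in model.parameters() if p.ndim == 2)
```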

Fair-CDA: Continuous and Directional Augmentation for Group Fairness

no code implementations • 1 Apr 2023 • Rui Sun, Fengwei Zhou, Zhenhua Dong, Chuanlong Xie, Lanqing Hong, Jiawei Li, Rui Zhang, Zhen Li, Zhenguo Li

By adjusting the perturbation strength along these paths, our proposed augmentation is controllable and auditable.

Data Augmentation · Disentanglement · +1
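
The excerpt describes perturbing samples along paths between groups with a controllable strength. A minimal sketch of that idea, assuming binary sensitive groups and taking the straight line between group centroids in feature space as the "path"; the paper's actual construction is not quoted here:

```python
import torch

def directional_augment(feat: torch.Tensor, group: torch.Tensor, lam: float) -> torch.Tensor:
    """Perturb each feature along the path toward the other group's centroid.

    lam in [0, 1] is the continuous, auditable perturbation strength.
    feat: (N, D) features; group: (N,) binary group labels (0 / 1).
    """
    mean0 = feat[group == 0].mean(dim=0)
    mean1 = feat[group == 1].mean(dim=0)
    # Direction points from each sample's own group toward the other group.
    direction = torch.where(group.unsqueeze(1) == 0, mean1 - mean0, mean0 - mean1)
    return feat + lam * direction
```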

ZooD: Exploiting Model Zoo for Out-of-Distribution Generalization

no code implementations • 17 Oct 2022 • Qishi Dong, Awais Muhammad, Fengwei Zhou, Chuanlong Xie, Tianyang Hu, Yongxin Yang, Sung-Ho Bae, Zhenguo Li

We evaluate our paradigm on a diverse model zoo consisting of 35 models for various OoD tasks and demonstrate: (i) model ranking is better correlated with fine-tuning ranking than previous methods and up to 9859x faster than brute-force fine-tuning; (ii) OoD generalization after model ensemble with feature selection outperforms the state-of-the-art methods, and the accuracy on the most challenging task, DomainNet, is improved from 46.5% to 50.6%.

feature selection · Out-of-Distribution Generalization
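
The ranking step amounts to scoring each zoo model's frozen features on the target data instead of fine-tuning it. A minimal sketch using a generic between/within class-scatter ratio as the proxy score; this is a stand-in for illustration, not ZooD's actual metric:

```python
import torch

def separability_score(features: torch.Tensor, labels: torch.Tensor) -> float:
    """Score a frozen model's target-task features by class separability.

    Higher should loosely track fine-tuning accuracy, at a tiny fraction
    of fine-tuning's cost. features: (N, D); labels: (N,) class indices.
    """
    overall_mean = features.mean(dim=0)
    between = torch.zeros(())
    within = torch.zeros(())
    for c in labels.unique():
        fc = features[labels == c]
        mc = fc.mean(dim=0)
        between = between + fc.shape[0] * (mc - overall_mean).pow(2).sum()
        within = within + (fc - mc).pow(2).sum()
    return (between / (within + 1e-12)).item()

# Hypothetical usage: extract features once per zoo model, then rank.
# scores = {name: separability_score(extract(m, loader), y) for name, m in zoo.items()}
# ranking = sorted(scores, key=scores.get, reverse=True)
```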

Your Contrastive Learning Is Secretly Doing Stochastic Neighbor Embedding

2 code implementations • 30 May 2022 • Tianyang Hu, Zhili Liu, Fengwei Zhou, Wenjia Wang, Weiran Huang

Contrastive learning, especially self-supervised contrastive learning (SSCL), has achieved great success in extracting powerful features from unlabeled data.

Contrastive Learning · Data Augmentation · +2
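
The correspondence the title names can be seen directly in the InfoNCE objective: for each anchor, the softmax over similarities defines a neighbor distribution over the batch, which is structurally the same distribution stochastic neighbor embedding matches. A minimal sketch; the temperature value is a common default, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE loss over a batch of positive pairs (z1[i], z2[i]).

    The softmax inside the cross-entropy gives each anchor a probability of
    picking each candidate as its "neighbor" -- the SNE-style distribution
    the paper's analysis makes precise.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                               # cosine similarities
    targets = torch.arange(z1.shape[0], device=z1.device)  # positive = matching index
    return F.cross_entropy(logits, targets)
```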

MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps

no code implementations • NeurIPS 2021 • Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li

First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation.

Transfer Learning
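
A minimal sketch of the distillation idea named in the title, assuming an "activated channel map" means spatially averaged post-ReLU activations and that `teacher`/`student` expose intermediate feature maps; both are assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def activated_channel_map(act: torch.Tensor) -> torch.Tensor:
    """Collapse (N, C, H, W) activations to per-channel responses.

    Spatial averaging after ReLU is one natural reading of an "activated
    channel map"; the paper's exact definition may differ.
    """
    return F.relu(act).mean(dim=(2, 3))

def mixacm_loss(x1, x2, teacher, student, alpha: float = 1.0):
    """Distillation loss on one mixup sample (a sketch, not the full method)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x_mix = lam * x1 + (1 - lam) * x2   # standard mixup of two inputs
    with torch.no_grad():
        t_act = teacher(x_mix)          # adversarially trained teacher's feature map
    s_act = student(x_mix)              # student's feature map
    # Match the student's channel maps to the robust teacher's.
    return F.l1_loss(activated_channel_map(s_act), activated_channel_map(t_act))
```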

NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization

1 code implementation • ICCV 2021 • Haoyue Bai, Fengwei Zhou, Lanqing Hong, Nanyang Ye, S.-H. Gary Chan, Zhenguo Li

In this work, we propose robust Neural Architecture Search for OoD generalization (NAS-OoD), which optimizes the architecture with respect to its performance on generated OoD data by gradient descent.

Domain Generalization · Neural Architecture Search · +1
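
Optimizing an architecture "by gradient descent" presupposes a continuous relaxation of the search space. A minimal DARTS-style sketch showing how architecture parameters can receive gradients from a loss on generated OoD data; NAS-OoD's actual search space and data generator are not quoted here:

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """Continuous relaxation: a softmax-weighted sum of candidate operations."""

    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture parameters

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

# Hypothetical architecture step: a generator synthesizes OoD inputs, and the
# task loss on them is backpropagated into alpha by ordinary gradient descent.
# x_ood = generator(x, domain_shift)
# arch_optimizer.zero_grad(); criterion(model(x_ood), y).backward(); arch_optimizer.step()
```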

Adversarial Robustness for Unsupervised Domain Adaptation

no code implementations • ICCV 2021 • Muhammad Awais, Fengwei Zhou, Hang Xu, Lanqing Hong, Ping Luo, Sung-Ho Bae, Zhenguo Li

Extensive Unsupervised Domain Adaptation (UDA) studies have shown great success in practice by learning transferable representations across a labeled source domain and an unlabeled target domain with deep models.

Adversarial Robustness · Unsupervised Domain Adaptation

Relaxed Conditional Image Transfer for Semi-supervised Domain Adaptation

no code implementations • 5 Jan 2021 • Qijun Luo, Zhili Liu, Lanqing Hong, Chongxuan Li, Kuo Yang, Liyuan Wang, Fengwei Zhou, Guilin Li, Zhenguo Li, Jun Zhu

Semi-supervised domain adaptation (SSDA), which aims to learn models in a partially labeled target domain with the assistance of a fully labeled source domain, has attracted increasing attention in recent years.

Domain Adaptation · Semi-supervised Domain Adaptation

DiffAutoML: Differentiable Joint Optimization for Efficient End-to-End Automated Machine Learning

no code implementations • 1 Jan 2021 • Kaichen Zhou, Lanqing Hong, Fengwei Zhou, Binxin Ru, Zhenguo Li, Niki Trigoni, Jiashi Feng

Our method co-optimizes the neural architecture, training hyper-parameters, and data augmentation policies in an end-to-end fashion, without the need for model retraining.

BIG-bench Machine Learning · Computational Efficiency · +2

Multi-objective Neural Architecture Search via Non-stationary Policy Gradient

no code implementations • 23 Jan 2020 • Zewei Chen, Fengwei Zhou, George Trimponias, Zhenguo Li

Despite recent progress, the problem of approximating the full Pareto front accurately and efficiently remains challenging.

Neural Architecture Search · Reinforcement Learning (RL)
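
For reference, the Pareto front being approximated is the set of architectures not dominated on any objective. A small self-contained helper that makes the definition concrete, assuming all objectives are to be maximized (e.g., accuracy and negative latency):

```python
def pareto_front(points):
    """Return the non-dominated subset of (objective1, objective2, ...) tuples.

    A point is dominated if some other point is >= in every objective and
    differs somewhere. This only defines the target set the paper approximates.
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] >= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# e.g. pareto_front([(0.90, -30), (0.88, -12), (0.85, -35)])
#   -> [(0.90, -30), (0.88, -12)]   # (0.85, -35) is dominated by (0.90, -30)
```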

Formulating Camera-Adaptive Color Constancy as a Few-shot Meta-Learning Problem

no code implementations • 28 Nov 2018 • Steven McDonagh, Sarah Parisot, Fengwei Zhou, Xing Zhang, Ales Leonardis, Zhenguo Li, Gregory Slabaugh

In this work, we propose a new approach that affords fast adaptation to previously unseen cameras, and robustness to changes in capture device by leveraging annotated samples across different cameras and datasets.

Few-Shot Camera-Adaptive Color Constancy · Meta-Learning

Deep Meta-Learning: Learning to Learn in the Concept Space

no code implementations • 10 Feb 2018 • Fengwei Zhou, Bin Wu, Zhenguo Li

Few-shot learning remains challenging for meta-learning, which learns a learning algorithm (the meta-learner) from many related tasks.

Few-Shot Learning

Meta-SGD: Learning to Learn Quickly for Few-Shot Learning

9 code implementations • 31 Jul 2017 • Zhenguo Li, Fengwei Zhou, Fei Chen, Hang Li

In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task faster and more accurately from fewer examples, where the choice of meta-learner is crucial.

Few-Shot Learning · reinforcement-learning · +1
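
Meta-SGD's distinguishing move is to learn the inner-loop learning rate as a per-parameter vector alongside the initialization. A minimal sketch of one meta-training step; `task_loss_fn` and the support/query split are illustrative assumptions, not the paper's code:

```python
import torch

def meta_sgd_step(params, alphas, task_loss_fn, support, query):
    """One Meta-SGD meta-training step on a single task.

    Unlike a fixed inner-loop step size, the per-parameter learning rates
    `alphas` are themselves learned: the inner update is
    theta' = theta - alpha * grad, and the query loss backpropagates into
    both `params` and `alphas`.
    """
    # Inner loop: one adaptation step on the support set.
    loss = task_loss_fn(params, support)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    adapted = [p - a * g for p, a, g in zip(params, alphas, grads)]
    # Outer objective: the query loss of the adapted parameters; a meta-
    # optimizer over params + alphas steps on this, summed across tasks.
    return task_loss_fn(adapted, query)
```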
