Search Results for author: Hong-You Chen

Found 24 papers, 11 papers with code

Contrastive Localized Language-Image Pre-Training

no code implementations • 3 Oct 2024 • Hong-You Chen, Zhengfeng Lai, Haotian Zhang, Xinze Wang, Marcin Eichner, Keen You, Meng Cao, Bowen Zhang, Yinfei Yang, Zhe Gan

Contrastive Language-Image Pre-training (CLIP) has been a celebrated method for training vision encoders to generate image/text representations that facilitate various applications.
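
For context, a minimal sketch of the symmetric contrastive (InfoNCE) objective that CLIP-style pre-training optimizes to align paired image and text embeddings. This illustrates the generic CLIP loss, not the localized variant this paper proposes; all names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    # L2-normalize so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # logits[i, j] = similarity between image i and text j.
    logits = image_emb @ text_emb.t() / temperature
    # Matching pairs sit on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Average the image-to-text and text-to-image cross-entropies.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Example with random 512-d embeddings for a batch of 8 pairs.
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```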

Revisit Large-Scale Image-Caption Data in Pre-training Multimodal Foundation Models

no code implementations • 3 Oct 2024 • Zhengfeng Lai, Vasileios Saveris, Chen Chen, Hong-You Chen, Haotian Zhang, Bowen Zhang, Juan Lao Tebar, Wenze Hu, Zhe Gan, Peter Grasch, Meng Cao, Yinfei Yang

Our findings reveal that a hybrid approach that keeps both synthetic captions and AltTexts can outperform the use of synthetic captions alone, improving both alignment and performance, with each model demonstrating preferences for particular caption formats.

Lessons Learned from a Unifying Empirical Study of Parameter-Efficient Transfer Learning (PETL) in Visual Recognition

1 code implementation • 24 Sep 2024 • Zheda Mai, Ping Zhang, Cheng-Hao Tu, Hong-You Chen, Li Zhang, Wei-Lun Chao

Last but not least, we investigate PETL's ability to preserve a pre-trained model's robustness to distribution shifts (e.g., a CLIP backbone).

Transfer Learning

Fine-Tuning is Fine, if Calibrated

1 code implementation • 24 Sep 2024 • Zheda Mai, Arpita Chowdhury, Ping Zhang, Cheng-Hao Tu, Hong-You Chen, Vardaan Pahuja, Tanya Berger-Wolf, Song Gao, Charles Stewart, Yu Su, Wei-Lun Chao

For example, fine-tuning a pre-trained classifier capable of recognizing a large number of classes to master a subset of classes at hand has been shown to drastically degrade the model's accuracy on the other classes it had previously learned.
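
A minimal sketch of the kind of post-hoc calibration the title alludes to: after fine-tuning on a subset of classes, add a constant bias to the logits of the absent classes so they can compete again at prediction time. The constant gamma and the class split are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def calibrate_logits(logits, fine_tuned_classes, gamma):
    """Add a constant offset to the logits of classes absent from fine-tuning.

    logits: (batch, num_classes) raw outputs of the fine-tuned model.
    fine_tuned_classes: indices of classes seen during fine-tuning.
    gamma: calibration constant, e.g., chosen on a validation set.
    """
    absent = torch.ones(logits.size(1), dtype=torch.bool)
    absent[fine_tuned_classes] = False
    calibrated = logits.clone()
    calibrated[:, absent] += gamma  # lift the suppressed classes
    return calibrated

# Hypothetical 10-way classifier fine-tuned on classes 0-2.
preds = calibrate_logits(torch.randn(4, 10), [0, 1, 2], gamma=2.0).argmax(dim=1)
```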

FedNE: Surrogate-Assisted Federated Neighbor Embedding for Dimensionality Reduction

no code implementations • 17 Sep 2024 • Ziwei Li, Xiaoqi Wang, Hong-You Chen, Han-Wei Shen, Wei-Lun Chao

Federated learning (FL) has rapidly evolved as a promising paradigm that enables collaborative model training across distributed participants without exchanging their local data.

Dimensionality Reduction • Federated Learning +1
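
For context, the collaborative training the snippet describes usually builds on FedAvg-style aggregation; a minimal sketch of that baseline step follows. This is the generic FL building block, not the surrogate-assisted FedNE method itself.

```python
import torch

def fedavg(client_states, client_sizes):
    """Weighted average of client model state_dicts (plain FedAvg).

    client_states: list of state_dicts with identical keys and shapes.
    client_sizes: per-client local sample counts, used as weights.
    """
    total = sum(client_sizes)
    return {
        key: sum(state[key].float() * (n / total)
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }

# Example: two toy "clients" sharing a linear layer.
net = torch.nn.Linear(4, 2)
states = [net.state_dict(), {k: v + 1 for k, v in net.state_dict().items()}]
merged = fedavg(states, client_sizes=[100, 300])
```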

Jigsaw Game: Federated Clustering

no code implementations • 17 Jul 2024 • Jinxuan Xu, Hong-You Chen, Wei-Lun Chao, Yuqian Zhang

Federated learning has recently garnered significant attention, especially within the domain of supervised learning.

Clustering • Federated Learning +1

Ferret-v2: An Improved Baseline for Referring and Grounding with Large Language Models

1 code implementation • 11 Apr 2024 • Haotian Zhang, Haoxuan You, Philipp Dufter, Bowen Zhang, Chen Chen, Hong-You Chen, Tsu-Jui Fu, William Yang Wang, Shih-Fu Chang, Zhe Gan, Yinfei Yang

While Ferret seamlessly integrates regional understanding into the Large Language Model (LLM) to facilitate its referring and grounding capability, it has certain limitations: it is constrained by the fixed, pre-trained visual encoder and fails to perform well on broader tasks.

Language Modeling • Language Modelling +2

Reviving the Context: Camera Trap Species Classification as Link Prediction on Multimodal Knowledge Graphs

1 code implementation • 31 Dec 2023 • Vardaan Pahuja, Weidi Luo, Yu Gu, Cheng-Hao Tu, Hong-You Chen, Tanya Berger-Wolf, Charles Stewart, Song Gao, Wei-Lun Chao, Yu Su

In this work, we exploit the structured context linked to camera trap images to boost out-of-distribution generalization for species classification tasks in camera traps.

 Ranked #1 on Image Classification on iWildCam2020-WILDS (using extra training data)

Image Classification • Knowledge Graphs +2

Learning Fractals by Gradient Descent

1 code implementation • 14 Mar 2023 • Cheng-Hao Tu, Hong-You Chen, David Carlyn, Wei-Lun Chao

Fractals are geometric shapes that can display complex and self-similar patterns found in nature (e.g., clouds and plants).
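
Fractals of this kind are typically generated from an iterated function system (IFS); a minimal chaos-game sketch is below. In the differentiable setting the title describes, the affine parameters would be trainable tensors optimized by gradient descent; the two maps here are illustrative.

```python
import torch

def ifs_sample(transforms, probs, n_points=10000):
    """Chaos game: repeatedly apply a randomly chosen affine map x <- A @ x + b."""
    x = torch.zeros(2)
    points = []
    choices = torch.multinomial(torch.tensor(probs), n_points, replacement=True)
    for idx in choices:
        A, b = transforms[int(idx)]
        x = A @ x + b  # contractive affine maps drive points onto the fractal
        points.append(x)
    return torch.stack(points)

# Two illustrative contractive maps; making A and b nn.Parameters would
# allow gradients to flow through the sampled points.
maps = [(torch.eye(2) * 0.5, torch.tensor([0.0, 0.0])),
        (torch.eye(2) * 0.5, torch.tensor([0.5, 0.5]))]
pts = ifs_sample(maps, probs=[0.5, 0.5])
```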

Making Batch Normalization Great in Federated Deep Learning

no code implementations • 12 Mar 2023 • Jike Zhong, Hong-You Chen, Wei-Lun Chao

We reinvestigate factors that are believed to cause this problem, including the mismatch of BN statistics across clients and the deviation of gradients during local training.

Deep Learning • Federated Learning
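
The first factor, mismatched BN statistics across clients, can be made concrete: one common remedy in the literature is to pool per-client running means and variances into global ones via the law of total variance. A minimal sketch follows; this is an illustrative fix, not necessarily the one the paper settles on.

```python
import torch

def pool_bn_stats(means, variances, counts):
    """Combine per-client BN running statistics into global statistics.

    means, variances: per-channel tensors from each client's BN layer.
    counts: number of samples each client used for its statistics.
    """
    total = sum(counts)
    weights = [n / total for n in counts]
    global_mean = sum(w * m for w, m in zip(weights, means))
    # Law of total variance: E[Var] + Var[E].
    global_var = sum(w * (v + (m - global_mean) ** 2)
                     for w, v, m in zip(weights, variances, means))
    return global_mean, global_var

mu, var = pool_bn_stats([torch.zeros(8), torch.ones(8)],
                        [torch.ones(8), torch.ones(8)], counts=[50, 150])
```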

Train-Once-for-All Personalization

no code implementations • CVPR 2023 • Hong-You Chen, Yandong Li, Yin Cui, Mingda Zhang, Wei-Lun Chao, Li Zhang

We study the problem of how to train a "personalization-friendly" model such that given only the task descriptions, the model can be adapted to different end-users' needs, e.g., for accurately classifying different subsets of objects.

Gradual Domain Adaptation without Indexed Intermediate Domains

1 code implementation • NeurIPS 2021 • Hong-You Chen, Wei-Lun Chao

This coarse domain sequence then undergoes a fine indexing step via a novel cycle-consistency loss, which encourages the next intermediate domain to preserve sufficient discriminative knowledge of the current intermediate domain.

Unsupervised Domain Adaptation

On the Importance and Applicability of Pre-Training for Federated Learning

1 code implementation • 23 Jun 2022 • Hong-You Chen, Cheng-Hao Tu, Ziwei Li, Han-Wei Shen, Wei-Lun Chao

To make our findings applicable to situations where pre-trained models are not directly available, we explore pre-training with synthetic data or even with clients' data in a decentralized manner, and find that they can already improve FL notably.

Federated Learning

On Bridging Generic and Personalized Federated Learning for Image Classification

3 code implementations • ICLR 2022 • Hong-You Chen, Wei-Lun Chao

On the one hand, we introduce a family of losses that are robust to non-identical class distributions, enabling clients to train a generic predictor with a consistent objective across them.

Classification • Image Classification +1
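
A minimal sketch of one well-known member of such a family of losses: a balanced-softmax-style cross-entropy that shifts each logit by the log of the class prior on the client, so heavily skewed local label distributions stop biasing the predictor. This is an illustrative instance, not necessarily the exact loss the paper introduces.

```python
import torch
import torch.nn.functional as F

def balanced_softmax_loss(logits, targets, class_counts):
    """Cross-entropy with logits shifted by log class priors.

    class_counts: per-class sample counts on this client; the shift
    discounts classes the client sees often and protects rare ones.
    """
    log_prior = class_counts.float().clamp(min=1).log()
    return F.cross_entropy(logits + log_prior, targets)

loss = balanced_softmax_loss(torch.randn(4, 10),
                             torch.randint(0, 10, (4,)),
                             torch.randint(1, 100, (10,)))
```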

FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning

2 code implementations • ICLR 2021 • Hong-You Chen, Wei-Lun Chao

Federated learning aims to collaboratively train a strong global model by accessing users' locally trained models but not their own data.

Bayesian Inference • Federated Learning
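
A minimal sketch of the Bayesian-ensemble idea behind this summary: fit a simple per-parameter Gaussian to the client models, sample extra models from it, and average the ensemble's predictions (FedBE then distills this teacher into a single global model, which is omitted here). All names are illustrative.

```python
import torch

def sample_models(client_states, n_samples=5):
    """Fit a per-parameter Gaussian to client weights and draw model samples."""
    keys = client_states[0].keys()
    mean = {k: torch.stack([s[k].float() for s in client_states]).mean(0) for k in keys}
    std = {k: torch.stack([s[k].float() for s in client_states]).std(0) for k in keys}
    return [{k: mean[k] + std[k] * torch.randn_like(std[k]) for k in keys}
            for _ in range(n_samples)]

def ensemble_predict(model, states, x):
    """Average softmax predictions over sampled models (the teacher ensemble)."""
    probs = []
    for state in states:
        model.load_state_dict(state)  # overwrites weights in place
        probs.append(model(x).softmax(dim=-1))
    return torch.stack(probs).mean(0)

net = torch.nn.Linear(4, 2)
clients = [{k: v.clone() for k, v in net.state_dict().items()},
           {k: v * 0.9 for k, v in net.state_dict().items()}]
teacher = ensemble_predict(net, sample_models(clients), torch.randn(3, 4))
```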

Glyph2Vec: Learning Chinese Out-of-Vocabulary Word Embedding from Glyphs

no code implementations • ACL 2020 • Hong-You Chen, Sz-Han Yu, Shou-De Lin

Chinese NLP applications that rely on large amounts of text often involve huge vocabularies whose words appear only sparsely in the corpus.

Identifying and Compensating for Feature Deviation in Imbalanced Deep Learning

1 code implementation • 6 Jan 2020 • Han-Jia Ye, Hong-You Chen, De-Chuan Zhan, Wei-Lun Chao

Classifiers trained with class-imbalanced data are known to perform poorly on test data of the "minor" classes, of which we have insufficient training data.

Deep Learning
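
A minimal sketch of compensating for that deviation at training time by scaling each class's logits with a class-dependent factor tied to its frequency, in the spirit of the paper's compensation idea; the exact form and the exponent gamma are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def class_dependent_temperature_loss(logits, targets, class_counts, gamma=0.3):
    """Cross-entropy with per-class temperatures.

    Rarer classes get a larger divisor during training, which forces the
    network to learn stronger features for them; plain logits are used at test.
    """
    counts = class_counts.float()
    temperature = (counts.max() / counts) ** gamma  # >= 1, larger for rare classes
    return F.cross_entropy(logits / temperature, targets)

loss = class_dependent_temperature_loss(torch.randn(4, 10),
                                        torch.randint(0, 10, (4,)),
                                        torch.randint(1, 100, (10,)))
```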

Multiple Text Style Transfer by using Word-level Conditional Generative Adversarial Network with Two-Phase Training

no code implementations • IJCNLP 2019 • Chih-Te Lai, Yi-Te Hong, Hong-You Chen, Chi-Jen Lu, Shou-De Lin

The objective of non-parallel text style transfer, or controllable text generation, is to alter specific attributes (e.g., sentiment, mood, tense, politeness, etc.) of a given text while preserving its remaining attributes and content.

Attribute • Generative Adversarial Network +2

DEEP-TRIM: REVISITING L1 REGULARIZATION FOR CONNECTION PRUNING OF DEEP NETWORK

no code implementations • ICLR 2019 • Chih-Kuan Yeh, Ian E. H. Yen, Hong-You Chen, Chun-Pei Yang, Shou-De Lin, Pradeep Ravikumar

State-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones.
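
A minimal sketch of the L1-then-threshold recipe the title revisits: train with an L1 penalty that pushes weights toward zero, then prune connections whose magnitude falls below a threshold. The hyperparameters are illustrative.

```python
import torch

def l1_penalty(model, lam=1e-4):
    """L1 regularizer added to the task loss to encourage sparse weights."""
    return lam * sum(p.abs().sum() for p in model.parameters())

def prune_by_magnitude(model, threshold=1e-3):
    """Zero out (prune) connections with small magnitude after training."""
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() > threshold).float())

net = torch.nn.Linear(20, 5)
loss = net(torch.randn(3, 20)).pow(2).mean() + l1_penalty(net)
loss.backward()  # ...optimizer steps would run here during training...
prune_by_magnitude(net)
sparsity = (net.weight == 0).float().mean()
```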
