Search Results for author: Hong-You Chen

Found 16 papers, 6 papers with code

Ferret-v2: An Improved Baseline for Referring and Grounding with Large Language Models

no code implementations 11 Apr 2024 Haotian Zhang, Haoxuan You, Philipp Dufter, BoWen Zhang, Chen Chen, Hong-You Chen, Tsu-Jui Fu, William Yang Wang, Shih-Fu Chang, Zhe Gan, Yinfei Yang

While Ferret seamlessly integrates regional understanding into the Large Language Model (LLM) to facilitate its referring and grounding capability, it has certain limitations: it is constrained by the pre-trained, fixed visual encoder and fails to perform well on broader tasks.

Language Modelling Large Language Model +1

Bringing Back the Context: Camera Trap Species Identification as Link Prediction on Multimodal Knowledge Graphs

no code implementations 31 Dec 2023 Vardaan Pahuja, Weidi Luo, Yu Gu, Cheng-Hao Tu, Hong-You Chen, Tanya Berger-Wolf, Charles Stewart, Song Gao, Wei-Lun Chao, Yu Su

In this work, we leverage the structured context associated with the camera trap images to improve out-of-distribution generalization for the task of species identification in camera traps.

Knowledge Graphs Link Prediction +1

Learning Fractals by Gradient Descent

1 code implementation 14 Mar 2023 Cheng-Hao Tu, Hong-You Chen, David Carlyn, Wei-Lun Chao

Fractals are geometric shapes that can display complex and self-similar patterns found in nature (e.g., clouds and plants).

Making Batch Normalization Great in Federated Deep Learning

no code implementations 12 Mar 2023 Jike Zhong, Hong-You Chen, Wei-Lun Chao

We reinvestigate the factors believed to cause batch normalization's degraded performance in federated learning, including the mismatch of BN statistics across clients and the deviation of gradients during local training.

Federated Learning
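
To make the "mismatch of BN statistics across clients" concrete, here is a minimal NumPy sketch with made-up client distributions (not code from the paper): when clients are non-IID, averaging per-client BN variances differs sharply from the variance of the pooled, centralized data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-IID clients: each sees activations drawn from a different distribution.
client_a = rng.normal(loc=-2.0, scale=0.5, size=(1000, 8))
client_b = rng.normal(loc=+2.0, scale=0.5, size=(1000, 8))

# BN statistics each client would estimate locally.
var_a = client_a.var(axis=0)
var_b = client_b.var(axis=0)

# Naively averaging the clients' statistics (as plain parameter averaging does) ...
avg_var = (var_a + var_b) / 2

# ... does not match the statistics of the pooled, centralized data.
pooled_var = np.concatenate([client_a, client_b]).var(axis=0)
print("averaged local variance:", avg_var.mean())    # ~0.25
print("pooled variance        :", pooled_var.mean()) # ~4.25, dominated by the gap between client means
```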

Train-Once-for-All Personalization

no code implementations CVPR 2023 Hong-You Chen, Yandong Li, Yin Cui, Mingda Zhang, Wei-Lun Chao, Li Zhang

We study the problem of how to train a "personalization-friendly" model such that given only the task descriptions, the model can be adapted to different end-users' needs, e.g., for accurately classifying different subsets of objects.

Gradual Domain Adaptation without Indexed Intermediate Domains

1 code implementation NeurIPS 2021 Hong-You Chen, Wei-Lun Chao

This coarse domain sequence then undergoes a fine indexing step via a novel cycle-consistency loss, which encourages the next intermediate domain to preserve sufficient discriminative knowledge of the current intermediate domain.

Unsupervised Domain Adaptation
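
The indexing step is the paper's contribution; the sketch below only illustrates the standard gradual self-training loop that an ordered sequence of intermediate domains feeds into, with a hypothetical scikit-learn classifier standing in for the model and the domain ordering assumed to be given.

```python
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def gradual_self_training(source_X, source_y, intermediate_domains):
    """Standard gradual self-training: walk an ordered list of unlabeled
    intermediate domains, pseudo-labeling each one with the previous model.
    `intermediate_domains` is a list of unlabeled feature matrices, assumed
    to be already ordered (the indexing this paper automates)."""
    model = LogisticRegression(max_iter=1000).fit(source_X, source_y)
    for X_t in intermediate_domains:
        pseudo_y = model.predict(X_t)            # pseudo-label the next domain
        model = clone(model).fit(X_t, pseudo_y)  # retrain on pseudo-labels only
    return model
```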

On the Importance and Applicability of Pre-Training for Federated Learning

1 code implementation 23 Jun 2022 Hong-You Chen, Cheng-Hao Tu, Ziwei Li, Han-Wei Shen, Wei-Lun Chao

To make our findings applicable to situations where pre-trained models are not directly available, we explore pre-training with synthetic data or even with clients' data in a decentralized manner, and find that both can already improve FL notably.

Federated Learning

On Bridging Generic and Personalized Federated Learning for Image Classification

3 code implementations ICLR 2022 Hong-You Chen, Wei-Lun Chao

On the one hand, we introduce a family of losses that are robust to non-identical class distributions, enabling clients to train a generic predictor with a consistent objective across them.

Classification Image Classification +1
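
One representative loss that accounts for a client's non-identical class distribution is the balanced-softmax (logit-adjusted) cross-entropy, sketched below in PyTorch. It is shown only as an illustration of the idea, not necessarily the exact family introduced in the paper; the class counts in the usage example are made up.

```python
import torch
import torch.nn.functional as F

def balanced_softmax_ce(logits, labels, class_counts):
    """Cross-entropy with logits shifted by the log of the local class prior,
    compensating for a client's skewed label distribution."""
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted, labels)

# Hypothetical usage on one client with a skewed local label distribution.
logits = torch.randn(4, 10)
labels = torch.tensor([0, 0, 3, 7])
counts = torch.tensor([50, 1, 1, 20, 1, 1, 1, 10, 1, 1])
loss = balanced_softmax_ce(logits, labels, counts)
```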

FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning

2 code implementations ICLR 2021 Hong-You Chen, Wei-Lun Chao

Federated learning aims to collaboratively train a strong global model by accessing users' locally trained models but not their own data.

Bayesian Inference Federated Learning
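
The snippet describes the general federated learning setup; below is a minimal sketch of the standard FedAvg-style weight averaging a server performs in that setup. FedBE itself goes further, treating client models as samples for a Bayesian model ensemble and distilling them into the global model, which this sketch does not implement.

```python
import torch.nn as nn

def fedavg_aggregate(client_state_dicts, client_sizes):
    """Average client model weights, weighted by each client's number of local examples."""
    total = float(sum(client_sizes))
    return {
        key: sum(sd[key] * (n / total) for sd, n in zip(client_state_dicts, client_sizes))
        for key in client_state_dicts[0]
    }

# Hypothetical usage: three clients with identical architectures but different data sizes.
clients = [nn.Linear(4, 2).state_dict() for _ in range(3)]
global_weights = fedavg_aggregate(clients, client_sizes=[100, 50, 25])
```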

Glyph2Vec: Learning Chinese Out-of-Vocabulary Word Embedding from Glyphs

no code implementations ACL 2020 Hong-You Chen, Sz-Han Yu, Shou-De Lin

Chinese NLP applications that rely on large corpora often involve huge vocabularies, many of whose words appear only sparsely in the corpus.

Identifying and Compensating for Feature Deviation in Imbalanced Deep Learning

1 code implementation 6 Jan 2020 Han-Jia Ye, Hong-You Chen, De-Chuan Zhan, Wei-Lun Chao

Classifiers trained with class-imbalanced data are known to perform poorly on test data of the "minor" classes, of which we have insufficient training data.
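
As a point of reference for the problem the snippet describes, here is a minimal PyTorch sketch of the common inverse-class-frequency reweighting baseline for imbalanced training; the class counts are made up, and this is not the feature-deviation compensation the paper itself proposes.

```python
import torch
import torch.nn as nn

# Hypothetical long-tailed class counts (head classes vs. "minor" classes).
class_counts = torch.tensor([5000, 2000, 500, 100, 20], dtype=torch.float)

# Inverse-frequency weights, normalized so the average weight is 1.
weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=weights)  # minor classes contribute more per sample

logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
loss = criterion(logits, labels)
```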

Multiple Text Style Transfer by using Word-level Conditional Generative Adversarial Network with Two-Phase Training

no code implementations IJCNLP 2019 Chih-Te Lai, Yi-Te Hong, Hong-You Chen, Chi-Jen Lu, Shou-De Lin

The objective of non-parallel text style transfer, or controllable text generation, is to alter specific attributes (e.g., sentiment, mood, tense, politeness) of a given text while preserving its remaining attributes and content.

Attribute Generative Adversarial Network +2

DEEP-TRIM: Revisiting L1 Regularization for Connection Pruning of Deep Network

no code implementations ICLR 2019 Chih-Kuan Yeh, Ian E. H. Yen, Hong-You Chen, Chun-Pei Yang, Shou-De Lin, Pradeep Ravikumar

State-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones.
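
The generic recipe the title revisits, training with an L1 penalty and then thresholding small-magnitude connections, can be sketched as follows. The toy model, data, and hyperparameters are made up for illustration and do not reflect DEEP-TRIM's specific analysis.

```python
import torch
import torch.nn as nn

# Toy network and data; the point is the generic L1-then-threshold recipe.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
l1_lambda = 1e-3

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())
    loss.backward()
    opt.step()

# Prune: zero out connections whose magnitude the L1 penalty drove toward zero.
threshold = 1e-3
with torch.no_grad():
    for p in model.parameters():
        p[p.abs() < threshold] = 0.0

zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"fraction of zeroed parameters: {zeros / total:.2%}")
```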
