Search Results for author: Chao Xue

Found 16 papers, 2 papers with code

Domain Adaptation from Generated Multi-Weather Images for Unsupervised Maritime Object Classification

no code implementations • 26 Jan 2025 • Dan Song, Shumeng Huo, Wenhui Li, Lanjun Wang, Chao Xue, An-An Liu

The classification and recognition of maritime objects are crucial for enhancing maritime safety, monitoring, and intelligent sea environment prediction.

Tasks: Domain Adaptation, Object

Modeling All Response Surfaces in One for Conditional Search Spaces

no code implementations • 8 Jan 2025 • Jiaxing Li, Wei Liu, Chao Xue, Yibing Zhan, Xiaoxing Wang, Weifeng Liu, DaCheng Tao

Bayesian Optimization (BO) is a sample-efficient black-box optimizer commonly used in search spaces where hyperparameters are independent.

Tasks: All, AutoML +1
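The contrast between independent and conditional search spaces can be illustrated with a toy example in which the set of active child hyperparameters depends on the value of a parent parameter. This is a minimal sketch with made-up parameter names, not the paper's formulation:

```python
import numpy as np

# Toy conditional search space: which hyperparameters exist depends on the
# parent parameter "model". All names and ranges here are illustrative only.
SPACE = {
    "model": ["mlp", "tree"],
    "mlp": {"hidden_units": (16, 256), "lr": (1e-4, 1e-1)},
    "tree": {"max_depth": (2, 12)},
}

def sample_config(rng):
    """Draw one configuration; child parameters activate conditionally."""
    model = rng.choice(SPACE["model"])
    config = {"model": model}
    for name, (lo, hi) in SPACE[model].items():
        config[name] = float(rng.uniform(lo, hi))
    return config

rng = np.random.default_rng(0)
cfg = sample_config(rng)
```

A standard BO surrogate assumes one fixed-dimensional response surface, which is why conditional structure like this is awkward to model directly.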

Beyond Human Data: Aligning Multimodal Large Language Models by Iterative Self-Evolution

1 code implementation • 20 Dec 2024 • Wentao Tan, Qiong Cao, Yibing Zhan, Chao Xue, Changxing Ding

To address these issues, we propose a novel multimodal self-evolution framework that enables the model to autonomously generate high-quality questions and answers using only unannotated images.

Tasks: Answer Generation, Image Captioning

Simultaneous Computation and Memory Efficient Zeroth-Order Optimizer for Fine-Tuning Large Language Models

no code implementations • 13 Oct 2024 • Fei Wang, Li Shen, Liang Ding, Chao Xue, Ye Liu, Changxing Ding

By revisiting the Memory-efficient ZO (MeZO) optimizer, we discover that the full-parameter perturbation and updating processes account for over 50% of its overall fine-tuning time.

Tasks: SST-2
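The zeroth-order estimator that MeZO builds on can be sketched in a few lines: the gradient is approximated from two forward passes along a random perturbation, and the perturbation is regenerated from a saved RNG seed instead of being stored, which keeps memory at inference level. This is an SPSA-style illustration on a toy problem, not the paper's implementation:

```python
import numpy as np

def zo_step(params, loss_fn, lr, eps, seed):
    """One MeZO-style zeroth-order step: estimate the gradient from two
    forward passes at params +/- eps*z, where z is a random direction."""
    z = np.random.default_rng(seed).standard_normal(params.shape)
    loss_plus = loss_fn(params + eps * z)
    loss_minus = loss_fn(params - eps * z)
    projected_grad = (loss_plus - loss_minus) / (2 * eps)
    # Regenerate z from the same seed rather than keeping it in memory.
    z = np.random.default_rng(seed).standard_normal(params.shape)
    return params - lr * projected_grad * z

# Toy quadratic loss with its minimum at zero.
loss = lambda p: float(np.sum(p ** 2))
p = np.ones(4)
for step in range(500):
    p = zo_step(p, loss, lr=0.05, eps=1e-3, seed=step)
```

The paper's observation is that even in this scheme, perturbing and updating every parameter dominates the per-step cost, motivating a more selective update.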

Poisson Process for Bayesian Optimization

no code implementations • 5 Feb 2024 • Xiaoxing Wang, Jiaxing Li, Chao Xue, Wei Liu, Weifeng Liu, Xiaokang Yang, Junchi Yan, DaCheng Tao

Bayesian Optimization (BO) is a sample-efficient black-box optimizer, and extensive methods have been proposed to build the absolute function response of the black-box function through a probabilistic surrogate model, including the Tree-structured Parzen Estimator (TPE), random forest (SMAC), and Gaussian process (GP).

Tasks: Bayesian Optimization, Hyperparameter Optimization +2
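The surrogate-plus-acquisition loop shared by these methods can be sketched with a GP surrogate and Expected Improvement. This is a minimal 1-D illustration using only numpy, showing the generic BO loop rather than the Poisson-process surrogate the paper proposes; all function names and settings are our own:

```python
import numpy as np
from math import erf, sqrt, pi

def rbf_kernel(A, B, ls=0.3):
    # Squared-exponential kernel between 1-D input arrays.
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-4):
    # GP posterior mean and std at query points Xs, given data (X, y).
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    cov = rbf_kernel(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    return mu, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

def expected_improvement(mu, sigma, best):
    # EI for minimization: expected amount by which f beats the incumbent.
    z = (best - mu) / sigma
    Phi = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (best - mu) * Phi + sigma * phi

def bayes_opt(f, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, 3)                     # random initial design
    y = np.array([f(x) for x in X])
    grid = np.linspace(0, 1, 200)                # candidate points
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

f = lambda x: (x - 0.7) ** 2                      # toy black-box objective
x_best, y_best = bayes_opt(f)
```

TPE and SMAC replace the GP surrogate with density ratios and random forests respectively, but the outer loop has the same shape.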

Dual Path Modeling for Semantic Matching by Perceiving Subtle Conflicts

no code implementations • 24 Feb 2023 • Chao Xue, Di Liang, Sirui Wang, Wei Wu, Jing Zhang

To alleviate this problem, we propose a novel Dual Path Modeling Framework to enhance the model's ability to perceive subtle differences in sentence pairs by separately modeling affinity and difference semantics.

Tasks: Sentence

Deep Transformers Thirst for Comprehensive-Frequency Data

1 code implementation • 14 Mar 2022 • Rui Xia, Chao Xue, Boyu Deng, Fang Wang, JingChao Wang

We study an NLP model called LSRA, which introduces an inductive bias (IB) with a pyramid-free structure.

Tasks: Inductive Bias

Universal Semi-Supervised Learning

no code implementations • NeurIPS 2021 • Zhuo Huang, Chao Xue, Bo Han, Jian Yang, Chen Gong

Universal Semi-Supervised Learning (UniSSL) aims to solve the open-set problem where both the class distribution (i.e., class set) and the feature distribution (i.e., feature domain) differ between the labeled and unlabeled datasets.

Tasks: Domain Adaptation

Automatic low-bit hybrid quantization of neural networks through meta learning

no code implementations • 24 Apr 2020 • Tao Wang, Junsong Wang, Chang Xu, Chao Xue

With the best quantization policy found by the search, we subsequently retrain or fine-tune the quantized target network to further improve its performance.

Tasks: Meta-Learning, Quantization +2
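What a hybrid (mixed-precision) quantization policy looks like can be illustrated with plain uniform quantization: each layer is assigned its own bit width, with sensitive layers kept at higher precision. This is a toy sketch under our own naming; in the paper the per-layer policy is found by meta-learned search, whereas here it is simply hard-coded:

```python
import numpy as np

def quantize_uniform(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits.
    Returns the dequantized weights and the scale factor."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale, scale

def hybrid_quantize(layers, policy):
    """Apply a per-layer bit-width policy (the object being searched)."""
    return {name: quantize_uniform(w, policy[name])[0]
            for name, w in layers.items()}

rng = np.random.default_rng(0)
layers = {"conv1": rng.standard_normal((8, 8)),
          "fc": rng.standard_normal((16, 4))}
policy = {"conv1": 8, "fc": 4}  # hybrid: the sensitive layer gets more bits
qlayers = hybrid_quantize(layers, policy)
err8 = np.abs(qlayers["conv1"] - layers["conv1"]).max()
err4 = np.abs(qlayers["fc"] - layers["fc"]).max()
```

The 8-bit layer incurs a much smaller reconstruction error than the 4-bit one, which is the trade-off the policy search navigates against model size.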

MetAdapt: Meta-Learned Task-Adaptive Architecture for Few-Shot Classification

no code implementations • 1 Dec 2019 • Sivan Doveh, Eli Schwartz, Chao Xue, Rogerio Feris, Alex Bronstein, Raja Giryes, Leonid Karlinsky

In this work, we propose to employ tools inspired by the Differentiable Neural Architecture Search (D-NAS) literature in order to optimize the architecture for FSL without over-fitting.

Tasks: Classification, Few-Shot Learning +2

Transferable AutoML by Model Sharing Over Grouped Datasets

no code implementations • CVPR 2019 • Chao Xue, Junchi Yan, Rong Yan, Stephen M. Chu, Yonggang Hu, Yonghua Lin

This paper presents a so-called transferable AutoML approach that leverages previously trained models to speed up the search process for new tasks and datasets.

Tasks: AutoML, BIG-bench Machine Learning +3
