Search Results for author: Chao Xue

Found 12 papers, 1 paper with code

Question Calibration and Multi-Hop Modeling for Temporal Question Answering

no code implementations20 Feb 2024 Chao Xue, Di Liang, Pengfei Wang, Jing Zhang

In the real world, many facts contained in KGs are time-constrained, and thus temporal KGQA has received increasing attention.

Knowledge Graphs Multi-hop Question Answering +1

Poisson Process for Bayesian Optimization

no code implementations5 Feb 2024 Xiaoxing Wang, Jiaxing Li, Chao Xue, Wei Liu, Weifeng Liu, Xiaokang Yang, Junchi Yan, DaCheng Tao

Bayesian Optimization (BO) is a sample-efficient black-box optimizer, and extensive methods have been proposed to build the absolute function response of the black-box function through a probabilistic surrogate model, including the Tree-structured Parzen Estimator (TPE), random forest (SMAC), and Gaussian process (GP).

Bayesian Optimization Hyperparameter Optimization +2
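The entry above describes the standard BO loop with a probabilistic surrogate such as a Gaussian process. A minimal, self-contained sketch of that generic loop (not the Poisson-process surrogate proposed in the paper), with an illustrative 1-D objective and expected-improvement acquisition:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Illustrative 1-D black-box function to minimize.
    return np.sin(3 * x) + 0.1 * x ** 2

def expected_improvement(mu, sigma, best):
    # EI acquisition for minimization.
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))           # initial design points
y = np.array([objective(x[0]) for x in X])

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)                               # fit the GP surrogate
    cand = rng.uniform(-2, 2, size=(256, 1))   # random candidate pool
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best x:", X[np.argmin(y)], "best y:", y.min())
```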

Dual Path Modeling for Semantic Matching by Perceiving Subtle Conflicts

no code implementations24 Feb 2023 Chao Xue, Di Liang, Sirui Wang, Wei Wu, Jing Zhang

To alleviate this problem, we propose a novel Dual Path Modeling Framework to enhance the model's ability to perceive subtle differences in sentence pairs by separately modeling affinity and difference semantics.

Sentence
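The abstract above separates affinity and difference semantics of a sentence pair into two paths. A hedged sketch of that general split, using common pairwise features (element-wise product for affinity, absolute difference for conflicts) over pre-computed sentence embeddings; this illustrates the idea rather than the paper's framework, and the dimensions are hypothetical:

```python
import torch
import torch.nn as nn

class DualPathMatcher(nn.Module):
    """Scores a sentence pair via separate affinity and difference paths."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.affinity_path = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.difference_path = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.classifier = nn.Linear(2 * dim, 2)  # match / no-match

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        affinity = self.affinity_path(a * b)               # shared semantics
        difference = self.difference_path((a - b).abs())   # subtle conflicts
        return self.classifier(torch.cat([affinity, difference], dim=-1))

# a and b stand in for pre-computed sentence embeddings.
a, b = torch.randn(4, 128), torch.randn(4, 128)
print(DualPathMatcher()(a, b).shape)  # torch.Size([4, 2])
```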

Deep Transformers Thirst for Comprehensive-Frequency Data

1 code implementation14 Mar 2022 Rui Xia, Chao Xue, Boyu Deng, Fang Wang, JingChao Wang

We study an NLP model called LSRA, which introduces inductive bias (IB) with a pyramid-free structure.

Inductive Bias

Universal Semi-Supervised Learning

no code implementations NeurIPS 2021 Zhuo Huang, Chao Xue, Bo Han, Jian Yang, Chen Gong

Universal Semi-Supervised Learning (UniSSL) aims to solve the open-set problem where both the class distribution (i.e., class set) and the feature distribution (i.e., feature domain) differ between the labeled and unlabeled datasets.

Domain Adaptation

Automatic low-bit hybrid quantization of neural networks through meta learning

no code implementations24 Apr 2020 Tao Wang, Junsong Wang, Chang Xu, Chao Xue

With the best searched quantization policy, we subsequently retrain or finetune the quantized target network to further improve its performance.

Meta-Learning Quantization +1
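The excerpt above applies a searched per-layer bit-width policy and then finetunes the quantized network. A hedged sketch of that general workflow (uniform fake quantization per layer followed by finetuning), not the paper's meta-learning search; the policy and model here are hypothetical placeholders:

```python
import torch
import torch.nn as nn

def fake_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    # Symmetric uniform quantization to `bits` bits, kept in float
    # ("fake quantization") so the network can still be finetuned.
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.clamp(torch.round(w / scale), -qmax, qmax) * scale

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Hypothetical per-layer bit-width policy, e.g. produced by a search procedure.
policy = {0: 4, 2: 8}

with torch.no_grad():
    for idx, bits in policy.items():
        model[idx].weight.copy_(fake_quantize(model[idx].weight, bits))

# Finetune the quantized network on task data (dummy batch shown here).
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```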

MetAdapt: Meta-Learned Task-Adaptive Architecture for Few-Shot Classification

no code implementations1 Dec 2019 Sivan Doveh, Eli Schwartz, Chao Xue, Rogerio Feris, Alex Bronstein, Raja Giryes, Leonid Karlinsky

In this work, we propose to employ tools inspired by the Differentiable Neural Architecture Search (D-NAS) literature in order to optimize the architecture for FSL without over-fitting.

Classification Few-Shot Learning +2
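The entry above draws on differentiable NAS (D-NAS) tools. A common mechanism in that literature (e.g. DARTS) is a softmax-weighted mixture over candidate operations with learnable architecture parameters; the sketch below shows that generic mixed operation, not the MetAdapt method itself:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted combination of candidate ops, DARTS-style."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        # Architecture parameters, optimized alongside the network weights.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

x = torch.randn(2, 16, 8, 8)
print(MixedOp(16)(x).shape)  # torch.Size([2, 16, 8, 8])
```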

Transferable AutoML by Model Sharing Over Grouped Datasets

no code implementations CVPR 2019 Chao Xue, Junchi Yan, Rong Yan, Stephen M. Chu, Yonggang Hu, Yonghua Lin

This paper presents a so-called transferable AutoML approach that leverages previously trained models to speed up the search process for new tasks and datasets.

AutoML BIG-bench Machine Learning +2
