Search Results for author: Xue Jiang

Found 25 papers, 10 papers with code

Negative Label Guided OOD Detection with Pretrained Vision-Language Models

1 code implementation • 29 Mar 2024 • Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, Bo Han

In this paper, we propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases.

Out of Distribution (OOD) Detection
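
To make the idea concrete, here is a minimal, hypothetical sketch of a NegLabel-style score: it assumes CLIP-like image/text embeddings and treats the share of similarity mass falling on the ID labels (versus the mined negative labels) as the ID-ness score. The function and variable names are illustrative, not taken from the paper.

import numpy as np

def neglabel_score(image_feat, id_text_feats, neg_text_feats, temperature=0.01):
    """Fraction of softmax similarity mass assigned to the ID labels."""
    sims_id = id_text_feats @ image_feat / temperature     # (num_id_labels,)
    sims_neg = neg_text_feats @ image_feat / temperature   # (num_negative_labels,)
    all_sims = np.concatenate([sims_id, sims_neg])
    probs = np.exp(all_sims - all_sims.max())
    probs /= probs.sum()
    return float(probs[: len(sims_id)].sum())   # high -> likely in-distribution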

SEED: Customize Large Language Models with Sample-Efficient Adaptation for Code Generation

no code implementations • 29 Feb 2024 • Xue Jiang, Yihong Dong, Zhi Jin, Ge Li

Specifically, SEED involves identifying error code generated by LLMs, employing Self-revise for code revision, optimizing the model with revised code, and iteratively adapting the process for continuous improvement.

Code Generation
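
A rough sketch of the adaptation loop described in the snippet, under the assumption that it alternates generation, test-based error identification, self-revision, and fine-tuning. All helper callables (generate, run_tests, self_revise, finetune) are hypothetical placeholders supplied by the caller, not SEED's actual interface.

def seed_adapt(model, tasks, generate, run_tests, self_revise, finetune, num_rounds=3):
    """Hypothetical SEED-style loop: collect self-revised fixes for failing
    generations and fine-tune on them, repeating for several rounds."""
    for _ in range(num_rounds):
        revised_pairs = []
        for task in tasks:
            code = generate(model, task)              # draft solution from the LLM
            if run_tests(task, code):                 # already correct, nothing to learn
                continue
            fixed = self_revise(model, task, code)    # model revises its own error code
            if run_tests(task, fixed):
                revised_pairs.append((task, fixed))   # keep only verified revisions
        model = finetune(model, revised_pairs)        # optimize the model on revised code
    return model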

Generalization or Memorization: Data Contamination and Trustworthy Evaluation for Large Language Models

1 code implementation • 24 Feb 2024 • Yihong Dong, Xue Jiang, Huanyu Liu, Zhi Jin, Ge Li

CDD requires only the sampled texts to detect data contamination, identifying the peakedness of the LLM's output distribution.

Memorization
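
One plausible (assumed) reading of the "peakedness" signal: sample several completions of the same prompt and check how tightly they cluster around the greedy output. The similarity measure and threshold below are illustrative choices, not the paper's.

from difflib import SequenceMatcher

def peakedness(greedy_output, sampled_outputs, sim_threshold=0.95):
    """Fraction of sampled completions that nearly reproduce the greedy one."""
    close = sum(
        SequenceMatcher(None, greedy_output, s).ratio() >= sim_threshold
        for s in sampled_outputs
    )
    return close / len(sampled_outputs)   # near 1.0 hints at memorized / contaminated text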

Exact Tensor Completion Powered by Arbitrary Linear Transforms

no code implementations • 2 Feb 2024 • Li Ge, Xue Jiang, Lin Chen

In this work, we study a tensor completion problem that aims to exactly recover a tensor from its partial observations.

PACE: Improving Prompt with Actor-Critic Editing for Large Language Model

no code implementations • 19 Aug 2023 • Yihong Dong, Kangcheng Luo, Xue Jiang, Zhi Jin, Ge Li

Large language models (LLMs) have showcased remarkable potential across various tasks by conditioning on prompts.

Language Modelling · Large Language Model

Towards Better Query Classification with Multi-Expert Knowledge Condensation in JD Ads Search

no code implementations • 2 Aug 2023 • Kun-Peng Ning, Ming Pang, Zheng Fang, Xue Jiang, Xi-Wei Zhao, Chang-Ping Peng, Zhan-Gang Lin, Jing-He Hu, Jing-Ping Shao

To overcome this challenge, in this paper we propose knowledge condensation (KC), a simple yet effective knowledge distillation framework that boosts the classification performance of the online FastText model under strict low-latency constraints.

Knowledge Distillation
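
For orientation only, the following is a generic soft-label distillation objective of the kind such teacher-to-student setups typically use; it is not the paper's specific knowledge condensation procedure.

import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between softened teacher and student output distributions."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean() * T * T)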

Detecting Out-of-distribution Data through In-distribution Class Prior

1 code implementation • ICML 2023 • Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, Bo Han

In this paper, we show that this assumption makes the above methods unreliable when the ID model is trained with class-imbalanced data. Fortunately, by analyzing the causal relations between ID/OOD classes and features, we identify several common scenarios in which the OOD-to-ID probabilities should follow the ID-class-prior distribution, and propose two strategies to modify existing inference-time detection methods: 1) replace the uniform distribution with the ID-class-prior distribution wherever it is used explicitly; 2) otherwise, reweight their scores according to the similarity between the ID-class-prior distribution and the softmax outputs of the pre-trained model.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection
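
A minimal sketch of the second strategy quoted above: reweight an existing OOD score by how well the model's softmax output agrees with the ID class prior. Using a plain inner product as the "similarity" is an assumption made for illustration.

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def prior_reweighted_score(base_score, logits, class_prior):
    """Reweight an existing detection score by agreement with the ID class prior."""
    probs = softmax(logits)                  # softmax output of the pre-trained model
    similarity = float(probs @ class_prior)  # assumed similarity: simple inner product
    return base_score * similarity           # higher -> more ID-like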

On the Stability and Generalization of Triplet Learning

no code implementations • 20 Feb 2023 • Jun Chen, Hong Chen, Xue Jiang, Bin Gu, Weifu Li, Tieliang Gong, Feng Zheng

Triplet learning, i.e., learning from triplet data, has attracted much attention in computer vision tasks with an extremely large number of categories, e.g., face recognition and person re-identification.

Face Recognition · Metric Learning +1

Disentangled Feature Learning for Real-Time Neural Speech Coding

no code implementations • 22 Nov 2022 • Xue Jiang, Xiulian Peng, Yuan Zhang, Yan Lu

Recently, end-to-end neural audio/speech coding has shown great potential to outperform traditional signal-analysis-based audio codecs.

Disentanglement · Voice Conversion

CodePAD: Sequence-based Code Generation with Pushdown Automaton

1 code implementation • 2 Nov 2022 • Yihong Dong, Xue Jiang, Yuchen Liu, Ge Li, Zhi Jin

CodePAD can leverage existing sequence-based models, and we show that it achieves 100% grammatical correctness on these benchmark datasets.

Code Generation · Text Generation
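
A hypothetical sketch of grammar-constrained decoding in the spirit of the snippet: a pushdown automaton tells the decoder which next tokens keep the partial program grammatical. The callables next_token_logits and allowed_tokens are assumed interfaces, not CodePAD's actual API.

import numpy as np

def constrained_decode(next_token_logits, allowed_tokens, eos_id, max_len=256):
    """Greedy decoding restricted to tokens the automaton currently accepts."""
    tokens = []
    for _ in range(max_len):
        logits = np.asarray(next_token_logits(tokens), dtype=float)  # (vocab_size,)
        mask = np.full_like(logits, -np.inf)
        mask[list(allowed_tokens(tokens))] = 0.0   # keep only grammatical continuations
        next_id = int(np.argmax(logits + mask))
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens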

Wideband Channel Estimation for mmWave MIMO Systems with Beam Squint

no code implementations • 6 Sep 2022 • Li Ge, Xue Jiang, Lin Chen, Qibo Qin, Xingzhao Liu

As antenna arrays scale up and bandwidth increases, many existing narrowband channel estimation methods that ignore the effect of beam squint may suffer severe performance degradation in wideband millimeter-wave (mmWave) communication systems.

Antecedent Predictions Are More Important Than You Think: An Effective Method for Tree-Based Code Generation

no code implementations • 22 Aug 2022 • Yihong Dong, Ge Li, Xue Jiang, Zhi Jin

To evaluate the effectiveness of our proposed loss, we implement and train an Antecedent Prioritized Tree-based code generation model called APT.

Code Generation · Position
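
As a purely illustrative guess at what "prioritizing antecedent predictions" could look like in a per-step loss, the sketch below down-weights later decoding steps; the actual loss used to train APT may differ.

import numpy as np

def antecedent_weighted_loss(step_nll, decay=0.9):
    """Per-step negative log-likelihoods weighted so earlier (antecedent)
    predictions count more than later ones."""
    step_nll = np.asarray(step_nll, dtype=float)
    weights = decay ** np.arange(len(step_nll))   # earlier steps get larger weights
    weights /= weights.sum()
    return float((weights * step_nll).sum())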

Latent-Domain Predictive Neural Speech Coding

no code implementations • 18 Jul 2022 • Xue Jiang, Xiulian Peng, Huaying Xue, Yuan Zhang, Yan Lu

Neural audio/speech coding has recently demonstrated its capability to deliver high quality at much lower bitrates than traditional methods.

Quantization

Cross-Scale Vector Quantization for Scalable Neural Speech Coding

no code implementations • 7 Jul 2022 • Xue Jiang, Xiulian Peng, Huaying Xue, Yuan Zhang, Yan Lu

In this paper, we introduce a cross-scale scalable vector quantization scheme (CSVQ), in which multi-scale features are encoded progressively with stepwise feature fusion and refinement.

Quantization
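
A toy, coarse-to-fine sketch of progressive quantization with residual refinement, meant only to convey the general multi-stage idea in the snippet; the real CSVQ operates on learned multi-scale features with fusion and refinement modules.

import numpy as np

def vq(x, codebook):
    """Nearest-neighbour quantization of row vectors x against a codebook."""
    idx = np.argmin(((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)
    return codebook[idx], idx

def progressive_encode(features, codebooks):
    """Each stage quantizes the residual left by the previous (coarser) stage."""
    residual, indices = features, []
    for cb in codebooks:
        q, idx = vq(residual, cb)
        indices.append(idx)          # one index stream per scale / bitrate step
        residual = residual - q      # finer stages refine the remaining error
    return indices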

PyTSK: A Python Toolbox for TSK Fuzzy Systems

1 code implementation • 7 Jun 2022 • Yuqi Cui, Dongrui Wu, Xue Jiang, Yifan Xu

This paper presents PyTSK, a Python toolbox for developing Takagi-Sugeno-Kang (TSK) fuzzy systems.

Clustering

MMRotate: A Rotated Object Detection Benchmark using PyTorch

1 code implementation • 28 Apr 2022 • Yue Zhou, Xue Yang, Gefan Zhang, Jiabao Wang, Yanyi Liu, Liping Hou, Xue Jiang, Xingzhao Liu, Junchi Yan, Chengqi Lyu, Wenwei Zhang, Kai Chen

We present an open-source toolbox, named MMRotate, which provides a coherent framework for training, inference, and evaluation of popular deep-learning-based rotated object detection algorithms.

Object · object-detection +1

Cross-Image Relational Knowledge Distillation for Semantic Segmentation

1 code implementation • CVPR 2022 • Chuanguang Yang, Helong Zhou, Zhulin An, Xue Jiang, Yongjun Xu, Qian Zhang

Current Knowledge Distillation (KD) methods for semantic segmentation often guide the student to mimic the teacher's structured information generated from individual data samples.

Knowledge Distillation · Segmentation +1

End-to-End Neural Speech Coding for Real-Time Communications

no code implementations • 24 Jan 2022 • Xue Jiang, Xiulian Peng, Chengyu Zheng, Huaying Xue, Yuan Zhang, Yan Lu

Deep-learning-based methods have shown advantages over traditional ones in audio coding, but limited attention has been paid to real-time communications (RTC).

Packet Loss Concealment

Self-supervised Contrastive Learning for EEG-based Sleep Staging

1 code implementation • 16 Sep 2021 • Xue Jiang, Jianhui Zhao, Bo Du, Zhiyong Yuan

Specifically, the network's performance depends on the choice of transformations and the amount of unlabeled data used in the self-supervised training process.

Contrastive Learning · EEG +2
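
For context, a generic NT-Xent-style contrastive loss over two augmented views of the same EEG epochs; the specific transformations studied in the paper are not reproduced here, and this sketch is not the paper's exact objective.

import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss between two augmented views z1, z2 (matching row order)."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                  # never contrast a sample with itself
    n = len(z1)
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(2 * n), positives].mean())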

TreeBERT: A Tree-Based Pre-Trained Model for Programming Language

1 code implementation • 26 May 2021 • Xue Jiang, Zhuoran Zheng, Chen Lyu, Liang Li, Lei Lyu

In this paper, we present TreeBERT, a tree-based pre-trained model for improving programming language-oriented generation tasks.

Code Summarization · Language Modelling +1

MRNet: a Multi-scale Residual Network for EEG-based Sleep Staging

no code implementations • 7 Jan 2021 • Xue Jiang

To address this problem, we propose a new framework, called MRNet, for data-driven sleep staging by integrating a multi-scale feature fusion model and a Markov-based sequential correction algorithm.

EEG · EEG based sleep staging

Transfer Learning for Motor Imagery Based Brain-Computer Interfaces: A Complete Pipeline

1 code implementation • 3 Jul 2020 • Dongrui Wu, Xue Jiang, Ruimin Peng, Wanzeng Kong, Jian Huang, Zhigang Zeng

Transfer learning (TL) has been widely used in motor imagery (MI) based brain-computer interfaces (BCIs) to reduce the calibration effort for a new subject, and has demonstrated promising performance.

Classification · EEG +4

Pool-Based Unsupervised Active Learning for Regression Using Iterative Representativeness-Diversity Maximization (iRDM)

no code implementations • 17 Mar 2020 • Ziang Liu, Xue Jiang, Hanbin Luo, Weili Fang, Jiajing Liu, Dongrui Wu

Active learning (AL) selects the most beneficial unlabeled samples to label, so that a better machine learning model can be trained from the same number of labeled samples.

Active Learning · regression

Active Learning for Black-Box Adversarial Attacks in EEG-Based Brain-Computer Interfaces

no code implementations • 7 Nov 2019 • Xue Jiang, Xiao Zhang, Dongrui Wu

Learning a good substitute model is critical to the success of these attacks, but it requires a large number of queries to the target model.

Active Learning · EEG
