Search Results for author: Xuhong LI

Found 27 papers, 6 papers with code

A Wideband Distributed Massive MIMO Channel Sounder for Communication and Sensing

no code implementations18 Mar 2024 Michiel Sandra, Christian Nelson, Xuhong LI, Xuesong Cai, Fredrik Tufvesson, Anders J Johansson

The results demonstrate the great potential of the presented sounding system for providing high-quality radio channel measurements, contributing to high-resolution channel estimation, characterization, and active and passive sensing in realistic and dynamic scenarios.

Super-Resolution

A Belief Propagation Algorithm for Multipath-based SLAM with Multiple Map Features: A mmWave MIMO Application

no code implementations15 Mar 2024 Xuhong LI, Xuesong Cai, Erik Leitinger, Fredrik Tufvesson

We develop a Bayesian model for sequential detection and estimation of interacting MF model parameters, MF states, and the mobile agent's state, including position and orientation.

Simultaneous Localization and Mapping

HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization

1 code implementation26 Feb 2024 Qiwei Peng, Yekun Chai, Xuhong LI

These benchmarks have overlooked the vast landscape of massively multilingual NL to multilingual code, leaving a critical gap in the evaluation of multilingual LLMs.

Code Generation

Explanations of Classifiers Enhance Medical Image Segmentation via End-to-end Pre-training

no code implementations16 Jan 2024 Jiamin Chen, Xuhong LI, Yanwu Xu, Mengnan Du, Haoyi Xiong

Based on a large-scale medical image classification dataset, our work collects explanations from well-trained classifiers to generate pseudo labels of segmentation tasks.

Image Classification Image Segmentation +4

Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective

no code implementations9 Jan 2024 Haoyi Xiong, Xuhong LI, Xiaofei Zhang, Jiamin Chen, Xinhao Sun, Yuchen Li, Zeyi Sun, Mengnan Du

Given the complexity and lack of transparency in deep neural networks (DNNs), extensive efforts have been made to make these systems more interpretable or explain their behaviors in accessible terms.

Data Valuation Decision Making +2

CUPre: Cross-domain Unsupervised Pre-training for Few-Shot Cell Segmentation

no code implementations6 Oct 2023 Weibin Liao, Xuhong LI, Qingzhong Wang, Yanwu Xu, Zhaozheng Yin, Haoyi Xiong

While pre-training on object detection tasks, such as Common Objects in Context (COCO) [1], can significantly boost the performance of cell segmentation, it still requires massive numbers of finely annotated cell images [2], with bounding boxes, masks, and cell types for every cell in every image, to fine-tune the pre-trained model.

Cell Segmentation Contrastive Learning +6

MUSCLE: Multi-task Self-supervised Continual Learning to Pre-train Deep Models for X-ray Images of Multiple Body Parts

no code implementations3 Oct 2023 Weibin Liao, Haoyi Xiong, Qingzhong Wang, Yan Mo, Xuhong LI, Yi Liu, Zeyu Chen, Siyu Huang, Dejing Dou

In this work, we study a novel self-supervised pre-training pipeline, namely Multi-task Self-supervised Continual Learning (MUSCLE), for multiple medical imaging tasks, such as classification and segmentation, using X-ray images collected from multiple body parts, including heads, lungs, and bones.

Continual Learning Representation Learning +1

Doubly Stochastic Models: Learning with Unbiased Label Noises and Inference Stability

no code implementations1 Apr 2023 Haoyi Xiong, Xuhong LI, Boyang Yu, Zhanxing Zhu, Dongrui Wu, Dejing Dou

While previous studies primarily focus on the effects of label noise on learning performance, our work investigates the implicit regularization effects of label noise under mini-batch sampling in stochastic gradient descent (SGD), assuming the label noise is unbiased.

Learning from Training Dynamics: Identifying Mislabeled Data Beyond Manually Designed Features

1 code implementation19 Dec 2022 Qingrui Jia, Xuhong LI, Lei Yu, Jiang Bian, Penghao Zhao, Shupeng Li, Haoyi Xiong, Dejing Dou

While mislabeled or ambiguously labeled samples in the training set could negatively affect the performance of deep models, diagnosing the dataset and identifying mislabeled samples helps to improve the generalization power.

High-Resolution Channel Sounding and Parameter Estimation in Multi-Site Cellular Networks

no code implementations17 Nov 2022 Junshi Chen, Russ Whiton, Xuhong LI, Fredrik Tufvesson

Accurate understanding of electromagnetic propagation properties in real environments is necessary for efficient design and deployment of cellular systems.

P2ANet: A Dataset and Benchmark for Dense Action Detection from Table Tennis Match Broadcasting Videos

no code implementations26 Jul 2022 Jiang Bian, Xuhong LI, Tao Wang, Qingzhong Wang, Jun Huang, Chen Liu, Jun Zhao, Feixiang Lu, Dejing Dou, Haoyi Xiong

While deep learning has been widely used for video analytics, such as video classification and action detection, dense action detection with fast-moving subjects from sports videos is still challenging.

Action Detection Action Localization +2

Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models

2 code implementations4 Jul 2022 Xuhong LI, Haoyi Xiong, Yi Liu, Dingfu Zhou, Zeyu Chen, Yaqing Wang, Dejing Dou

Though image classification datasets could provide the backbone networks with rich visual features and discriminative ability, they are incapable of fully pre-training the target model (i.e., backbone + segmentation modules) in an end-to-end manner.

Classification Image Classification +3

Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study

no code implementations2 Sep 2021 Xuhong LI, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou

Existing interpretation algorithms have found that, even when deep models make the same, correct predictions on the same image, they may rely on different sets of input features for classification.

Attribute Image Classification +2

Practical Assessment of Generalization Performance Robustness for Deep Networks via Contrastive Examples

no code implementations20 Jun 2021 Xuanyu Wu, Xuhong LI, Haoyi Xiong, Xiao Zhang, Siyu Huang, Dejing Dou

Incorporating a set of randomized strategies for well-designed data transformations over the training set, ContRE adopts classification errors and Fisher ratios on the generated contrastive examples to assess and analyze the generalization performance of deep models, in complement to a testing set.

Contrastive Learning

From Distributed Machine Learning to Federated Learning: A Survey

no code implementations29 Apr 2021 Ji Liu, Jizhou Huang, Yang Zhou, Xuhong LI, Shilei Ji, Haoyi Xiong, Dejing Dou

Because of laws or regulations, the distributed data and computing resources cannot be directly shared among different regions or organizations for machine learning tasks.

BIG-bench Machine Learning Federated Learning

Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond

1 code implementation19 Mar 2021 Xuhong LI, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou

Then, to understand the interpretation results, we also survey the performance metrics for evaluating interpretation algorithms.

Adversarial Robustness

Implicit Regularization Effects of Unbiased Random Label Noises with SGD

no code implementations1 Jan 2021 Haoyi Xiong, Xuhong LI, Boyang Yu, Dejing Dou, Dongrui Wu, Zhanxing Zhu

Random label noises (or observational noises) widely exist in practical machine learning settings.

Can We Use Gradient Norm as a Measure of Generalization Error for Model Selection in Practice?

no code implementations1 Jan 2021 Haozhe An, Haoyi Xiong, Xuhong LI, Xingjian Li, Dejing Dou, Zhanxing Zhu

The recent theoretical investigation (Li et al., 2020) on the upper bound of generalization error of deep neural networks (DNNs) demonstrates the potential of using the gradient norm as a measure that complements validation accuracy for model selection in practice.

Model Selection

Democratizing Evaluation of Deep Model Interpretability through Consensus

no code implementations1 Jan 2021 Xuhong LI, Haoyi Xiong, Siyu Huang, Shilei Ji, Yanjie Fu, Dejing Dou

Given any task/dataset, Consensus first obtains the interpretation results using existing tools, e.g., LIME (Ribeiro et al., 2016), for every model in the committee, then aggregates the results from the entire committee and approximates the "ground truth" of interpretations through voting.

Feature Importance
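The aggregation-and-voting step described in the Consensus abstract can be sketched roughly as follows. This is a hedged illustration, not the paper's exact procedure: soft voting by averaging per-model feature-importance maps, and scoring each model by its correlation with the consensus; all function names and the averaging choice are assumptions.

```python
import numpy as np

def consensus(importances):
    """Aggregate per-model feature-importance maps (e.g., from LIME)
    into a committee 'ground truth' by averaging (soft voting)."""
    stacked = np.stack(importances)  # shape: (n_models, n_features)
    return stacked.mean(axis=0)

def rank_models(importances, truth):
    """Score each model by the correlation of its importance map
    with the consensus map."""
    return [float(np.corrcoef(m, truth)[0, 1]) for m in importances]

# Toy example: three models' importance maps over three features.
maps = [np.array([1.0, 0.0, 0.0]),
        np.array([0.9, 0.1, 0.0]),
        np.array([0.0, 1.0, 0.0])]
truth = consensus(maps)
scores = rank_models(maps, truth)
```

Models whose maps agree with the committee consensus receive higher scores, which is the basic idea behind using voting to approximate a "ground truth" interpretation.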

Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement

no code implementations16 Oct 2020 Xingjian Li, Di Hu, Xuhong LI, Haoyi Xiong, Zhi Ye, Zhipeng Wang, Chengzhong Xu, Dejing Dou

Fine-tuning deep neural networks pre-trained on large-scale datasets is one of the most practical transfer learning paradigms when only a limited quantity of training samples is available.

Disentanglement Transfer Learning

Representation Transfer by Optimal Transport

no code implementations13 Jul 2020 Xuhong Li, Yves GRANDVALET, Rémi Flamary, Nicolas Courty, Dejing Dou

We use optimal transport to quantify the match between two representations, yielding a distance that embeds some invariances inherent to the representation of deep networks.

Knowledge Distillation Model Compression +1
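The abstract above describes using optimal transport to compare two representations with a distance that is invariant to, e.g., the ordering of units. A toy sketch of that idea follows, using a hard one-to-one assignment (a special case of optimal transport) rather than the paper's full formulation; the function name and brute-force matching are illustrative assumptions, sized for tiny inputs only.

```python
from itertools import permutations
import numpy as np

def representation_distance(A, B):
    """Hard-assignment optimal-transport distance between two small
    sets of feature vectors (rows of A and B). Invariant to row
    ordering: permuting the units of one representation leaves the
    distance unchanged."""
    # Pairwise squared-Euclidean cost between every row of A and B.
    cost = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    n = len(A)
    # Brute-force search over all one-to-one matchings.
    best = min(sum(cost[i, p[i]] for i in range(n))
               for p in permutations(range(n)))
    return best / n
```

For example, a representation and a row-permuted copy of itself are at distance zero, which a naive element-wise distance would not detect.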

Cross-Task Transfer for Geotagged Audiovisual Aerial Scene Recognition

1 code implementation ECCV 2020 Di Hu, Xuhong LI, Lichao Mou, Pu Jin, Dong Chen, Liping Jing, Xiaoxiang Zhu, Dejing Dou

With the help of this dataset, we evaluate three proposed approaches for transferring sound event knowledge to the aerial scene recognition task in a multimodal learning framework, and show the benefit of exploiting audio information for aerial scene recognition.

Scene Recognition

Explicit Inductive Bias for Transfer Learning with Convolutional Networks

3 code implementations ICML 2018 Xuhong Li, Yves GRANDVALET, Franck Davoine

In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch.

Inductive Bias Transfer Learning
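The explicit inductive bias studied in this ICML 2018 paper amounts to regularizing fine-tuned weights toward their pre-trained starting point rather than toward zero (often referred to as L2-SP). A minimal sketch of such a penalty, assuming the weights are given as lists of arrays; the function name and the `alpha` coefficient are illustrative, not taken from the paper's code.

```python
import numpy as np

def l2_sp_penalty(weights, pretrained_weights, alpha=0.1):
    """Penalize the squared distance between current (fine-tuned)
    weights and the pre-trained weights they started from, instead
    of penalizing their distance to zero as plain weight decay does."""
    return alpha * sum(
        np.sum((w - w0) ** 2)
        for w, w0 in zip(weights, pretrained_weights)
    )
```

Added to the task loss during fine-tuning, this term keeps the network close to the pre-trained solution, encoding the prior that the source representation is a good starting point.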

Explicit Induction Bias for Transfer Learning with Convolutional Networks

no code implementations ICLR 2018 Xuhong LI, Yves GRANDVALET, Franck Davoine

In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch.

Transfer Learning

Deep Convolutional Neural Networks for Massive MIMO Fingerprint-Based Positioning

no code implementations21 Aug 2017 Joao Vieira, Erik Leitinger, Muris Sarajlic, Xuhong Li, Fredrik Tufvesson

This paper provides an initial investigation on the application of convolutional neural networks (CNNs) for fingerprint-based positioning using measured massive MIMO channels.
