no code implementations • 10 May 2023 • Thomas Wilding, Benjamin J. B. Deutschmann, Christian Nelson, Xuhong LI, Fredrik Tufvesson, Klaus Witrisal
Based on a geometric model of the measurement environment, we analyze the visibility of specular components.
no code implementations • 1 Apr 2023 • Haoyi Xiong, Xuhong LI, Boyang Yu, Zhanxing Zhu, Dongrui Wu, Dejing Dou
While previous studies primarily focus on the effects of label noise on learning performance, our work investigates the implicit regularization effects of label noise under the mini-batch sampling settings of stochastic gradient descent (SGD), assuming the label noise is unbiased.
1 code implementation • 19 Dec 2022 • Qingrui Jia, Xuhong LI, Lei Yu, Jiang Bian, Penghao Zhao, Shupeng Li, Haoyi Xiong, Dejing Dou
While mislabeled or ambiguously-labeled samples in the training set could negatively affect the performance of deep models, diagnosing the dataset and identifying mislabeled samples helps to improve generalization.
no code implementations • 17 Nov 2022 • Junshi Chen, Russ Whiton, Xuhong LI, Fredrik Tufvesson
Accurate understanding of electromagnetic propagation properties in real environments is necessary for efficient design and deployment of cellular systems.
no code implementations • 26 Jul 2022 • Jiang Bian, Qingzhong Wang, Haoyi Xiong, Jun Huang, Chen Liu, Xuhong LI, Jun Cheng, Jun Zhao, Feixiang Lu, Dejing Dou
While deep learning has been widely used for video analytics, such as video classification and action detection, dense action detection with fast-moving subjects from sports videos is still challenging.
1 code implementation • 4 Jul 2022 • Xuhong LI, Haoyi Xiong, Yi Liu, Dingfu Zhou, Zeyu Chen, Yaqing Wang, Dejing Dou
Though image classification datasets could provide the backbone networks with rich visual features and discriminative ability, they are incapable of fully pre-training the target model (i.e., backbone+segmentation modules) in an end-to-end manner.
no code implementations • 2 Sep 2021 • Xuhong LI, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou
Existing interpretation algorithms have found that, even when deep models make the same correct predictions on the same image, they might rely on different sets of input features for classification.
no code implementations • 20 Jun 2021 • Xuanyu Wu, Xuhong LI, Haoyi Xiong, Xiao Zhang, Siyu Huang, Dejing Dou
Incorporating a set of randomized strategies for well-designed data transformations over the training set, ContRE adopts classification errors and Fisher ratios on the generated contrastive examples to assess and analyze the generalization performance of deep models, complementing a testing set.
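The Fisher ratio mentioned above measures how well features separate classes. The exact formulation used by ContRE is not given here; the sketch below uses one common per-feature definition (squared mean difference over summed within-group variances), with all names and data being illustrative assumptions.

```python
import numpy as np

def fisher_ratio(feats_a, feats_b):
    """Per-feature Fisher ratio between two groups of feature vectors:
    squared difference of group means divided by the sum of within-group
    variances. Higher values mean better class separation. (A common
    textbook definition; the paper's exact variant may differ.)"""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    var_a, var_b = feats_a.var(axis=0), feats_b.var(axis=0)
    return (mu_a - mu_b) ** 2 / (var_a + var_b + 1e-12)

# Toy illustration: well-separated groups score higher than overlapping ones.
rng = np.random.default_rng(0)
sep = fisher_ratio(rng.normal(0, 1, (100, 4)), rng.normal(5, 1, (100, 4)))
mix = fisher_ratio(rng.normal(0, 1, (100, 4)), rng.normal(0, 1, (100, 4)))
print(sep.mean() > mix.mean())
```

In a setting like ContRE's, such a ratio would be computed on features of the generated contrastive examples, grouped by class, as one signal of how well the model's representation generalizes.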
no code implementations • 29 Apr 2021 • Ji Liu, Jizhou Huang, Yang Zhou, Xuhong LI, Shilei Ji, Haoyi Xiong, Dejing Dou
Because of laws or regulations, the distributed data and computing resources cannot be directly shared among different regions or organizations for machine learning tasks.
1 code implementation • 19 Mar 2021 • Xuhong LI, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou
Then, to understand the interpretation results, we also survey the performance metrics for evaluating interpretation algorithms.
no code implementations • 1 Jan 2021 • Haoyi Xiong, Xuhong LI, Boyang Yu, Dejing Dou, Dongrui Wu, Zhanxing Zhu
Random label noise (or observational noise) widely exists in practical machine-learning settings.
no code implementations • 1 Jan 2021 • Haozhe An, Haoyi Xiong, Xuhong LI, Xingjian Li, Dejing Dou, Zhanxing Zhu
The recent theoretical investigation (Li et al., 2020) on the upper bound of generalization error of deep neural networks (DNNs) demonstrates the potential of using the gradient norm as a measure that complements validation accuracy for model selection in practice.
no code implementations • 1 Jan 2021 • Xuhong LI, Haoyi Xiong, Siyu Huang, Shilei Ji, Yanjie Fu, Dejing Dou
Given any task/dataset, Consensus first obtains the interpretation results using existing tools, e.g., LIME (Ribeiro et al., 2016), for every model in the committee, then aggregates the results from the entire committee and approximates the “ground truth” of interpretations through voting.
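The aggregation step described above can be sketched as a simple vote over per-model saliency maps. This is a hypothetical illustration of committee voting, not the paper's actual implementation; the top-quartile threshold and all names are assumptions.

```python
import numpy as np

def consensus_vote(interpretations):
    """Given one saliency map per committee model (all the same shape),
    approximate a 'ground truth' interpretation by majority voting:
    each model votes for the pixels it ranks in its top quartile, and the
    result is the fraction of the committee agreeing on each pixel."""
    votes = np.zeros_like(interpretations[0], dtype=float)
    for sal in interpretations:
        thresh = np.quantile(sal, 0.75)  # each model's own top-25% cutoff
        votes += (sal >= thresh)
    return votes / len(interpretations)

# Toy committee of five models interpreting an 8x8 input.
rng = np.random.default_rng(1)
committee = [rng.random((8, 8)) for _ in range(5)]
agreement = consensus_vote(committee)
print(agreement.shape)  # per-pixel agreement in [0, 1]
```

In the paper's setting the per-model maps would come from tools such as LIME rather than random arrays, and the voting scheme may be more refined than a fixed quantile cutoff.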
no code implementations • 16 Oct 2020 • Xingjian Li, Di Hu, Xuhong LI, Haoyi Xiong, Zhi Ye, Zhipeng Wang, Chengzhong Xu, Dejing Dou
Fine-tuning deep neural networks pre-trained on large-scale datasets is one of the most practical transfer learning paradigms given a limited quantity of training samples.
no code implementations • 13 Jul 2020 • Xuhong Li, Yves GRANDVALET, Rémi Flamary, Nicolas Courty, Dejing Dou
We use optimal transport to quantify the match between two representations, yielding a distance that embeds some invariances inherent to the representation of deep networks.
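One invariance such an optimal-transport distance naturally embeds is insensitivity to the ordering of samples in a representation. The sketch below is a simplified illustration, not the paper's method: for two equal-size sets of feature vectors with uniform weights, exact OT reduces to an assignment problem, solvable with SciPy's Hungarian-algorithm routine.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_distance(X, Y):
    """Exact optimal-transport (earth mover's) distance between two
    equal-size sets of feature vectors under uniform weights, where OT
    reduces to a minimum-cost assignment over the pairwise L2 costs."""
    cost = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    return cost[rows, cols].mean()

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 16))  # 50 samples, 16-dim features
perm = rng.permutation(50)
# Permutation invariance: reordering samples leaves the distance at zero.
print(np.isclose(ot_distance(A, A[perm]), 0.0))
```

Comparing representations of deep networks in practice would add further ingredients (e.g., invariances to rotations or rescalings of feature space), which this minimal assignment-based sketch does not capture.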
1 code implementation • ECCV 2020 • Di Hu, Xuhong LI, Lichao Mou, Pu Jin, Dong Chen, Liping Jing, Xiaoxiang Zhu, Dejing Dou
With the help of this dataset, we evaluate three proposed approaches for transferring the sound event knowledge to the aerial scene recognition task in a multimodal learning framework, and show the benefit of exploiting the audio information for the aerial scene recognition.
3 code implementations • ICML 2018 • Xuhong Li, Yves GRANDVALET, Franck Davoine
In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch.
no code implementations • ICLR 2018 • Xuhong LI, Yves GRANDVALET, Franck Davoine
In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch.
no code implementations • 21 Aug 2017 • Joao Vieira, Erik Leitinger, Muris Sarajlic, Xuhong Li, Fredrik Tufvesson
This paper provides an initial investigation into the application of convolutional neural networks (CNNs) for fingerprint-based positioning using measured massive MIMO channels.