no code implementations • 21 Jul 2024 • Yuan Liao, Jiang Bian, Yuhui Yun, Shuo Wang, Yubo Zhang, Jiaming Chu, Tao Wang, Kewei Li, Yuchen Li, Xuhong LI, Shilei Ji, Haoyi Xiong
While the field of NL2SQL has made significant advancements in translating natural language instructions into executable SQL scripts for data querying and processing, achieving full automation within the broader data science pipeline - encompassing data querying, analysis, visualization, and reporting - remains a complex challenge.
no code implementations • 11 Jul 2024 • Haoyi Xiong, Zhiyuan Wang, Xuhong LI, Jiang Bian, Zeke Xie, Shahid Mumtaz, Laura E. Barnes
This article explores the convergence of connectionist and symbolic artificial intelligence (AI), from historical debates to contemporary advancements.
no code implementations • 28 Jun 2024 • Haoyi Xiong, Jiang Bian, Yuchen Li, Xuhong LI, Mengnan Du, Shuaiqiang Wang, Dawei Yin, Sumi Helal
Combining Large Language Models (LLMs) with search engine services marks a significant shift in the field of services computing, opening up new possibilities to enhance how we search for and retrieve information, understand content, and interact with internet services.
no code implementations • 17 Jun 2024 • Yekun Chai, Yewei Fang, Qiwei Peng, Xuhong LI
Our findings reveal that scaling model parameters can mitigate the issue of tokenization; however, LLMs still suffer from biases induced by typos and other text format variations.
no code implementations • 24 Mar 2024 • Ruyi Yang, Jingyu Hu, Zihao Li, Jianli Mu, Tingzhao Yu, Jiangjiang Xia, Xuhong LI, Aritra Dasgupta, Haoyi Xiong
Advanced machine learning models have recently achieved high predictive accuracy for weather and climate prediction.
no code implementations • 18 Mar 2024 • Michiel Sandra, Christian Nelson, Xuhong LI, Xuesong Cai, Fredrik Tufvesson, Anders J Johansson
The results demonstrate the great potential of the presented sounding system for providing high-quality radio channel measurements, contributing to high-resolution channel estimation, characterization, and active and passive sensing in realistic and dynamic scenarios.
no code implementations • 15 Mar 2024 • Xuhong LI, Xuesong Cai, Erik Leitinger, Fredrik Tufvesson
We develop a Bayesian model for sequential detection and estimation of interacting MF model parameters, MF states, and the mobile agent's state, including position and orientation.
1 code implementation • 26 Feb 2024 • Qiwei Peng, Yekun Chai, Xuhong LI
These benchmarks have overlooked the vast landscape of massively multilingual NL to multilingual code, leaving a critical gap in the evaluation of multilingual LLMs.
no code implementations • 16 Jan 2024 • Jiamin Chen, Xuhong LI, Yanwu Xu, Mengnan Du, Haoyi Xiong
Based on a large-scale medical image classification dataset, our work collects explanations from well-trained classifiers to generate pseudo labels of segmentation tasks.
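The idea of turning classifier explanations into segmentation pseudo labels can be sketched minimally: threshold a saliency (feature-attribution) map into a binary mask. This is an illustrative simplification, not the paper's pipeline; the saliency values and threshold are made up.

```python
# Hypothetical sketch: binarize a classifier's 2-D saliency map into a
# segmentation pseudo-label by thresholding attribution scores.

def saliency_to_pseudo_label(saliency, threshold=0.5):
    """Return a binary mask: 1 where attribution >= threshold, else 0."""
    return [[1 if v >= threshold else 0 for v in row] for row in saliency]

saliency = [
    [0.1, 0.8, 0.9],
    [0.2, 0.7, 0.3],
]
mask = saliency_to_pseudo_label(saliency)
# mask == [[0, 1, 1], [0, 1, 0]]
```

A segmentation model can then be trained on such masks in place of hand-drawn annotations.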
no code implementations • 9 Jan 2024 • Haoyi Xiong, Xuhong LI, Xiaofei Zhang, Jiamin Chen, Xinhao Sun, Yuchen Li, Zeyi Sun, Mengnan Du
Given the complexity and lack of transparency in deep neural networks (DNNs), extensive efforts have been made to make these systems more interpretable or explain their behaviors in accessible terms.
no code implementations • 6 Oct 2023 • Weibin Liao, Xuhong LI, Qingzhong Wang, Yanwu Xu, Zhaozheng Yin, Haoyi Xiong
While pre-training on object detection tasks, such as Common Objects in Context (COCO) [1], can significantly boost the performance of cell segmentation, fine-tuning the pre-trained model still requires massive amounts of finely annotated cell images [2], with bounding boxes, masks, and cell types for every cell in every image.
no code implementations • 3 Oct 2023 • Weibin Liao, Haoyi Xiong, Qingzhong Wang, Yan Mo, Xuhong LI, Yi Liu, Zeyu Chen, Siyu Huang, Dejing Dou
In this work, we study a novel self-supervised pre-training pipeline, namely Multi-task Self-supervised Continual Learning (MUSCLE), for multiple medical imaging tasks, such as classification and segmentation, using X-ray images collected from multiple body parts, including heads, lungs, and bones.
no code implementations • 10 May 2023 • Thomas Wilding, Benjamin J. B. Deutschmann, Christian Nelson, Xuhong LI, Fredrik Tufvesson, Klaus Witrisal
Based on a geometric model of the measurement environment, we analyze the visibility of specular components.
no code implementations • 1 Apr 2023 • Haoyi Xiong, Xuhong LI, Boyang Yu, Zhanxing Zhu, Dongrui Wu, Dejing Dou
While previous studies primarily focus on the effects of label noise on learning performance, our work investigates the implicit regularization effects of label noise under mini-batch sampling in stochastic gradient descent (SGD), assuming the label noise is unbiased.
1 code implementation • 19 Dec 2022 • Qingrui Jia, Xuhong LI, Lei Yu, Jiang Bian, Penghao Zhao, Shupeng Li, Haoyi Xiong, Dejing Dou
While mislabeled or ambiguously labeled samples in the training set can negatively affect the performance of deep models, diagnosing the dataset and identifying mislabeled samples help to improve generalization.
no code implementations • 17 Nov 2022 • Junshi Chen, Russ Whiton, Xuhong LI, Fredrik Tufvesson
Accurate understanding of electromagnetic propagation properties in real environments is necessary for efficient design and deployment of cellular systems.
no code implementations • 26 Jul 2022 • Jiang Bian, Xuhong LI, Tao Wang, Qingzhong Wang, Jun Huang, Chen Liu, Jun Zhao, Feixiang Lu, Dejing Dou, Haoyi Xiong
While deep learning has been widely used for video analytics, such as video classification and action detection, dense action detection with fast-moving subjects from sports videos is still challenging.
2 code implementations • 4 Jul 2022 • Xuhong LI, Haoyi Xiong, Yi Liu, Dingfu Zhou, Zeyu Chen, Yaqing Wang, Dejing Dou
Though image classification datasets could provide the backbone networks with rich visual features and discriminative ability, they are incapable of fully pre-training the target model (i.e., backbone + segmentation modules) in an end-to-end manner.
no code implementations • 2 Sep 2021 • Xuhong LI, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou
Existing interpretation algorithms have found that, even when deep models make the same correct predictions on the same image, they may rely on different sets of input features for classification.
no code implementations • 20 Jun 2021 • Xuanyu Wu, Xuhong LI, Haoyi Xiong, Xiao Zhang, Siyu Huang, Dejing Dou
Incorporating a set of randomized strategies for well-designed data transformations over the training set, ContRE adopts classification errors and Fisher ratios on the generated contrastive examples to assess and analyze the generalization performance of deep models, complementing the testing set.
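The Fisher ratio used here is the classical discriminant criterion: between-class variance divided by within-class variance. A minimal sketch for 1-D features follows; the feature values and labels are illustrative, not from the paper.

```python
# Hedged sketch of the Fisher ratio (between-class variance over
# within-class variance) computed on per-example feature values.

def fisher_ratio(features, labels):
    """Fisher discriminant ratio for 1-D features grouped by label."""
    classes = sorted(set(labels))
    overall_mean = sum(features) / len(features)
    between, within = 0.0, 0.0
    for c in classes:
        vals = [f for f, l in zip(features, labels) if l == c]
        mean_c = sum(vals) / len(vals)
        between += len(vals) * (mean_c - overall_mean) ** 2
        within += sum((v - mean_c) ** 2 for v in vals)
    return between / within if within else float("inf")

# Well-separated classes yield a much larger ratio than mixed ones.
print(fisher_ratio([0.0, 0.1, 1.0, 1.1], [0, 0, 1, 1]))  # large
print(fisher_ratio([0.0, 1.0, 0.1, 1.1], [0, 0, 1, 1]))  # small
```

A higher ratio on contrastive examples indicates features that still separate the classes after transformation.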
no code implementations • 29 Apr 2021 • Ji Liu, Jizhou Huang, Yang Zhou, Xuhong LI, Shilei Ji, Haoyi Xiong, Dejing Dou
Because of laws or regulations, the distributed data and computing resources cannot be directly shared among different regions or organizations for machine learning tasks.
1 code implementation • 19 Mar 2021 • Xuhong LI, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou
Then, to understand the interpretation results, we also survey the performance metrics for evaluating interpretation algorithms.
no code implementations • 1 Jan 2021 • Xuhong LI, Haoyi Xiong, Siyu Huang, Shilei Ji, Yanjie Fu, Dejing Dou
Given any task/dataset, Consensus first obtains the interpretation results using existing tools, e.g., LIME (Ribeiro et al., 2016), for every model in the committee, then aggregates the results from the entire committee and approximates the “ground truth” of interpretations through voting.
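The aggregation step can be sketched as a simple majority vote over binary feature-importance maps, one per committee member. This is a minimal illustration of the voting idea; the maps below are made up, and real interpretation outputs (e.g., from LIME) would be continuous scores binarized first.

```python
# Hedged sketch of Consensus-style aggregation: majority-vote the
# committee's binary importance maps, feature by feature.

def consensus_vote(maps):
    """Keep a feature if strictly more than half the committee marks it."""
    n = len(maps)
    return [1 if sum(col) * 2 > n else 0 for col in zip(*maps)]

committee = [
    [1, 0, 1, 0],  # model A's important features
    [1, 1, 0, 0],  # model B
    [1, 0, 1, 1],  # model C
]
print(consensus_vote(committee))  # [1, 0, 1, 0]
```

The voted map then serves as an approximate "ground truth" against which each individual model's interpretation can be scored.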
no code implementations • 1 Jan 2021 • Haoyi Xiong, Xuhong LI, Boyang Yu, Dejing Dou, Dongrui Wu, Zhanxing Zhu
Random label noise (or observational noise) widely exists in practical machine learning settings.
no code implementations • 1 Jan 2021 • Haozhe An, Haoyi Xiong, Xuhong LI, Xingjian Li, Dejing Dou, Zhanxing Zhu
The recent theoretical investigation (Li et al., 2020) on the upper bound of generalization error of deep neural networks (DNNs) demonstrates the potential of using the gradient norm as a measure that complements validation accuracy for model selection in practice.
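The model-selection signal described here — the gradient norm of the loss with respect to the parameters — is easy to illustrate on a toy quadratic loss. The loss, parameters, and target below are assumptions for illustration only, not the paper's setup.

```python
# Hedged sketch: compute the L2 norm of a loss gradient, the quantity
# proposed as a complement to validation accuracy for model selection.

import math

def grad_norm(grad):
    """L2 norm of a gradient vector."""
    return math.sqrt(sum(g * g for g in grad))

# For the toy loss L(w) = sum_i (w_i - t_i)^2, the gradient is 2(w - t).
def quadratic_grad(w, t):
    return [2 * (wi - ti) for wi, ti in zip(w, t)]

target = [1.0, 2.0]
w_a = [1.0, 2.0]   # sits at the optimum: gradient norm 0
w_b = [0.9, 1.9]   # nearby but not converged: nonzero gradient norm
print(grad_norm(quadratic_grad(w_a, target)))  # 0.0
print(grad_norm(quadratic_grad(w_b, target)))  # > 0
```

Among candidates with similar validation accuracy, a smaller gradient norm would, under this view, suggest better generalization.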
no code implementations • 16 Oct 2020 • Xingjian Li, Di Hu, Xuhong LI, Haoyi Xiong, Zhi Ye, Zhipeng Wang, Chengzhong Xu, Dejing Dou
Fine-tuning deep neural networks pre-trained on large-scale datasets is one of the most practical transfer learning paradigms given a limited quantity of training samples.
no code implementations • 13 Jul 2020 • Xuhong Li, Yves GRANDVALET, Rémi Flamary, Nicolas Courty, Dejing Dou
We use optimal transport to quantify the match between two representations, yielding a distance that embeds some invariances inherent to the representation of deep networks.
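In the simplest setting — 1-D features with equal sample counts — the optimal transport (Wasserstein-1) distance reduces to matching sorted values, which gives a concrete feel for "quantifying the match" between two representations. Real network representations are high-dimensional, so this is only a toy case; the sample values are illustrative.

```python
# Hedged sketch: Wasserstein-1 distance between two equal-size 1-D
# samples, computed by pairing sorted values (the optimal coupling in 1-D).

def w1_distance(a, b):
    """Average absolute difference between sorted samples of a and b."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

print(w1_distance([0.0, 1.0, 2.0], [0.5, 1.5, 2.5]))  # 0.5
```

Note the invariance to sample ordering: permuting either input leaves the distance unchanged, analogous to the representation invariances the abstract mentions.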
1 code implementation • ECCV 2020 • Di Hu, Xuhong LI, Lichao Mou, Pu Jin, Dong Chen, Liping Jing, Xiaoxiang Zhu, Dejing Dou
With the help of this dataset, we evaluate three proposed approaches for transferring the sound event knowledge to the aerial scene recognition task in a multimodal learning framework, and show the benefit of exploiting the audio information for the aerial scene recognition.
3 code implementations • ICML 2018 • Xuhong Li, Yves GRANDVALET, Franck Davoine
In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch.
no code implementations • ICLR 2018 • Xuhong LI, Yves GRANDVALET, Franck Davoine
In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch.
no code implementations • 21 Aug 2017 • Joao Vieira, Erik Leitinger, Muris Sarajlic, Xuhong Li, Fredrik Tufvesson
This paper provides an initial investigation on the application of convolutional neural networks (CNNs) for fingerprint-based positioning using measured massive MIMO channels.
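For context, the classical fingerprint-positioning baseline that such CNN approaches build on can be sketched in a few lines: match a measured channel fingerprint against a survey database and return the position of the nearest entry. This is not the paper's CNN method; the fingerprints and positions are made up for illustration.

```python
# Hedged sketch of classical fingerprint-based positioning: nearest-
# neighbor lookup of a measured channel fingerprint in a survey database.

import math

def locate(fingerprint, database):
    """Return the position of the stored fingerprint closest in L2 norm."""
    return min(database, key=lambda entry: math.dist(fingerprint, entry[0]))[1]

database = [
    ([0.9, 0.1, 0.2], (0.0, 0.0)),  # (channel features, position in meters)
    ([0.1, 0.8, 0.3], (5.0, 0.0)),
    ([0.2, 0.2, 0.9], (0.0, 5.0)),
]
print(locate([0.15, 0.75, 0.25], database))  # (5.0, 0.0)
```

A CNN replaces the hand-built lookup with a learned mapping from raw channel measurements to position, which scales better with database size and channel dimensionality.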