1 code implementation • ECCV 2020 • Miao Zhang, Sun Xiao Fei, Jie Liu, Shuang Xu, Yongri Piao, Huchuan Lu
In this paper, we propose an asymmetric two-stream architecture taking account of the inherent differences between RGB and depth data for saliency detection.
Ranked #19 on Thermal Image Segmentation on RGB-T-Glass-Segmentation
no code implementations • 6 Nov 2024 • Xinle Wu, Xingjian Wu, Dalin Zhang, Miao Zhang, Chenjuan Guo, Bin Yang, Christian S. Jensen
Given a forecasting task, which includes a dataset and a forecasting horizon, automated design methods automatically search for an optimal forecasting model for the task in a manually designed search space, and then train the identified model on the dataset to enable forecasting.
no code implementations • 9 Oct 2024 • Qinglun Li, Miao Zhang, Mengzhu Wang, Quanjun Yin, Li Shen
Decentralized Federated Learning (DFL) surpasses Centralized Federated Learning (CFL) in terms of faster training, privacy preservation, and lightweight communication, making it a promising alternative in the field of federated learning.
no code implementations • 9 Oct 2024 • Qinglun Li, Miao Zhang, Yingqi Liu, Quanjun Yin, Li Shen, Xiaochun Cao
In decentralized communication, the server aggregation phase in Centralized Federated Learning shifts to the client side, which means that clients connect with each other in a peer-to-peer manner.
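As an illustration of this peer-to-peer aggregation, here is a minimal sketch (not code from the paper; the line topology and plain neighbor averaging are assumptions) in which each client averages its parameters with those of its directly connected neighbors:

```python
import numpy as np

def gossip_round(params, adjacency):
    """One peer-to-peer aggregation round: each client averages its own
    parameters with those of its directly connected neighbors."""
    new_params = []
    for i in range(len(params)):
        neighbors = [j for j in range(len(params)) if adjacency[i][j]]
        group = [params[i]] + [params[j] for j in neighbors]
        new_params.append(np.mean(group, axis=0))
    return new_params

# Three clients on a line topology: 0 - 1 - 2 (no central server)
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
params = [np.array([0.0]), np.array([3.0]), np.array([6.0])]
out = gossip_round(params, adj)  # each client moves toward its neighbors
```

Repeating such rounds drives all clients toward a common consensus model without any server-side aggregation step.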
no code implementations • 28 Sep 2024 • Jiarui Jiang, Wei Huang, Miao Zhang, Taiji Suzuki, Liqiang Nie
To address this gap, this work delves deeply into the benign overfitting perspective of transformers in vision.
no code implementations • 5 Sep 2024 • Qianlong Xiang, Miao Zhang, Yuzhang Shang, Jianlong Wu, Yan Yan, Liqiang Nie
Furthermore, considering that the source data is either inaccessible or too large to store for current generative models, we introduce a new paradigm for their distillation without source data, termed Data-Free Knowledge Distillation for Diffusion Models (DKDM).
no code implementations • 3 Sep 2024 • Junpeng Jiang, Gangyi Hong, Lijun Zhou, Enhui Ma, Hengtong Hu, Xia Zhou, Jie Xiang, Fan Liu, Kaicheng Yu, Haiyang Sun, Kun Zhan, Peng Jia, Miao Zhang
Generating high-fidelity, temporally consistent videos in autonomous driving scenarios faces significant challenges, e.g., problematic maneuvers in corner cases.
no code implementations • 25 Aug 2024 • Lanhu Wu, Miao Zhang, Yongri Piao, Zhenyan Yao, Weibing Sun, Feng Tian, Huchuan Lu
We also propose a class-aware feature-wise collaborative learning (CFCL) strategy to achieve effective knowledge transfer between CNN-based and Transformer-based models in the feature space by granting their intermediate features the similar capability of category perception.
no code implementations • 13 Aug 2024 • Miao Zhang, Sherif Abdulatif, Benedikt Loesch, Marco Altmann, Marius Schwarz, Bin Yang
The rapid evolution of deep learning and its integration with autonomous driving systems have led to substantial advancements in 3D perception using multimodal sensors.
no code implementations • 16 Jul 2024 • Hongrong Cheng, Miao Zhang, Javen Qinfeng Shi
As Large Language Models (LLMs) grow dramatically in size, there is an increasing trend in compressing and speeding up these models.
1 code implementation • 20 Jun 2024 • TingWei Liu, Miao Zhang, Leiye Liu, Jialong Zhong, Shuyao Wang, Yongri Piao, Huchuan Lu
Additionally, the domain gap between the image features and the diffusion model features poses a great challenge to prostate segmentation.
1 code implementation • 25 May 2024 • Miao Zhang, ZiMing Wang, Runtian Xing, Kui Xiao, Zhifei Li, Yan Zhang, Chang Tang
Finally, the embeddings will be applied to multiple existing cognitive diagnosis models to infer students' proficiency on UKCs.
1 code implementation • 22 Apr 2024 • David Campos, Bin Yang, Tung Kieu, Miao Zhang, Chenjuan Guo, Christian S. Jensen
The first difficulty in enabling continual calibration on the edge is that the full training data may be too large and thus not always available on edge devices.
no code implementations • 24 Mar 2024 • Xiufei Li, Miao Zhang, Yuanxin Qi, Miao Yang
This study introduces a novel approach utilizing Gaussian process model predictive control (MPC) to stabilize the output voltage of a polymer electrolyte fuel cell (PEFC) system by simultaneously regulating hydrogen and airflow rates.
no code implementations • 16 Mar 2024 • Zhuowei Li, Miao Zhang, Xiaotian Lin, Meng Yin, Shuai Lu, Xueqian Wang
This paper introduces GAgent: a Gripping Agent designed for open-world environments that provides advanced cognitive abilities via VLM agents and flexible grasping abilities with variable-stiffness soft grippers.
no code implementations • 15 Mar 2024 • Miao Zhang, Rumi Chunara
Performance disparities of image recognition across different demographic populations are known to exist in deep learning-based models, but previous work has largely addressed such fairness problems assuming knowledge of sensitive attribute labels.
no code implementations • 13 Mar 2024 • Ming Dong, Yujing Chen, Miao Zhang, Hao Sun, Tingting He
We found that by introducing a small number of specific Chinese rich semantic structures, LLMs achieve better performance than the BERT-based model on the few-shot CSC task.
no code implementations • 8 Feb 2024 • Miao Zhang, Salman Rahman, Vishwali Mhasawade, Rumi Chunara
Relevant to such uses, important examples of bias in the use of AI are evident when decision-making based on data fails to account for the robustness of the data, or predictions are based on spurious correlations.
no code implementations • 2 Feb 2024 • Wentao Chen, Jiwei Li, Xichen Xu, Hui Huang, Siyu Yuan, Miao Zhang, Tianming Xu, Jie Luo, Weimin Zhou
In this study, we investigated unsupervised learning methods for unpaired MRI to PET translation for generating pseudo normal FDG PET for epileptic focus localization.
no code implementations • 25 Jan 2024 • Mengyao Du, Miao Zhang, Yuwen Pu, Kai Xu, Shouling Ji, Quanjun Yin
To tackle the scarcity and privacy issues associated with domain-specific datasets, the integration of federated learning in conjunction with fine-tuning has emerged as a practical solution.
no code implementations • 24 Jan 2024 • Miao Zhang, Zee Fryer, Ben Colman, Ali Shahriyari, Gaurav Bharaj
Machine learning model bias can arise from dataset composition: sensitive features correlated to the learning target disturb the model decision rule and lead to performance differences along the features.
no code implementations • CVPR 2024 • Junyuan Zhang, Shuang Zeng, Miao Zhang, Runxi Wang, Feifei Wang, Yuyin Zhou, Paul Pu Liang, Liangqiong Qu
Federated learning (FL) is a powerful technology that enables collaborative training of machine learning models without sharing private data among clients.
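The server-side aggregation at the heart of FL can be sketched in a few lines (standard FedAvg, assumed here as the canonical illustration rather than taken from this paper):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Server-side FedAvg: weighted average of client model parameters,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients with different amounts of local data
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
sizes = [100, 300]
global_w = fed_avg(clients, sizes)  # larger client pulls the average harder
```

Only parameters (never raw data) cross the network, which is what makes collaborative training without data sharing possible.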
no code implementations • 23 Dec 2023 • Xiong Zhang, Miao Zhang
Deep learning enhances earthquake monitoring capabilities by mining seismic waveforms directly.
no code implementations • 30 Nov 2023 • Miao Zhang, Peng Jia, Zhengyang Li, Wennan Xiang, Jiameng Lv, Rui Sun
To address this, we need a method to obtain misalignment states, aiding in the reconstruction of accurate point spread functions for data processing methods or facilitating adjustments of optical components for improved image quality.
2 code implementations • 22 Nov 2023 • Yilun Liu, Shimin Tao, Xiaofeng Zhao, Ming Zhu, Wenbing Ma, Junhao Zhu, Chang Su, Yutai Hou, Miao Zhang, Min Zhang, Hongxia Ma, Li Zhang, Hao Yang, Yanfei Jiang
Instruction tuning is crucial for enabling Large Language Models (LLMs) to respond to human instructions.
no code implementations • 8 Oct 2023 • Qinglun Li, Miao Zhang, Nan Yin, Quanjun Yin, Li Shen
To further improve algorithm performance and alleviate local heterogeneous overfitting in Federated Learning (FL), our algorithm combines the Sharpness Aware Minimization (SAM) optimizer and local momentum.
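A toy sketch of a single SAM update as used in such combinations (the quadratic loss, radius, and learning rate are illustrative assumptions, not the paper's setup): SAM first perturbs the weights toward the locally worst case, then descends using the gradient at that perturbed point, favoring flat minima.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization step: ascend to the worst-case
    point within an L2 ball of radius rho, then descend using the
    gradient evaluated at that perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed weights
    return w - lr * g_sharp

# Toy quadratic loss L(w) = ||w||^2 / 2, so grad L(w) = w
w = np.array([1.0, -2.0])
w_new = sam_step(w, lambda v: v)  # moves toward the flat minimum at 0
```

In the FL setting described above, such a step would replace the plain SGD update inside each client's local training loop.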
no code implementations • 27 Sep 2023 • Weishi Li, Yong Peng, Miao Zhang, Liang Ding, Han Hu, Li Shen
Specifically, we categorize existing deep model fusion methods into four classes: (1) "Mode connectivity", which connects solutions in weight space via a path of non-increasing loss to obtain better initialization for model fusion; (2) "Alignment", which matches units between neural networks to create better conditions for fusion; (3) "Weight average", a classical model fusion method that averages the weights of multiple models to obtain more accurate results closer to the optimal solution; (4) "Ensemble learning", which combines the outputs of diverse models and is a foundational technique for improving the accuracy and robustness of the final model.
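Two of these categories, weight averaging and ensemble learning, can be contrasted in a few lines (an illustrative toy, not the survey's code; the dict-of-arrays model format is an assumption):

```python
import numpy as np

def weight_average(models):
    """'Weight average' fusion: average parameters across models,
    producing a single fused model."""
    return {k: np.mean([m[k] for m in models], axis=0) for k in models[0]}

def ensemble(outputs):
    """'Ensemble learning' fusion: keep all models and average their
    outputs instead of their weights."""
    return np.mean(outputs, axis=0)

m1 = {"w": np.array([1.0, 3.0])}
m2 = {"w": np.array([3.0, 5.0])}
fused = weight_average([m1, m2])          # one model, averaged weights

avg_out = ensemble([np.array([0.2, 0.8]),  # many models, averaged predictions
                    np.array([0.4, 0.6])])
```

The key design difference: weight averaging yields one cheap-to-serve model but requires compatible parameterizations (hence the "alignment" class), while ensembling sidesteps alignment at the cost of running every member at inference time.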
no code implementations • 19 Aug 2023 • Duo Wu, Dayou Zhang, Miao Zhang, Ruoyu Zhang, Fangxin Wang, Shuguang Cui
The high-accuracy and resource-intensive deep neural networks (DNNs) have been widely adopted by live video analytics (VA), where camera videos are streamed over the network to resource-rich edge/cloud servers for DNN inference.
1 code implementation • 13 Aug 2023 • Hongrong Cheng, Miao Zhang, Javen Qinfeng Shi
Modern deep neural networks, particularly recent large language models, come with massive model sizes that require significant computational and storage resources.
1 code implementation • 13 Aug 2023 • Hongrong Cheng, Miao Zhang, Javen Qinfeng Shi
It motivates us to develop a technique to evaluate true loss changes without retraining, with which channels to prune can be selected more reliably and confidently.
1 code implementation • NeurIPS 2023 • Xin Zheng, Miao Zhang, Chunyang Chen, Quoc Viet Hung Nguyen, Xingquan Zhu, Shirui Pan
Specifically, SFGC contains two collaborative components: (1) a training trajectory meta-matching scheme for effectively synthesizing small-scale graph-free data; (2) a graph neural feature score metric for dynamically evaluating the quality of the condensed data.
no code implementations • 23 May 2023 • Jing Wang, Hairun Xie, Miao Zhang, Hui Xu
The dominant latent space further reveals a strong relevance with the key flow features located in the boundary layers downstream of the shock.
no code implementations • 14 May 2023 • Miao Zhang, Yiqing Shen, Shenghui Zhong
Images captured under low-light conditions are often plagued by several challenges, including diminished contrast, increased noise, loss of fine details, and unnatural color reproduction.
no code implementations • 6 Mar 2023 • Hairun Xie, Jing Wang, Miao Zhang
In the proposed model, a primary network is responsible for representing the relationship between the lift and angle of attack, while the geometry information is encoded into a hyper network to predict the unknown parameters involved in the primary network.
1 code implementation • 24 Feb 2023 • David Campos, Miao Zhang, Bin Yang, Tung Kieu, Chenjuan Guo, Christian S. Jensen
First, we propose adaptive ensemble distillation that assigns adaptive weights to different base models such that their varying classification capabilities contribute purposefully to the training of the lightweight model.
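The weighted soft-label combination this describes can be sketched as follows (a hedged illustration, not the paper's implementation; the unit softmax temperature and simple weight normalization are assumptions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_targets(teacher_logits, weights):
    """Combine per-teacher soft labels with adaptive weights so stronger
    base models contribute more to the student's training target."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize the adaptive weights
    probs = np.stack([softmax(t) for t in teacher_logits])
    return np.tensordot(w, probs, axes=1)  # weighted soft-label mixture

# Two disagreeing teachers; the first is trusted three times as much
t1 = np.array([[2.0, 0.0]])
t2 = np.array([[0.0, 2.0]])
target = distillation_targets([t1, t2], weights=[3.0, 1.0])
```

The resulting mixture is still a valid probability distribution, so the lightweight student can be trained against it with an ordinary cross-entropy loss.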
no code implementations • 23 Feb 2023 • Xin Zheng, Miao Zhang, Chunyang Chen, Qin Zhang, Chuan Zhou, Shirui Pan
Therefore, in this paper, we propose a novel automated graph neural network on heterophilic graphs, namely Auto-HeG, to automatically build heterophilic GNN models with expressive learning abilities.
no code implementations • 8 Dec 2022 • Xinle Wu, Dalin Zhang, Miao Zhang, Chenjuan Guo, Shuai Zhao, Yi Zhang, Huai Wang, Bin Yang
We then propose a resource-aware search strategy to explore the search space to find the best PINN model under different resource constraints.
no code implementations • 29 Nov 2022 • Xinle Wu, Dalin Zhang, Miao Zhang, Chenjuan Guo, Bin Yang, Christian S. Jensen
To overcome these limitations, we propose SEARCH, a joint, scalable framework, to automatically devise effective CTS forecasting models.
1 code implementation • 16 Nov 2022 • Miao Zhang, Rumi Chunara
We propose fair dense representation with contrastive learning (FairDCL) as a method for de-biasing the multi-level latent space of convolution neural network models.
1 code implementation • 9 Oct 2022 • Shwai He, Liang Ding, Daize Dong, Miao Zhang, DaCheng Tao
Adapter Tuning, which freezes the pretrained language models (PLMs) and only fine-tunes a few extra modules, becomes an appealing efficient alternative to the full model fine-tuning.
no code implementations • Journal of Petroleum Science and Engineering 2022 • Chunhua Lu, Hanqiao Jiang, Jinlong Yang, Zhiqiang Wang, Miao Zhang, Junjian Li
The results reveal that the DNN exhibits the best production prediction accuracy compared to RF and SVM.
no code implementations • 21 Jun 2022 • Guanghao Li, Yue Hu, Miao Zhang, Ji Liu, Quanjun Yin, Yong Peng, Dejing Dou
Since training in a ring topology is most efficient when devices have homogeneous resources, classifying devices by computing capacity mitigates the impact of straggler effects.
no code implementations • 5 May 2022 • Hairun Xie, Jing Wang, Miao Zhang
In contrast, the hard-constrained scheme produces airfoils with a wider range of geometric diversity while strictly adhering to the geometric constraints.
no code implementations • 9 Apr 2022 • Miao Zhang, Harvineet Singh, Lazarus Chok, Rumi Chunara
This work highlights the need to conduct fairness analysis for satellite imagery segmentation models and motivates the development of methods for fair transfer learning in order not to introduce disparities between places, particularly urban and rural locations.
no code implementations • 14 Feb 2022 • Xin Zheng, Yi Wang, Yixin Liu, Ming Li, Miao Zhang, Di Jin, Philip S. Yu, Shirui Pan
In the end, we point out the potential directions to advance and stimulate more future research and applications on heterophilic graph learning with GNNs.
1 code implementation • ICCV 2021 • Yongri Piao, Jian Wang, Miao Zhang, Huchuan Lu
The multiple accurate cues from multiple DFs are then simultaneously propagated to the saliency network with a multi-guidance loss.
1 code implementation • NeurIPS 2021 • Jingjing Li, Wei Ji, Qi Bi, Cheng Yan, Miao Zhang, Yongri Piao, Huchuan Lu, Li Cheng
As a by-product, a CapS dataset is constructed by augmenting existing benchmark training set with additional image tags and captions.
no code implementations • CVPR 2022 • Miao Zhang, Jilin Hu, Steven Su, Shirui Pan, Xiaojun Chang, Bin Yang, Gholamreza Haffari
Differentiable Architecture Search (DARTS) has received massive attention in recent years, mainly because it significantly reduces the computational cost through weight sharing and continuous relaxation.
no code implementations • 30 Oct 2021 • Miao Zhang, Miaojing Shi, Li Li
Last, to enhance the embedding space learning, an additional pixel-wise metric learning module is introduced with triplet loss formulated on the pixel-level embedding of the input image.
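A minimal sketch of the triplet loss applied to pixel embeddings (the margin value and Euclidean distance are assumptions; the paper's exact formulation may differ):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pixel-embedding triplet loss: pull the anchor toward the positive
    embedding and push it at least `margin` farther from the negative."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])   # anchor pixel embedding
p = np.array([0.1, 0.0])   # same-class pixel embedding
n = np.array([2.0, 0.0])   # different-class pixel embedding
loss = triplet_loss(a, p, n)  # margin already satisfied, so loss is 0
```

When the negative drifts inside the margin, the loss becomes positive and gradients reshape the embedding space so that same-class pixels cluster together.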
no code implementations • 8 Oct 2021 • Shengran Hu, Ran Cheng, Cheng He, Zhichao Lu, Jing Wang, Miao Zhang
For the goal of automated design of high-performance deep convolutional neural networks (CNNs), Neural Architecture Search (NAS) methodology is becoming increasingly important for both academia and industry. Due to the costly stochastic gradient descent (SGD) training of CNNs for performance evaluation, most existing NAS methods are computationally expensive for real-world deployments.
no code implementations • 4 Sep 2021 • Yongri Piao, Jian Wang, Miao Zhang, Zhengxuan Ma, Huchuan Lu
Despite the success of previous works, explorations of an effective training strategy for the saliency network and accurate matches between image-level annotations and salient objects are still inadequate.
1 code implementation • 6 Jul 2021 • Miao Zhang, Liangqiong Qu, Praveer Singh, Jayashree Kalpathy-Cramer, Daniel L. Rubin
In this study, we propose a novel heterogeneity-aware federated learning method, SplitAVG, to overcome the performance drops from data heterogeneity in federated learning.
no code implementations • 24 Jun 2021 • Liangqiong Qu, Niranjan Balachandar, Miao Zhang, Daniel Rubin
Specifically, instead of directly training a model for task performance, we develop a novel dual model architecture: a primary model learns the desired task, and an auxiliary "generative replay model" allows aggregating knowledge from the heterogenous clients.
no code implementations • 22 Jun 2021 • Miao Zhang, Wei Huang, Li Wang
We investigate this question through the lens of edge connectivity, and provide an affirmative answer by defining a connectivity concept, ZERo-cost Operation Sensitivity (ZEROS), to score the importance of candidate operations in DARTS at initialization.
1 code implementation • 21 Jun 2021 • Miao Zhang, Steven Su, Shirui Pan, Xiaojun Chang, Ehsan Abbasnejad, Reza Haffari
A key challenge to the scalability and quality of the learned architectures is the need for differentiating through the inner-loop optimisation.
Ranked #23 on Neural Architecture Search on NAS-Bench-201, CIFAR-10
1 code implementation • CVPR 2021 • Wei Ji, Jingjing Li, Shuang Yu, Miao Zhang, Yongri Piao, Shunyu Yao, Qi Bi, Kai Ma, Yefeng Zheng, Huchuan Lu, Li Cheng
Complex backgrounds and similar appearances between objects and their surroundings are generally recognized as challenging scenarios in Salient Object Detection (SOD).
Ranked #3 on Object Detection on PKU-DDD17-Car
1 code implementation • 13 Apr 2021 • Yongri Piao, Xinxin Ji, Miao Zhang, Yukun Zhang
We first excavate the internal spatial correlation by designing a context reasoning unit which separately extracts comprehensive contextual information from the focal stack and RGB images.
no code implementations • 13 Apr 2021 • Yongri Piao, Yukun Zhang, Miao Zhang, Xinxin Ji
Focus based methods have shown promising results for the task of depth estimation.
no code implementations • 19 Mar 2021 • Zheng Chu, Zhengyu Zhu, Miao Zhang, Fuhui Zhou, Li Zhen, Xueqian Fu, Naofal Al-Dhahir
To evaluate the performance of this IRS-assisted WPSN, we are interested in maximizing its system sum throughput by jointly optimizing the energy beamforming of the PS, the transmission time allocation, and the phase shifts of the WET and WIT phases.
no code implementations • ICLR 2022 • Wei Huang, Yayong Li, Weitao Du, Jie Yin, Richard Yi Da Xu, Ling Chen, Miao Zhang
Inspired by our theoretical insights on trainability, we propose Critical DropEdge, a connectivity-aware and graph-adaptive sampling method, to alleviate the exponential decay problem more fundamentally.
1 code implementation • ICCV 2021 • Miao Zhang, Jie Liu, Yifei Wang, Yongri Piao, Shunyu Yao, Wei Ji, Jingjing Li, Huchuan Lu, Zhongxuan Luo
Our bidirectional dynamic fusion strategy encourages the interaction of spatial and temporal information in a dynamic manner.
Ranked #15 on Video Polyp Segmentation on SUN-SEG-Easy (Unseen)
no code implementations • 30 Dec 2020 • Yongri Piao, Zhengkun Rong, Shuang Xu, Miao Zhang, Huchuan Lu
The success of learning-based light field saliency detection is heavily dependent on how a comprehensive dataset can be constructed for higher generalizability of models, how high dimensional light field data can be effectively exploited, and how a flexible model can be designed to achieve versatility for desktop computers and mobile devices.
1 code implementation • NeurIPS 2020 • Miao Zhang, Huiqi Li, Shirui Pan, Xiaojun Chang, ZongYuan Ge, Steven Su
A probabilistic exploration enhancement method is accordingly devised to encourage intelligent exploration during the architecture search in the latent space, to avoid local optimal in architecture search.
1 code implementation • 19 Oct 2020 • Jie Lian, Jingyu Liu, Yizhou Yu, Mengyuan Ding, Yaoci Lu, Yi Lu, Jie Cai, Deshou Lin, Miao Zhang, Zhe Wang, Kai He, Yijie Yu
The thoracic abnormality detection challenge is organized by the Deepwise AI Lab.
no code implementations • 3 Sep 2020 • Holger R. Roth, Ken Chang, Praveer Singh, Nir Neumark, Wenqi Li, Vikash Gupta, Sharut Gupta, Liangqiong Qu, Alvin Ihsani, Bernardo C. Bizzo, Yuhong Wen, Varun Buch, Meesam Shah, Felipe Kitamura, Matheus Mendonça, Vitor Lavor, Ahmed Harouni, Colin Compas, Jesse Tetreault, Prerna Dogra, Yan Cheng, Selnur Erdal, Richard White, Behrooz Hashemian, Thomas Schultz, Miao Zhang, Adam McCarthy, B. Min Yun, Elshaimaa Sharaf, Katharina V. Hoebel, Jay B. Patel, Bryan Chen, Sean Ko, Evan Leibovitz, Etta D. Pisano, Laura Coombs, Daguang Xu, Keith J. Dreyer, Ittai Dayan, Ram C. Naidu, Mona Flores, Daniel Rubin, Jayashree Kalpathy-Cramer
Building robust deep learning-based models requires large quantities of diverse training data.
2 code implementations • ECCV 2020 • Wei Ji, Jingjing Li, Miao Zhang, Yongri Piao, Huchuan Lu
The explicitly extracted edge information goes together with saliency to give more emphasis to the salient regions and object boundaries.
Ranked #20 on RGB-D Salient Object Detection on NJU2K
no code implementations • 2 Jun 2020 • Xiong Zhang, Miao Zhang, Xiao Tian
Earthquake early warning systems are required to report earthquake locations and magnitudes as quickly as possible before the damaging S wave arrival to mitigate seismic hazards.
1 code implementation • CVPR 2020 • Miao Zhang, Weisong Ren, Yongri Piao, Zhengkun Rong, Huchuan Lu
Depth data, which contain a preponderance of discriminative power regarding location, have been proven beneficial for accurate saliency prediction.
Ranked #16 on RGB-D Salient Object Detection on NJU2K (using extra training data)
1 code implementation • CVPR 2020 • Miao Zhang, Huiqi Li, Shirui Pan, Xiaojun Chang, Steven Su
In this paper, we formulate the supernet training in the One-Shot NAS as a constrained optimization problem of continual learning that the learning of current architecture should not degrade the performance of previous architectures during the supernet training.
2 code implementations • 22 Apr 2020 • Shuting He, Hao Luo, Weihua Chen, Miao Zhang, Yuqi Zhang, Fan Wang, Hao Li, Wei Jiang
Our solution is based on a strong baseline with bag of tricks (BoT-BS) proposed in person ReID.
1 code implementation • NeurIPS 2019 • Miao Zhang, Jingjing Li, Ji Wei, Yongri Piao, Huchuan Lu
In this paper, we present a deep-learning-based method where a novel memory-oriented decoder is tailored for light field saliency detection.
no code implementations • 22 Jul 2019 • Miao Zhang, Huiqi Li, Shirui Pan, Taoping Liu, Steven Su
The best architecture obtained by our algorithm with the same search space achieves the state-of-the-art test error rate of 2.51% on CIFAR-10 with only 7.5 hours of search time on a single GPU, and a validation perplexity of 60.02 and a test perplexity of 57.36 on PTB.
1 code implementation • 21 Jul 2019 • Miao Zhang, Huiqi Li, Steven Su
Furthermore, a kernel trick is developed to reduce computational complexity and learn a nonlinear subset of the unknown function when applying SIR to extremely high-dimensional BO.
1 code implementation • 2 Jan 2019 • Tingting Qiao, Weijing Zhang, Miao Zhang, Zixuan Ma, Duanqing Xu
By doing so, the ancient painting processing problems become natural image processing problems and models trained on natural images can be directly applied to the transferred paintings.
1 code implementation • 2 Jan 2019 • Miao Zhang, Huiqi Li, Juan Lyu, Sai Ho Ling, Steven Su
In this paper, a non-stationary kernel is proposed which allows the surrogate model to adapt to functions whose smoothness varies with the spatial location of inputs, and a multi-level convolutional neural network (ML-CNN) is built for lung nodule classification whose hyperparameter configuration is optimized by using the proposed non-stationary kernel based Gaussian surrogate model.
no code implementations • 18 Mar 2018 • Xin Zhang, Bingfang Wu, Liang Zhu, Fuyou Tian, Miao Zhang, Yuanzeng
In this paper, we first test state-of-the-art semantic segmentation deep learning classifiers for LUCC mapping with 7 categories in the TGRA area using RapidEye 5 m resolution data.
no code implementations • 15 Nov 2017 • Miao Zhang, Xiaofei Kang, Yanqing Wang, Lantian Li, Zhiyuan Tang, Haisheng Dai, Dong Wang
Trivial events are ubiquitous in human-to-human conversations, e.g., cough, laugh and sniff.
1 code implementation • PLOS ONE 2017 • Xinyu Yang, Guoai Xu, Qi Li, Yanhui Guo, Miao Zhang
These metrics are then fed into a neural network for supervised learning, whose weights are produced by a hybrid PSO and BP algorithm.
no code implementations • 22 Jun 2017 • Miao Zhang, Yixiang Chen, Lantian Li, Dong Wang
This paper proposes a speaker recognition (SRE) task with trivial speech events, such as cough and laugh.