no code implementations • 12 Dec 2024 • Huanyu Wu, Siyang Li, Dongrui Wu
Motor imagery (MI) based brain-computer interfaces (BCIs) enable the direct control of external devices through the imagined movements of various body parts.
1 code implementation • 10 Dec 2024 • Siyang Li, Ziwei Wang, Hanbin Luo, Lieyun Ding, Dongrui Wu
Significance: To our knowledge, this is the first work on test-time adaptation for calibration-free EEG-based BCIs, making plug-and-play BCIs possible.
1 code implementation • 10 Dec 2024 • Lubin Meng, Xue Jiang, Xiaoqing Chen, Wenzhong Liu, Hanbin Luo, Dongrui Wu
To our knowledge, this is the first study on adversarial filtering for EEG-based BCIs, raising a new security concern and calling for more attention to the security of BCIs.
no code implementations • 4 Dec 2024 • Ziwei Wang, Siyang Li, Jingwei Luo, Jiajing Liu, Dongrui Wu
A brain-computer interface (BCI) enables direct communication between the human brain and external devices.
no code implementations • 4 Dec 2024 • Lingfei Deng, Changming Zhao, Zhenbang Du, Kun Xia, Dongrui Wu
Semi-supervised domain adaptation (SSDA) aims at training a high-performance model for a target domain using few labeled target data, many unlabeled target data, and plenty of auxiliary data from a source domain.
1 code implementation • 2 Dec 2024 • Tianwang Jia, Lubin Meng, Siyang Li, Jiajing Liu, Dongrui Wu
Training an accurate classifier for an EEG-based brain-computer interface (BCI) requires EEG data from a large number of users, while protecting their data privacy is a critical consideration.
no code implementations • 2 Dec 2024 • Haoran Wang, Herui Zhang, Siyang Li, Dongrui Wu
Compared with the LIF neuron, the GPN has two distinguishing advantages: 1) it copes well with vanishing gradients by improving the flow of gradient propagation; and 2) it learns spatio-temporal heterogeneous neuronal parameters automatically.
no code implementations • 2 Dec 2024 • Yifan Xu, Xue Jiang, Dongrui Wu
Affective norms are utilized as prior knowledge to connect the label spaces of categorical and dimensional emotions.
no code implementations • 29 Nov 2024 • Lubin Meng, Xue Jiang, Tianwang Jia, Dongrui Wu
A brain-computer interface (BCI) enables direct communication between the brain and an external device.
no code implementations • 29 Nov 2024 • Ruimin Peng, Jiayu An, Dongrui Wu
Source-free semi-supervised domain adaptation (SF-SSDA), which transfers a pre-trained model to a new dataset with no source data and limited labeled target data, can be used for privacy-preserving seizure subtype classification.
no code implementations • 4 Nov 2024 • Xiaoqing Chen, Siyang Li, Yunlu Tu, Ziwei Wang, Dongrui Wu
After adding the proposed perturbations to EEG training data, the user identity information in the data becomes unlearnable, while the BCI task information remains unaffected.
1 code implementation • 4 Nov 2024 • Xiaoqing Chen, Ziwei Wang, Dongrui Wu
Data alignment aligns EEG trials from different domains to reduce their distribution discrepancies, and adversarial training further robustifies the classification boundary.
no code implementations • 1 Nov 2023 • Zhenbang Du, Jiayu An, Yunlu Tu, Jiahao Hong, Dongrui Wu
Within an MoE, different experts handle distinct input features, producing unique expert routing patterns for various classes in a routing feature space.
no code implementations • 1 Apr 2023 • Haoyi Xiong, Xuhong LI, Boyang Yu, Zhanxing Zhu, Dongrui Wu, Dejing Dou
While previous studies primarily focus on the effects of label noise on learning performance, our work investigates the implicit regularization effects of label noise under mini-batch sampling in stochastic gradient descent (SGD), assuming the label noise is unbiased.
no code implementations • 28 Nov 2022 • Xiaoqing Chen, Dongrui Wu
Detection of adversarial examples is crucial both to understanding this phenomenon and to defending against it.
1 code implementation • 20 Jul 2022 • Siyang Li, Yifan Xu, Huanyu Wu, Dongrui Wu, Yingjie Yin, Jiajiong Cao, Jingting Ding
Facial affect analysis remains a challenging task as its setting transitions from lab-controlled to in-the-wild situations.
1 code implementation • 7 Jun 2022 • Yuqi Cui, Dongrui Wu, Xue Jiang, Yifan Xu
This paper presents PyTSK, a Python toolbox for developing Takagi-Sugeno-Kang (TSK) fuzzy systems.
no code implementations • 7 Oct 2021 • Haiyan Jiang, Haoyi Xiong, Dongrui Wu, Ji Liu, Dejing Dou
Principal component analysis (PCA) has been widely used as an effective technique for feature extraction and dimension reduction.
no code implementations • 6 Oct 2021 • Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou
Specifically, we design a new metric $\mathcal{P}$-vector to represent the principal subspace of deep features learned in a DNN, and propose to measure angles between the principal subspaces using $\mathcal{P}$-vectors.
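The $\mathcal{P}$-vector construction is specific to that paper, but the underlying operation, measuring principal angles between the dominant subspaces of two feature matrices, can be sketched with plain SVD. The function name and the choice of subspace dimension k below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def principal_angles(F1, F2, k=3):
    """Principal angles (radians) between the top-k principal
    subspaces of two feature matrices of shape (n_samples, n_features)."""
    # Right singular vectors of the centered data span the principal subspace
    U1 = np.linalg.svd(F1 - F1.mean(0), full_matrices=False)[2][:k].T
    U2 = np.linalg.svd(F2 - F2.mean(0), full_matrices=False)[2][:k].T
    # Singular values of U1^T U2 are the cosines of the principal angles
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

rng = np.random.default_rng(0)
F1 = rng.normal(size=(200, 10))
F2 = rng.normal(size=(200, 10))
same = principal_angles(F1, F1)   # identical subspaces: all angles ~ 0
diff = principal_angles(F1, F2)   # independent data: angles well above 0
```

Identical feature matrices give zero angles, while independently drawn features give clearly nonzero ones, which is the kind of discrepancy such a subspace metric is meant to capture.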
1 code implementation • 3 Jun 2021 • Xiao Zhang, Dongrui Wu, Haoyi Xiong, Bo Dai
Unlike the conventional wisdom in statistical learning theory, the test error of a deep neural network (DNN) often demonstrates double descent: as the model complexity increases, it first follows a classical U-shaped curve and then shows a second descent.
no code implementations • 8 Feb 2021 • Yuqi Cui, Dongrui Wu, Yifan Xu
We show that two defuzzification operations, LogTSK and HTSK, the latter of which is first proposed in this paper, can avoid the saturation.
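HTSK is often described as averaging, rather than summing, the per-dimension distances in each rule's Gaussian firing exponent, which keeps the exponent bounded as dimensionality grows. The minimal numpy sketch below (rule centers, widths, and sizes are made up for illustration, and this is not claimed to be the paper's exact formulation) contrasts the two behaviors.

```python
import numpy as np

def firing_levels(x, centers, sigma, htsk=False):
    """Normalized rule firing levels of a Gaussian-antecedent TSK system."""
    d = ((x - centers) ** 2) / (2 * sigma ** 2)    # (n_rules, dim) distances
    z = d.mean(axis=1) if htsk else d.sum(axis=1)  # HTSK: average over dims
    f = np.exp(-z)
    return f / f.sum()

rng = np.random.default_rng(0)
dim, n_rules = 500, 4
x = rng.normal(size=dim)
centers = rng.normal(size=(n_rules, dim))

naive = firing_levels(x, centers, 1.0)             # saturates: ~one-hot
htsk = firing_levels(x, centers, 1.0, htsk=True)   # stays well spread
```

With 500 input dimensions the summed exponents differ by tens of units across rules, so after normalization a single rule dominates (saturation); averaging keeps all rules meaningfully active.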
no code implementations • 4 Feb 2021 • Dongrui Wu, Jiaxin Xu, Weili Fang, Yi Zhang, Liuqing Yang, Xiaodong Xu, Hanbin Luo, Xiang Yu
Physiological computing uses human physiological data as system inputs in real time.
no code implementations • 1 Jan 2021 • Haoyi Xiong, Xuhong LI, Boyang Yu, Dejing Dou, Dongrui Wu, Zhanxing Zhu
Random label noises (or observational noises) widely exist in practical machine learning settings.
no code implementations • 1 Jan 2021 • Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou
While deep learning is effective to learn features/representations from data, the distributions of samples in feature spaces learned by various architectures for different training tasks (e.g., latent layers of AEs and feature vectors in CNN classifiers) have not been well studied or compared.
2 code implementations • 30 Nov 2020 • Zhenhua Shi, Dongrui Wu, Chenfeng Guo, Changming Zhao, Yuqi Cui, Fei-Yue Wang
To effectively optimize Takagi-Sugeno-Kang (TSK) fuzzy systems for regression problems, a mini-batch gradient descent with regularization, DropRule, and AdaBound (MBGD-RDA) algorithm was recently proposed.
no code implementations • 30 Oct 2020 • Lubin Meng, Jian Huang, Zhigang Zeng, Xue Jiang, Shan Yu, Tzyy-Ping Jung, Chin-Teng Lin, Ricardo Chavarriaga, Dongrui Wu
Test samples with the backdoor key will then be classified into the target class specified by the attacker.
1 code implementation • 2 Sep 2020 • Wen Zhang, Lingfei Deng, Lei Zhang, Dongrui Wu
Transfer learning (TL) utilizes data or knowledge from one or more source domains to facilitate the learning in a target domain.
1 code implementation • 3 Jul 2020 • Dongrui Wu, Xue Jiang, Ruimin Peng, Wanzeng Kong, Jian Huang, Zhigang Zeng
Transfer learning (TL) has been widely used in motor imagery (MI) based brain-computer interfaces (BCIs) to reduce the calibration effort for a new subject, and demonstrated promising performance.
1 code implementation • 29 Apr 2020 • Xiao Zhang, Haoyi Xiong, Dongrui Wu
Over-parameterized deep neural networks (DNNs) with sufficient capacity to memorize random noise can achieve excellent generalization performance, challenging the bias-variance trade-off in classical learning theory.
no code implementations • 13 Apr 2020 • Dongrui Wu, Yifan Xu, Bao-liang Lu
Usually, a calibration session is needed to collect some training data for a new subject, which is time-consuming and user-unfriendly.
no code implementations • 26 Mar 2020 • Ziang Liu, Dongrui Wu
It selects the most beneficial few samples to label, so that a better machine learning model can be trained from the same number of labeled samples.
1 code implementation • 21 Mar 2020 • Changming Zhao, Dongrui Wu, Jian Huang, Ye Yuan, Hai-Tao Zhang, Ruimin Peng, Zhenhua Shi
Bootstrap aggregating (Bagging) and boosting are two popular ensemble learning approaches, which combine multiple base learners to generate a composite model for more accurate and more reliable performance.
no code implementations • 17 Mar 2020 • Ziang Liu, Xue Jiang, Hanbin Luo, Weili Fang, Jiajing Liu, Dongrui Wu
Active learning (AL) selects the most beneficial unlabeled samples to label, and hence a better machine learning model can be trained from the same number of labeled samples.
no code implementations • 1 Mar 2020 • Dongrui Wu
To effectively train Takagi-Sugeno-Kang (TSK) fuzzy systems for regression problems, a Mini-Batch Gradient Descent with Regularization, DropRule, and AdaBound (MBGD-RDA) algorithm was recently proposed.
1 code implementation • 27 Feb 2020 • Yuqi Cui, Huidong Wang, Dongrui Wu
Fuzzy c-means based clustering algorithms are frequently used for Takagi-Sugeno-Kang (TSK) fuzzy classifier antecedent parameter estimation.
1 code implementation • 30 Jan 2020 • Xiao Zhang, Dongrui Wu, Lieyun Ding, Hanbin Luo, Chin-Teng Lin, Tzyy-Ping Jung, Ricardo Chavarriaga
An electroencephalogram (EEG) based brain-computer interface (BCI) speller allows a user to input text to a computer by thought.
no code implementations • 28 Jan 2020 • Xiaotong Gu, Zehong Cao, Alireza Jolfaei, Peng Xu, Dongrui Wu, Tzyy-Ping Jung, Chin-Teng Lin
Recent technological advances such as wearable sensing devices, real-time data streaming, machine learning, and deep learning approaches have increased interest in electroencephalographic (EEG) based BCI for translational and healthcare applications.
1 code implementation • 14 Jan 2020 • Ziang Liu, Dongrui Wu
It is therefore desirable to select the optimal samples to label, so that a good machine learning model can be trained from a minimal amount of labeled data.
1 code implementation • 9 Jan 2020 • Zhenhua Shi, Dongrui Wu, Jian Huang, Yu-Kai Wang, Chin-Teng Lin
Approaches that preserve only the local data structure, such as locality preserving projections, are usually unsupervised (and hence cannot use label information) and use a fixed similarity graph.
no code implementations • 8 Jan 2020 • Yurui Ming, Dongrui Wu, Yu-Kai Wang, Yuhui Shi, Chin-Teng Lin
To the best of our knowledge, we are the first to introduce the deep reinforcement learning method to this BCI scenario, and our method can be potentially generalized to other BCI cases.
no code implementations • ICLR 2020 • Xiao Zhang, Dongrui Wu
A deep neural network (DNN) with piecewise linear activations can partition the input space into numerous small linear regions, where different linear functions are fitted.
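For a one-hidden-layer ReLU network, each distinct on/off activation pattern of the hidden units corresponds to one linear region, so the regions reached by sampled inputs can be counted directly. The network and sample sizes below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 16))                 # 2-D input, 16 ReLU units
b1 = rng.normal(size=16)

X = rng.uniform(-1.0, 1.0, size=(5000, 2))    # sample the input square
patterns = (X @ W1 + b1) > 0                  # ReLU on/off pattern per input
n_regions = len(np.unique(patterns, axis=0))  # distinct patterns = regions hit
```

With 16 hyperplanes in 2-D, the arrangement bound on the number of regions is 1 + 16 + C(16, 2) = 137, so the count must land at or below 137 no matter how many points are sampled.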
1 code implementation • 3 Dec 2019 • Zihan Liu, Lubin Meng, Xiao Zhang, Weili Fang, Dongrui Wu
Multiple convolutional neural network (CNN) classifiers have been proposed for electroencephalogram (EEG) based brain-computer interfaces (BCIs).
1 code implementation • 3 Dec 2019 • He He, Dongrui Wu
Currently, most domain adaptation approaches require the source domains to have the same feature space and label space as the target domain, which limits their applications, as the auxiliary data may have different feature spaces and/or different label spaces.
1 code implementation • 1 Dec 2019 • Wen Zhang, Dongrui Wu
Many existing domain adaptation approaches are based on the joint MMD, which is computed as the (weighted) sum of the marginal distribution discrepancy and the conditional distribution discrepancy; however, a more natural metric may be their joint probability distribution discrepancy.
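The marginal-discrepancy building block in such objectives is typically the maximum mean discrepancy (MMD). A minimal RBF-kernel estimate of squared MMD between two samples can be sketched as follows; the bandwidth `gamma` and the toy data are arbitrary choices for illustration.

```python
import numpy as np

def mmd2(X, Y, gamma=0.5):
    """Biased estimate of squared MMD between samples X and Y, RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))
Y = rng.normal(0.0, 1.0, size=(100, 2))   # same distribution as X
Z = rng.normal(3.0, 1.0, size=(100, 2))   # shifted distribution
```

`mmd2(X, Y)` stays near zero while `mmd2(X, Z)` is clearly positive: this is what the marginal term of a joint-MMD objective measures between source and target features.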
no code implementations • 10 Nov 2019 • Bo Zhang, Yuqi Cui, Meng Wang, Jingjing Li, Lei Jin, Dongrui Wu
Tens of millions of women suffer from infertility worldwide each year.
no code implementations • 7 Nov 2019 • Lubin Meng, Chin-Teng Lin, Tzyy-Ping Jung, Dongrui Wu
Experiments on two BCI regression problems verified that both approaches are effective.
no code implementations • 7 Nov 2019 • Xue Jiang, Xiao Zhang, Dongrui Wu
Learning a good substitute model is critical to the success of these attacks, but it requires a large number of queries to the target model.
1 code implementation • 14 Oct 2019 • Wen Zhang, Dongrui Wu
Experiments on four EEG datasets from two different BCI paradigms demonstrated that MEKT outperformed several state-of-the-art transfer learning approaches, and DTE can reduce more than half of the computational cost when the number of source subjects is large, with little sacrifice of classification accuracy.
no code implementations • 25 Sep 2019 • Yuqi Cui, Yifan Xu, Dongrui Wu
A calibration session is usually required to collect some subject-specific data and tune the model parameters before applying it to a new subject, which is very inconvenient and not user-friendly.
no code implementations • 22 Aug 2019 • Zihan Liu, Bo Huang, Yuqi Cui, Yifan Xu, Bo Zhang, Lixia Zhu, Yang Wang, Lei Jin, Dongrui Wu
Accurate classification of embryo early development stages can provide embryologists with valuable information for assessing embryo quality, and hence is critical to the success of IVF.
2 code implementations • 16 Aug 2019 • Zhenhua Shi, Xiaomo Chen, Changming Zhao, He He, Veit Stuphorn, Dongrui Wu
Multi-view learning improves the learning performance by utilizing multi-view data: data collected from multiple sources, or feature sets extracted from the same data source.
1 code implementation • 1 Aug 2019 • Yuqi Cui, Jian Huang, Dongrui Wu
Takagi-Sugeno-Kang (TSK) fuzzy systems are flexible and interpretable machine learning models; however, they may not be easily optimized when the data size is large, and/or the data dimensionality is high.
no code implementations • 3 Jul 2019 • Chenfeng Guo, Dongrui Wu
Canonical correlation analysis (CCA) is very important in multi-view learning (MVL); its main idea is to map data from different views onto a common space with maximum correlation.
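The classical computation behind CCA can be sketched in a few lines: whiten each view, then the singular values of the whitened cross-covariance are the canonical correlations. The toy data with a shared latent factor, and the regularizer `reg`, are made-up illustrations.

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-6):
    """Canonical correlations between two data views via whitening + SVD."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):                     # symmetric inverse square root
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)   # sorted descending

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))                        # shared factor
X = np.hstack([latent + 0.1 * rng.normal(size=(300, 1)),
               rng.normal(size=(300, 2))])
Y = np.hstack([latent + 0.1 * rng.normal(size=(300, 1)),
               rng.normal(size=(300, 2))])
corrs = canonical_correlations(X, Y)   # first correlation close to 1
```

The first canonical correlation recovers the shared latent factor almost perfectly, while the remaining directions (pure noise) correlate weakly.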
no code implementations • 3 Jul 2019 • Dongrui Wu, Jerry Mendel
How to optimize the IT2 fuzzy system?
no code implementations • 2 Jul 2019 • Anisha Agarwal, Rafael Dowsley, Nicholas D. McKinney, Dongrui Wu, Chin-Teng Lin, Martine De Cock, Anderson C. A. Nascimento
Machine learning (ML) is revolutionizing research and industry.
1 code implementation • 1 Jun 2019 • Dongrui Wu, Jerry M. Mendel
There have been different strategies to improve the performance of a machine learning model, e.g., increasing the depth, width, and/or nonlinearity of the model, and using ensemble learning to aggregate multiple base/weak learners in parallel or in series.
no code implementations • 31 Mar 2019 • Xiao Zhang, Dongrui Wu
Deep learning has been successfully used in numerous applications because of its outstanding performance and the ability to avoid manual feature engineering.
no code implementations • 26 Mar 2019 • Dongrui Wu, Feifei Liu, Chengyu Liu
Moreover, active learning can be used to optimally select a few trials from a new subject to label, based on which a stacking ensemble regression model can be trained to aggregate the base estimators.
1 code implementation • 26 Mar 2019 • Dongrui Wu, Ye Yuan, Yihua Tan
Our final algorithm, mini-batch gradient descent with regularization, DropRule and AdaBound (MBGD-RDA), can achieve fast convergence in training TSK fuzzy systems, and also superior generalization performance in testing.
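Of the three ingredients, regularization and AdaBound are standard; DropRule is dropout applied to the fuzzy rules, randomly deactivating rules in each mini-batch iteration. The sketch below shows only that one step; the drop rate and the renormalization detail are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def droprule(firing, p=0.5, rng=None):
    """Randomly drop rules during a training step (dropout on firing levels)."""
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(firing.shape) >= p
    if not mask.any():                      # keep at least one rule active
        mask[rng.integers(firing.size)] = True
    kept = firing * mask
    return kept / kept.sum()                # renormalize surviving rules

f = np.array([0.4, 0.3, 0.2, 0.1])          # normalized firing levels
fd = droprule(f, p=0.5, rng=np.random.default_rng(1))
```

As with ordinary dropout, the dropped rules change every iteration, so no single rule can dominate the fitted model; at test time all rules are used.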
no code implementations • 25 Mar 2019 • Dongrui Wu, Chin-Teng Lin, Jian Huang, Zhigang Zeng
Fuzzy systems have achieved great success in numerous applications.
no code implementations • 2 Mar 2019 • Cheng Cheng, Beitong Zhou, Guijun Ma, Dongrui Wu, Ye Yuan
However, under the diverse working conditions found in industry, deep learning faces two difficulties: first, the well-defined (source domain) and new (target domain) datasets have different feature distributions; second, insufficient or no labeled data in the target domain significantly reduces fault diagnosis accuracy.
1 code implementation • 15 Dec 2018 • Dongrui Wu, Xianfeng Tan
Experiments on simultaneous optimization of type-1 and interval type-2 fuzzy logic controllers for coupled-tank water level control demonstrated that the MTGA can find better fuzzy logic controllers than other approaches.
no code implementations • 8 Aug 2018 • Dongrui Wu, Jian Huang
Acquisition of labeled training samples for affective computing is usually costly and time-consuming, as affects are intrinsically subjective, subtle and uncertain, and hence multiple human assessors are needed to evaluate each affective sample.
1 code implementation • 8 Aug 2018 • He He, Dongrui Wu
Our approach has three desirable properties: 1) it aligns the EEG trials directly in the Euclidean space, and any signal processing, feature extraction and machine learning algorithms can then be applied to the aligned trials; 2) its computational cost is very low; and 3) it is unsupervised and does not need any label information from the new subject.
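A minimal sketch of such Euclidean-space alignment, assuming each trial is a (channels × samples) array: whiten every trial by the inverse square root of the subject's mean spatial covariance, so the aligned trials' mean covariance becomes the identity. Trial shapes below are chosen for illustration.

```python
import numpy as np

def euclidean_align(trials):
    """Whiten EEG trials so their mean spatial covariance is the identity.

    trials: array of shape (n_trials, n_channels, n_samples)
    """
    covs = np.array([t @ t.T for t in trials])   # per-trial spatial covariance
    R = covs.mean(axis=0)                        # reference covariance
    w, V = np.linalg.eigh(R)                     # symmetric inverse sqrt
    R_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    return np.array([R_inv_sqrt @ t for t in trials])

rng = np.random.default_rng(0)
trials = rng.normal(size=(20, 4, 100))           # 20 trials, 4 channels
aligned = euclidean_align(trials)
mean_cov = np.mean([t @ t.T for t in aligned], axis=0)
```

`mean_cov` equals the identity up to floating-point error, which is what makes downstream processing work in ordinary Euclidean space; the transform itself uses no labels, matching properties 1) and 3).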
no code implementations • 8 Aug 2018 • Chenfeng Guo, Dongrui Wu
Affective computing has become a very important research area in human-machine interaction.
1 code implementation • 8 Aug 2018 • Dongrui Wu, Chin-Teng Lin, Jian Huang
Active learning for regression (ALR) is a methodology to reduce the number of labeled samples, by selecting the most beneficial ones to label, instead of random selection.
no code implementations • 8 Aug 2018 • He He, Dongrui Wu
The electroencephalogram (EEG) is the most popular form of input for brain computer interfaces (BCIs).
no code implementations • 8 Aug 2018 • He He, Dongrui Wu
The electroencephalogram (EEG) is the most widely used input for brain computer interfaces (BCIs), and common spatial pattern (CSP) is frequently used to spatially filter it to increase its signal-to-noise ratio.
no code implementations • 23 Jul 2018 • Te Zhang, Zhaohong Deng, Dongrui Wu, Shitong Wang
Multi-view datasets are frequently encountered in learning tasks, such as web data mining and multimedia information analysis.
no code implementations • 12 May 2018 • Dongrui Wu, Vernon J. Lawhern, Stephen Gordon, Brent J. Lance, Chin-Teng Lin
Ensemble learning is a powerful approach to construct a strong learner from multiple base learners.
no code implementations • 12 May 2018 • Dongrui Wu
Single-trial classification of event-related potentials in electroencephalogram (EEG) signals is a very important paradigm of brain-computer interface (BCI).
no code implementations • 12 May 2018 • Dongrui Wu, Vernon J. Lawhern, Stephen Gordon, Brent J. Lance, Chin-Teng Lin
There are many important regression problems in real-world brain-computer interface (BCI) applications, e.g., driver drowsiness estimation from EEG signals.
1 code implementation • 12 May 2018 • Dongrui Wu
Given a pool of unlabeled samples, it tries to select the most useful ones to label so that a model built from them can achieve the best possible performance.
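Pool-based selection like this is often implemented as uncertainty sampling: query the pool samples whose predicted class has the lowest confidence. The toy probability model below (a distance-to-centroid softmax standing in for any classifier's `predict_proba`) is an illustrative assumption, not the paper's method.

```python
import numpy as np

def least_confident_indices(proba, n=1):
    """Indices of the n pool samples with the least-confident predictions."""
    confidence = proba.max(axis=1)      # probability of the predicted class
    return np.argsort(confidence)[:n]   # lowest confidence first

# Toy pool: two Gaussian blobs, with class probabilities from a
# distance-to-centroid softmax.
rng = np.random.default_rng(0)
pool = np.vstack([rng.normal(-2.0, 1.0, size=(50, 2)),
                  rng.normal(2.0, 1.0, size=(50, 2))])
centroids = np.array([[-2.0, -2.0], [2.0, 2.0]])
d2 = ((pool[:, None, :] - centroids[None]) ** 2).sum(-1)
proba = np.exp(-d2) / np.exp(-d2).sum(axis=1, keepdims=True)

picks = least_confident_indices(proba, n=5)   # samples nearest the boundary
```

The selected points sit near the decision boundary between the two blobs, exactly where a label is most informative for the next model update.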
no code implementations • 30 Apr 2018 • Yuqi Cui, Xiao Zhang, Yang Wang, Chenfeng Guo, Dongrui Wu
This short paper describes our solution to the 2018 IEEE World Congress on Computational Intelligence One-Minute Gradual-Emotional Behavior Challenge, whose goal was to estimate continuous arousal and valence values from short videos.
no code implementations • 27 Apr 2017 • Dongrui Wu, Brent J. Lance, Vernon J. Lawhern, Stephen Gordon, Tzyy-Ping Jung, Chin-Teng Lin
Riemannian geometry has been successfully used in many brain-computer interface (BCI) classification problems and demonstrated superior performance.
no code implementations • 9 Feb 2017 • Dongrui Wu, Vernon J. Lawhern, Stephen Gordon, Brent J. Lance, Chin-Teng Lin
By integrating fuzzy sets with domain adaptation, we propose a novel online weighted adaptation regularization for regression (OwARR) algorithm to reduce the amount of subject-specific calibration data, and also a source domain selection (SDS) approach to save about half of the computational cost of OwARR.
no code implementations • 9 Feb 2017 • Dongrui Wu
This paper proposes both online and offline weighted adaptation regularization (wAR) algorithms to reduce this calibration effort, i.e., to minimize the amount of labeled subject-specific EEG data required in BCI calibration, and hence to increase the utility of the BCI system.
no code implementations • 9 Feb 2017 • Dongrui Wu, Vernon J. Lawhern, W. David Hairston, Brent J. Lance
wAR makes use of labeled data from the previous headset and handles class-imbalance, and active learning selects the most informative samples from the new headset to label.
no code implementations • 9 Feb 2017 • Dongrui Wu, Jung-Tai King, Chun-Hsiang Chuang, Chin-Teng Lin, Tzyy-Ping Jung
Electroencephalogram (EEG) signals are frequently used in brain-computer interfaces (BCIs), but they are easily contaminated by artifacts and noises, so preprocessing must be done before they are fed into a machine learning algorithm for classification or regression.