no code implementations • 6 Apr 2024 • Dong Liang, Zhengyan Xu, Ling Li, Mingqiang Wei, Songcan Chen
In this paper, we propose a physics-inspired contrastive learning paradigm for low-light enhancement, called PIE.
no code implementations • 31 Jan 2024 • Chaohua Li, Enhao Zhang, Chuanxing Geng, Songcan Chen
In open-set recognition (OSR), a promising strategy is exploiting pseudo-unknown data outside the given $K$ known classes as an additional $(K+1)$-th class to explicitly model potential open space.
no code implementations • 25 Dec 2023 • Jiexi Liu, Songcan Chen
Learning universal time series representations applicable to various types of downstream tasks is challenging but valuable in real applications.
1 code implementation • 31 Aug 2023 • Yuyan Zhou, Dong Liang, Songcan Chen, Sheng-Jun Huang, Shuo Yang, Chongyi Li
In this paper, we propose a solution that improves lens flare removal by revisiting the ISP, remodeling the principle of automatic exposure in the synthesis pipeline, and designing a more reliable light-source recovery strategy.
no code implementations • 14 Aug 2023 • Xiang Li, Songcan Chen
Then, by using the prior of degrees, we design a weighted scheme and verify its effectiveness.
no code implementations • 7 May 2023 • Wenhai Wan, Xinrui Wang, Ming-Kun Xie, Shao-Yuan Li, Sheng-Jun Huang, Songcan Chen
Learning from noisy data has attracted much attention, where most methods focus on closed-set label noise.
no code implementations • 28 Apr 2023 • Ling Li, Dong Liang, Yuanhang Gao, Sheng-Jun Huang, Songcan Chen
In this paper, we propose a new paradigm, i.e., aesthetics-guided low-light image enhancement (ALL-E), which introduces aesthetic preferences to LLE and drives training in a reinforcement learning framework with an aesthetic reward.
no code implementations • 21 Apr 2023 • Feihu Huang, Songcan Chen
Moreover, we provide a solid convergence analysis for our DM-GDA method, and prove that it obtains a near-optimal gradient complexity of $O(\epsilon^{-3})$ for finding an $\epsilon$-stationary solution of the nonconvex-PL stochastic minimax problems, which reaches the lower bound of nonconvex stochastic optimization.
no code implementations • 28 Feb 2023 • Xiang Li, Xinrui Wang, Songcan Chen
In Multi-Label Learning (MLL), it is extremely challenging to accurately annotate every appearing object due to expensive costs and limited knowledge.
no code implementations • 30 Jan 2023 • Xin Li, Mingqiang Wei, Songcan Chen
From the perspective of how-and-what-to-learn, PointSmile is designed to imitate human curriculum learning, i.e., starting with an easy curriculum and gradually increasing its difficulty.
no code implementations • 14 Nov 2022 • Feihu Huang, Xinrui Wang, Junyi Li, Songcan Chen
To fill this gap, in this paper we study a class of nonconvex minimax optimization problems and propose an efficient adaptive federated minimax optimization algorithm (i.e., AdaFGDA) to solve these distributed minimax problems.
1 code implementation • 3 Sep 2022 • Zhongchen Ma, Lisha Li, Qirong Mao, Songcan Chen
However, these CL methods cannot be directly adapted to multi-label image classification because of the difficulty of defining the positive and negative instances to contrast against a given anchor image in the multi-label scenario, let alone the missing-label one; borrowing a commonly used definition from contrastive multi-class learning would incur many false negative instances that are unfavorable for learning.
no code implementations • 26 Jul 2022 • Enhao Zhang, Chuanxing Geng, Songcan Chen
To address these issues, we propose Class-aware Universum Inspired Re-balance Learning (CaUIRL) for long-tailed recognition, which endows the Universum with class-aware ability to re-balance individual minority classes in terms of both sample quantity and quality.
no code implementations • 10 Jul 2022 • Yunyun Wang, Yao Liu, Songcan Chen
In this paper, we propose a new UniDA method with adaptive Unknown Authentication by Classifier Paradox (UACP), considering that samples with paradoxical predictions are probably unknowns belonging to none of the source classes.
no code implementations • 10 Jul 2022 • Yunyun Wang, Weiwen Zheng, Songcan Chen
Previous unsupervised domain adaptation (UDA) methods aim to promote target learning via a single-directional knowledge transfer from a label-rich source domain to an unlabeled target domain, while the reverse adaptation from target to source has not yet been jointly considered.
no code implementations • 10 May 2022 • Jiaqiang Zhang, Senzhang Wang, Songcan Chen
Detecting abnormal nodes from attributed networks is of great importance in many real applications, such as financial fraud detection and cyber security.
no code implementations • 5 May 2022 • Chuanxing Geng, Aiyang Han, Songcan Chen
Consistency and complementarity are two key ingredients for boosting multi-view clustering (MVC).
no code implementations • 23 Apr 2022 • Dan Li, Songcan Chen
Decision tree (DT) attracts persistent research attention due to its impressive empirical performance and interpretability in numerous applications.
1 code implementation • 22 Apr 2022 • Aiyang Han, Chuanxing Geng, Songcan Chen
In this paper, inspired by Universum Learning, which uses out-of-class samples to assist the target tasks, we investigate Mixup from a largely under-explored perspective: its potential to generate in-domain samples that belong to none of the target classes, i.e., universum samples.
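As a hedged sketch of the general idea (the interpolation form is standard Mixup; the cross-class pairing, the choice of $\lambda$ near 0.5, and the function name are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

def universum_mixup(x_a, y_a, x_b, y_b, lam=0.5):
    """Mix two samples drawn from *different* classes; with lam near 0.5,
    the interpolated sample plausibly belongs to neither class, i.e., it
    is an in-domain universum sample."""
    assert y_a != y_b, "universum-style mixing needs two distinct classes"
    return lam * x_a + (1.0 - lam) * x_b

# toy usage: two 4-dimensional "images" from classes 0 and 1
x0, x1 = np.ones(4), np.zeros(4)
u = universum_mixup(x0, 0, x1, 1, lam=0.5)
print(u)  # midway between the two class prototypes: [0.5 0.5 0.5 0.5]
```

With $\lambda$ close to 0 or 1 the mixture collapses back to an ordinary (possibly label-noisy) sample, which is why a mid-range coefficient is the interesting regime for universum-style data.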
no code implementations • 11 Apr 2022 • Jiayu Yao, Qingyuan Wu, Quan Feng, Songcan Chen
Self-supervised learning (SSL), as a newly emerging unsupervised representation learning paradigm, generally follows a two-stage pipeline: 1) learning invariant and discriminative representations with auto-annotation pretext task(s), then 2) transferring the representations to assist downstream task(s).
no code implementations • 7 Apr 2022 • Weikai Li, Meng Cao, Songcan Chen
Unsupervised Source (data) Free domain adaptation (USFDA) aims to transfer knowledge from a well-trained source model to a related but unlabeled target domain.
no code implementations • 5 Mar 2022 • Zhongchen Ma, Songcan Chen
Similarity-based methods give rise to a new class of approaches for multi-label learning and also achieve promising performance.
1 code implementation • 31 Jan 2022 • Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen
Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation.
no code implementations • 7 Jan 2022 • Quan Feng, Songcan Chen
Multi-task learning aims to improve model performance by transferring and exploiting common knowledge among tasks.
1 code implementation • 29 Aug 2021 • Weikai Li, Songcan Chen
Considering the difficulty of perfect alignment in solving PDA, we instead focus on model smoothness while discarding the riskier domain alignment, so as to enhance the adaptability of the model.
no code implementations • 6 Aug 2021 • Yunxia Lin, Songcan Chen
To eliminate the deviation, we propose two Rectified Euler k-means methods, i.e., REK1 and REK2, which retain the merits of EulerK while acquiring real centroids residing in the mapped space to better characterize the data structures.
no code implementations • ICML Workshop AML 2021 • Keji Han, Yun Li, Songcan Chen
Many works have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial examples.
no code implementations • 27 Mar 2021 • Kun-Peng Ning, Lue Tao, Songcan Chen, Sheng-Jun Huang
Recently, much research has been devoted to improving the model robustness by training with noise perturbations.
2 code implementations • NeurIPS 2021 • Lue Tao, Lei Feng, JinFeng Yi, Sheng-Jun Huang, Songcan Chen
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples.
no code implementations • 29 Jan 2021 • Quan Feng, Songcan Chen
However, to the best of our knowledge, there is limited study on twofold heterogeneous MTL (THMTL) scenario where the input and the output spaces are both inconsistent or heterogeneous.
1 code implementation • 29 Dec 2020 • Lue Tao, Lei Feng, JinFeng Yi, Songcan Chen
In this paper, we unveil the threat of hypocritical examples -- inputs that are originally misclassified yet perturbed by a false friend to force correct predictions.
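The core mechanism can be sketched by inverting ordinary adversarial-example generation: descend, rather than ascend, the loss with respect to the input. The linear logistic model, step sizes, and helper name below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def hypocritical_perturb(x, y, w, b, eps=0.5, steps=10, lr=0.1):
    """Perturb a (misclassified) input x so that a linear logistic model
    predicts the true label y: gradient *descent* on the log-loss w.r.t. x,
    the opposite direction of a standard adversarial attack, constrained
    to an eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))   # P(y=1 | x_adv)
        grad = (p - y) * w                            # d(logloss)/dx
        x_adv = x_adv - lr * grad                     # *descend* the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)     # stay in the eps-ball
    return x_adv

# toy usage: a point with true label 1 that the model misclassifies
w, b = np.array([1.0, 1.0]), 0.0
x = np.array([-0.2, -0.1])          # w @ x + b < 0, so predicted class 0
x_h = hypocritical_perturb(x, 1, w, b)
print(x_h @ w + b > 0)              # the perturbed input is now "correct"
```

The danger the paper points to is precisely that such "helpful" perturbations hide a model's errors at evaluation time rather than fixing them.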
1 code implementation • 24 Dec 2020 • Weikai Li, Chuanxing Geng, Songcan Chen
On the one hand, in small-data cases, CV suffers from a conservatively biased estimation, since part of the limited data has to be held out for validation.
no code implementations • 28 Sep 2020 • Lue Tao, Songcan Chen
In this paper, we formalize the hypocritical risk for the first time and propose a defense method specialized for hypocritical examples by minimizing the tradeoff between natural risk and an upper bound of hypocritical risk.
no code implementations • 20 Sep 2020 • Yunxia Lin, Songcan Chen
The latter directly or explicitly imposes a block-diagonal structure prior, such as block diagonal representation (BDR), to ensure the desired block diagonality even when the data are noisy, but at the expense of losing the convexity that the former's objective possesses.
1 code implementation • 1 Sep 2020 • Weikai Li, Songcan Chen
Unsupervised Domain Adaptation (UDA) aims to classify unlabeled target domain by transferring knowledge from labeled source domain with domain shift.
no code implementations • 4 Aug 2020 • Feihu Huang, Songcan Chen, Heng Huang
Our theoretical analysis shows that the online SPIDER-ADMM has the IFO complexity of $\mathcal{O}(\epsilon^{-\frac{3}{2}})$, which improves the existing best results by a factor of $\mathcal{O}(\epsilon^{-\frac{1}{2}})$.
1 code implementation • ICML 2020 • Feihu Huang, Lue Tao, Songcan Chen
To relax the large batches required in the Acc-SZOFW, we further propose a novel accelerated stochastic zeroth-order Frank-Wolfe (Acc-SZOFW*) based on a new variance reduced technique of STORM, which still reaches the function query complexity of $O(d\epsilon^{-3})$ in the stochastic problem without relying on any large batches.
1 code implementation • 3 May 2020 • Xiang Li, Songcan Chen
In aligning, we characterize the global and local structures of multiple labels to be high-rank and low-rank, respectively.
no code implementations • 27 Apr 2020 • Yunxia Lin, Songcan Chen
Like k-means and Gaussian Mixture Model (GMM), fuzzy c-means (FCM) with soft partition has also become a popular clustering algorithm and still is extensively studied.
no code implementations • 8 Apr 2020 • Zhongchen Ma, Songcan Chen
In multi-label learning, the issue of missing labels brings a major challenge.
no code implementations • 22 Feb 2020 • Chuanxing Geng, Zhenghao Tan, Songcan Chen
Specifically, a simple multi-view learning framework (SSL-MV) is specially designed, which assists the feature learning of downstream tasks (the original view) through the same tasks on the augmented views.
no code implementations • 12 Aug 2019 • Chuanxing Geng, Lue Tao, Songcan Chen
On the other hand, for G-OSR, introducing such semantic information of known classes not only improves the recognition performance but also endows OSR with the cognitive ability of unknown classes.
no code implementations • 29 May 2019 • Feihu Huang, Shangqian Gao, Songcan Chen, Heng Huang
In particular, our methods not only reach the best convergence rate $O(1/T)$ for the nonconvex optimization, but also are able to effectively solve many complex machine learning problems with multiple regularized penalties and constraints.
no code implementations • 7 Mar 2019 • Menglei Hu, Songcan Chen
Specifically, on the one hand, DAIMC utilizes the given instance alignment information to learn a common latent feature matrix for all the views.
no code implementations • 2 Mar 2019 • Menglei Hu, Songcan Chen
Real data often come with multiple modalities or from multiple heterogeneous sources, forming so-called multi-view data, which has received more and more attention in machine learning.
no code implementations • 16 Feb 2019 • Feihu Huang, Bin Gu, Zhouyuan Huo, Songcan Chen, Heng Huang
The proximal gradient method has played an important role in solving many machine learning tasks, especially nonsmooth problems.
no code implementations • 21 Nov 2018 • Chuanxing Geng, Sheng-Jun Huang, Songcan Chen
A more realistic scenario is open set recognition (OSR), where incomplete knowledge of the world exists at training time, and unknown classes can be submitted to an algorithm during testing, requiring the classifiers to not only accurately classify the seen classes, but also effectively deal with the unseen ones.
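A minimal open-set baseline that satisfies this requirement (an assumed illustration of the generic thresholding idea, not any specific method from the survey) rejects inputs whose maximum softmax probability falls below a confidence threshold:

```python
import numpy as np

def openset_predict(logits, threshold=0.9, unknown=-1):
    """Turn a closed-set softmax classifier into an open-set one:
    predict a known class only when the max softmax probability
    clears the threshold; otherwise reject the input as 'unknown'."""
    z = logits - logits.max()                 # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    k = int(np.argmax(probs))
    return k if probs[k] >= threshold else unknown

# a confident known-class input vs. an ambiguous, possibly unknown one
print(openset_predict(np.array([8.0, 0.0, 0.0])))   # -> 0  (confident)
print(openset_predict(np.array([1.0, 0.9, 1.1])))   # -> -1 (rejected)
```

The threshold trades off closed-set accuracy against unknown detection, which is why more sophisticated OSR methods model open space explicitly instead of relying on a single confidence cutoff.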
no code implementations • 4 Sep 2018 • Huanhuan Yu, Menglei Hu, Songcan Chen
Unsupervised domain adaptation (UDA) aims to learn the unlabeled target domain by transferring the knowledge of the labeled source domain.
no code implementations • 29 Jun 2018 • Chuanxing Geng, Songcan Chen
In open set recognition (OSR), almost all existing methods are designed specially for recognizing individual instances, even when these instances arrive collectively in a batch.
no code implementations • 15 Feb 2018 • Sheng-Jun Huang, Miao Xu, Ming-Kun Xie, Masashi Sugiyama, Gang Niu, Songcan Chen
Feature missing is a serious problem in many applications, which may lead to low quality of training data and further significantly degrade the learning performance.
no code implementations • 8 Feb 2018 • Feihu Huang, Songcan Chen
Moreover, we extend the mini-batch stochastic gradient method to both the nonconvex SVRG-ADMM and SAGA-ADMM proposed in our initial manuscript \cite{huang2016stochastic}, and prove that these mini-batch stochastic ADMMs also reach the convergence rate of $O(1/T)$ without any condition on the mini-batch size.
no code implementations • 26 Apr 2017 • Feihu Huang, Songcan Chen
To the best of our knowledge, this is the first proof that an accelerated SGD method converges linearly to a local minimum of a nonconvex optimization problem.
no code implementations • 10 Oct 2016 • Feihu Huang, Songcan Chen, Zhaosong Lu
Specifically, the first class called the nonconvex stochastic variance reduced gradient ADMM (SVRG-ADMM), uses a multi-stage scheme to progressively reduce the variance of stochastic gradients.
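The multi-stage variance-reduction idea behind SVRG (shown here as a plain gradient step on a least-squares toy problem, without the ADMM machinery; the step size, stage lengths, and data are assumptions) keeps a snapshot's full gradient and corrects each stochastic gradient against it:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
b = rng.normal(size=50)

def grad_i(w, i):
    """Gradient of the i-th squared residual 0.5*(a_i @ w - b_i)^2."""
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    """Full gradient of the averaged least-squares objective."""
    return A.T @ (A @ w - b) / len(b)

def svrg(w, epochs=20, inner=50, lr=0.05):
    for _ in range(epochs):                   # each stage: fresh snapshot
        w_snap, g_snap = w.copy(), full_grad(w)
        for _ in range(inner):
            i = rng.integers(len(b))
            # variance-reduced estimator: unbiased, and its variance
            # shrinks as w approaches the snapshot/optimum
            v = grad_i(w, i) - grad_i(w_snap, i) + g_snap
            w = w - lr * v
    return w

w = svrg(np.zeros(5))
print(np.linalg.norm(full_grad(w)))   # near zero at the least-squares optimum
```

Plain SGD would plateau at a noise floor set by the gradient variance; the snapshot correction is what lets SVRG-style methods keep a constant step size and still converge, which is the property the nonconvex SVRG-ADMM above inherits.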
no code implementations • 14 Sep 2016 • Qing Tian, Songcan Chen
In human face-based biometrics, gender classification and age estimation are two typical learning tasks.
no code implementations • 13 Sep 2016 • Qing Tian, Songcan Chen, Xiaoyang Tan
Although it improves age estimation performance, such a concatenation not only likely confuses the semantics of gender and age, but also ignores the aging discrepancy between males and females.
no code implementations • 18 May 2015 • Liping Wang, Songcan Chen
In this paper, a joint representation classification (JRC) for collective face recognition is proposed.
no code implementations • 12 Jan 2015 • Xiaoqian Qin, Xiaoyang Tan, Songcan Chen
One major challenge in computer vision is to go beyond the modeling of individual objects and to investigate the bi- (one-versus-one) or tri- (one-versus-two) relationships among multiple visual entities, answering questions such as whether a child in a photo belongs to the given parents.