Search Results for author: Songcan Chen

Found 58 papers, 12 papers with code

PIE: Physics-inspired Low-light Enhancement

no code implementations • 6 Apr 2024 • Dong Liang, Zhengyan Xu, Ling Li, Mingqiang Wei, Songcan Chen

In this paper, we propose a physics-inspired contrastive learning paradigm for low-light enhancement, called PIE.

Contrastive Learning • Face Detection • +1

All Beings Are Equal in Open Set Recognition

no code implementations • 31 Jan 2024 • Chaohua Li, Enhao Zhang, Chuanxing Geng, Songcan Chen

In open-set recognition (OSR), a promising strategy is to exploit pseudo-unknown data outside the given $K$ known classes as an additional $(K+1)$-th class to explicitly model the potential open space.

Contrastive Learning • Open Set Learning
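To make the $(K+1)$-class idea concrete, here is a minimal, hypothetical sketch (the feature dimension, the class count K, the linear classifier, and the batch variables are illustrative assumptions, not the paper's model): known samples keep their labels 0..K-1, while pseudo-unknown samples are assigned the extra label K.

    # Hypothetical sketch: treat pseudo-unknown samples as an extra (K+1)-th class.
    # All names (K, feature dim, training_step) are illustrative, not from the paper.
    import torch
    import torch.nn as nn

    K = 10                                 # number of known classes
    model = nn.Linear(128, K + 1)          # classifier with one extra "unknown" logit
    criterion = nn.CrossEntropyLoss()

    def training_step(known_x, known_y, pseudo_unknown_x):
        # Known features keep labels 0..K-1; pseudo-unknowns get the label K.
        x = torch.cat([known_x, pseudo_unknown_x], dim=0)
        unknown_y = torch.full((len(pseudo_unknown_x),), K, dtype=torch.long)
        y = torch.cat([known_y, unknown_y], dim=0)
        return criterion(model(x), y)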

TimesURL: Self-supervised Contrastive Learning for Universal Time Series Representation Learning

no code implementations • 25 Dec 2023 • Jiexi Liu, Songcan Chen

Learning universal time series representations applicable to various types of downstream tasks is challenging but valuable in real applications.

Anomaly Detection • Contrastive Learning • +4

Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery

1 code implementation • 31 Aug 2023 • Yuyan Zhou, Dong Liang, Songcan Chen, Sheng-Jun Huang, Shuo Yang, Chongyi Li

In this paper, we propose a solution to improve the performance of lens flare removal by revisiting the ISP, remodeling the principle of automatic exposure in the synthesis pipeline, and designing a more reliable light sources recovery strategy.

Flare Removal • Tone Mapping

ALL-E: Aesthetics-guided Low-light Image Enhancement

no code implementations • 28 Apr 2023 • Ling Li, Dong Liang, Yuanhang Gao, Sheng-Jun Huang, Songcan Chen

In this paper, we propose a new paradigm, i.e., aesthetics-guided low-light image enhancement (ALL-E), which introduces aesthetic preferences to LLE and motivates training in a reinforcement learning framework with an aesthetic reward.

Low-Light Image Enhancement • valid

Near-Optimal Decentralized Momentum Method for Nonconvex-PL Minimax Problems

no code implementations • 21 Apr 2023 • Feihu Huang, Songcan Chen

Moreover, we provide a solid convergence analysis for our DM-GDA method, and prove that it obtains a near-optimal gradient complexity of $O(\epsilon^{-3})$ for finding an $\epsilon$-stationary solution of the nonconvex-PL stochastic minimax problems, which reaches the lower bound of nonconvex stochastic optimization.

Stochastic Optimization

Pushing One Pair of Labels Apart Each Time in Multi-Label Learning: From Single Positive to Full Labels

no code implementations • 28 Feb 2023 • Xiang Li, Xinrui Wang, Songcan Chen

In Multi-Label Learning (MLL), it is extremely challenging to accurately annotate every appearing object due to expensive costs and limited knowledge.

Multi-Label Learning

PointSmile: Point Self-supervised Learning via Curriculum Mutual Information

no code implementations • 30 Jan 2023 • Xin Li, Mingqiang Wei, Songcan Chen

From the perspective of how-and-what-to-learn, PointSmile is designed to imitate human curriculum learning, i.e., starting with an easy curriculum and gradually increasing its difficulty.

Data Augmentation • Self-Supervised Learning
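As a rough illustration of the easy-to-hard idea (the linear schedule below is a generic assumption, not PointSmile's actual curriculum), a single difficulty knob can be ramped up over training, for example to control how aggressively point clouds are augmented.

    # Generic curriculum schedule (illustrative only): difficulty grows linearly
    # from d_min to d_max over the course of training.
    def curriculum_difficulty(epoch, total_epochs, d_min=0.1, d_max=1.0):
        t = min(epoch / max(total_epochs - 1, 1), 1.0)
        return d_min + t * (d_max - d_min)

    for epoch in range(5):
        print(epoch, round(curriculum_difficulty(epoch, 5), 2))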

Improving Lens Flare Removal with General-Purpose Pipeline and Multiple Light Sources Recovery

1 code implementation • ICCV 2023 • Yuyan Zhou, Dong Liang, Songcan Chen, Sheng-Jun Huang, Shuo Yang, Chongyi Li

In this paper, we propose a solution to improve the performance of lens flare removal by revisiting the ISP, remodeling the principle of automatic exposure in the synthesis pipeline, and designing a more reliable light sources recovery strategy.

Flare Removal • Tone Mapping

Adaptive Federated Minimax Optimization with Lower Complexities

no code implementations • 14 Nov 2022 • Feihu Huang, Xinrui Wang, Junyi Li, Songcan Chen

To fill this gap, in this paper we study a class of nonconvex minimax optimization problems, and propose an efficient adaptive federated minimax optimization algorithm (i.e., AdaFGDA) to solve these distributed minimax problems.

Federated Learning • Privacy Preserving

Label Structure Preserving Contrastive Embedding for Multi-Label Learning with Missing Labels

1 code implementation • 3 Sep 2022 • Zhongchen Ma, Lisha Li, Qirong Mao, Songcan Chen

However, these CL methods cannot be directly adapted to multi-label image classification because of the difficulty of defining the positive and negative instances to contrast against a given anchor image in the multi-label scenario, let alone the label-missing one; borrowing the commonly-used definition from contrastive multi-class learning would incur many false negative instances that are unfavorable for learning.

Contrastive Learning • Missing Labels • +3
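For intuition only, one naive positive/negative definition in the multi-label, missing-label setting (an assumption for illustration, not the paper's label-structure-preserving scheme) treats two images as positives when they share at least one observed label and simply ignores missing entries, encoded as -1 below.

    # Naive multi-label positive mask (illustrative): two anchors are positives if
    # they share at least one observed label; missing labels (-1) are ignored.
    import numpy as np

    def positive_mask(labels):
        """labels: (N, C) array with 1 = present, 0 = absent, -1 = missing."""
        observed_pos = (labels == 1).astype(float)
        shared = observed_pos @ observed_pos.T      # count of shared observed labels
        mask = (shared > 0).astype(float)
        np.fill_diagonal(mask, 0.0)                 # an anchor is not its own positive
        return mask

    labels = np.array([[1, 0, -1], [1, -1, 0], [0, 1, 1]])
    print(positive_mask(labels))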

Class-Aware Universum Inspired Re-Balance Learning for Long-Tailed Recognition

no code implementations • 26 Jul 2022 • Enhao Zhang, Chuanxing Geng, Songcan Chen

For these issues, we propose Class-aware Universum Inspired Re-balance Learning (CaUIRL) for long-tailed recognition, which endows the Universum with class-aware ability to re-balance individual minority classes in terms of both sample quantity and quality.

Data Augmentation

Towards Adaptive Unknown Authentication for Universal Domain Adaptation by Classifier Paradox

no code implementations • 10 Jul 2022 • Yunyun Wang, Yao Liu, Songcan Chen

In this paper, we propose a new UniDA method with adaptive Unknown Authentication by Classifier Paradox (UACP), considering that samples with paradoxical predictions are probably unknowns belonging to none of the source classes.

Universal Domain Adaptation • Unsupervised Domain Adaptation

Dual-Correction Adaptation Network for Noisy Knowledge Transfer

no code implementations • 10 Jul 2022 • Yunyun Wang, Weiwen Zheng, Songcan Chen

Previous unsupervised domain adaptation (UDA) methods aim to promote target learning via single-directional knowledge transfer from the label-rich source domain to the unlabeled target domain, while the reverse adaptation from target to source has not yet been jointly considered.

Transfer Learning • Unsupervised Domain Adaptation

Reconstruction Enhanced Multi-View Contrastive Learning for Anomaly Detection on Attributed Networks

no code implementations • 10 May 2022 • Jiaqiang Zhang, Senzhang Wang, Songcan Chen

Detecting abnormal nodes from attributed networks is of great importance in many real applications, such as financial fraud detection and cyber security.

Anomaly Detection • Attribute • +3

A Novel Splitting Criterion Inspired by Geometric Mean Metric Learning for Decision Tree

no code implementations • 23 Apr 2022 • Dan Li, Songcan Chen

Decision tree (DT) attracts persistent research attention due to its impressive empirical performance and interpretability in numerous applications.

Metric Learning

Universum-inspired Supervised Contrastive Learning

1 code implementation • 22 Apr 2022 • Aiyang Han, Chuanxing Geng, Songcan Chen

In this paper, inspired by Universum Learning which uses out-of-class samples to assist the target tasks, we investigate Mixup from a largely under-explored perspective - the potential to generate in-domain samples that belong to none of the target classes, that is, universum.

Contrastive Learning • Data Augmentation
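A toy sketch of the underlying intuition (assumed details, not necessarily the paper's exact construction): Mixup between samples drawn from two different classes yields an in-domain point that belongs to neither class, which can then serve as universum data for the contrastive objective.

    # Mixup-generated universum sample (toy illustration).
    import numpy as np

    def mixup_universum(x_a, x_b, alpha=1.0, rng=np.random.default_rng(0)):
        lam = rng.beta(alpha, alpha)            # Mixup coefficient
        return lam * x_a + (1.0 - lam) * x_b    # belongs to neither source class

    x_cat, x_dog = np.ones(4), np.zeros(4)      # toy features from two classes
    print(mixup_universum(x_cat, x_dog))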

Learning Downstream Task by Selectively Capturing Complementary Knowledge from Multiple Self-supervisedly Learning Pretexts

no code implementations • 11 Apr 2022 • Jiayu Yao, Qingyuan Wu, Quan Feng, Songcan Chen

Self-supervised learning (SSL), as a newly emerging unsupervised representation learning paradigm, generally follows a two-stage learning pipeline: 1) learning invariant and discriminative representations with auto-annotation pretext(s), then 2) transferring the representations to assist downstream task(s).

Representation Learning • Self-Supervised Learning
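The two-stage pipeline described above can be sketched generically as follows (the encoder and head shapes and the rotation-prediction pretext are placeholder assumptions; the paper's selective combination of multiple pretexts is not shown).

    # Generic two-stage SSL pipeline (illustrative): 1) pretrain an encoder on a
    # pretext task, 2) reuse the learned representation for a downstream task.
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
    pretext_head = nn.Linear(64, 4)       # e.g. predict one of 4 auto-annotated rotations
    downstream_head = nn.Linear(64, 10)   # stage 2 head trained on downstream labels

    def pretext_forward(x):               # stage 1: encoder + pretext head
        return pretext_head(encoder(x))

    def downstream_forward(x):            # stage 2: transfer the representation
        return downstream_head(encoder(x))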

Jacobian Norm for Unsupervised Source-Free Domain Adaptation

no code implementations • 7 Apr 2022 • Weikai Li, Meng Cao, Songcan Chen

Unsupervised Source (data) Free domain adaptation (USFDA) aims to transfer knowledge from a well-trained source model to a related but unlabeled target domain.

Source-Free Domain Adaptation

A Similarity-based Framework for Classification Task

no code implementations • 5 Mar 2022 • Zhongchen Ma, Songcan Chen

Similarity-based methods give rise to a new class of approaches for multi-label learning and also achieve promising performance.

Classification • Multi-Label Learning

Can Adversarial Training Be Manipulated By Non-Robust Features?

1 code implementation • 31 Jan 2022 • Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen

Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation.
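For context, a standard single-step adversarial-training update with an $\epsilon$-bounded perturbation budget looks roughly like the sketch below (a generic FGSM-style step under assumed names; the paper's stability attack and its theoretical analysis are not reproduced here).

    # Generic epsilon-bounded adversarial training step (illustrative, not the
    # paper's attack or defense).
    import torch
    import torch.nn as nn

    def adv_train_step(model, x, y, eps, loss_fn=nn.CrossEntropyLoss()):
        # Craft an epsilon-bounded FGSM perturbation of the inputs.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = torch.clamp(x_adv + eps * grad.sign(), 0.0, 1.0)
        # Train on the perturbed inputs (caller performs backward() and the step).
        return loss_fn(model(x_adv), y)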

Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task

no code implementations • 7 Jan 2022 • Quan Feng, Songcan Chen

Multi-task learning aims to improve model performance by transferring and exploiting common knowledge among tasks.

Multi-Task Learning

Partial Domain Adaptation without Domain Alignment

1 code implementation • 29 Aug 2021 • Weikai Li, Songcan Chen

Considering the difficulty of perfect alignment in solving PDA, we turn to focus on model smoothness while discarding the riskier domain alignment to enhance the adaptability of the model.

Partial Domain Adaptation • Unsupervised Domain Adaptation

Rectified Euler k-means and Beyond

no code implementations • 6 Aug 2021 • Yunxia Lin, Songcan Chen

To eliminate the deviation, we propose two Rectified Euler k-means methods, i.e., REK1 and REK2, which retain the merits of EulerK while acquiring real centroids residing in the mapped space to better characterize the data structures.

Improving Model Robustness by Adaptively Correcting Perturbation Levels with Active Queries

no code implementations • 27 Mar 2021 • Kun-Peng Ning, Lue Tao, Songcan Chen, Sheng-Jun Huang

Recently, much research has been devoted to improving the model robustness by training with noise perturbations.

Active Learning

Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training

2 code implementations • NeurIPS 2021 • Lue Tao, Lei Feng, JinFeng Yi, Sheng-Jun Huang, Songcan Chen

Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples.

Learning Twofold Heterogeneous Multi-Task by Sharing Similar Convolution Kernel Pairs

no code implementations • 29 Jan 2021 • Quan Feng, Songcan Chen

However, to the best of our knowledge, there is limited study on the twofold heterogeneous MTL (THMTL) scenario, where the input and output spaces are both inconsistent or heterogeneous.

Multi-Task Learning

With False Friends Like These, Who Can Notice Mistakes?

1 code implementation • 29 Dec 2020 • Lue Tao, Lei Feng, JinFeng Yi, Songcan Chen

In this paper, we unveil the threat of hypocritical examples -- inputs that are originally misclassified yet perturbed by a false friend to force correct predictions.

Leave Zero Out: Towards a No-Cross-Validation Approach for Model Selection

1 code implementation • 24 Dec 2020 • Weikai Li, Chuanxing Geng, Songcan Chen

On the one hand, for small-data cases, CV suffers from a conservatively biased estimation, since part of the limited data has to be held out for validation.

Model Selection
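For reference, the conventional k-fold cross-validation baseline that the paper seeks to sidestep looks like the standard scikit-learn recipe below (arbitrary model and dataset, unrelated to the paper's experiments): in every fold, part of the data is held out and never used for fitting.

    # Standard 5-fold CV baseline (for contrast with the no-cross-validation idea).
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(scores.mean())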

With False Friends Like These, Who Can Have Self-Knowledge?

no code implementations • 28 Sep 2020 • Lue Tao, Songcan Chen

In this paper, we formalize the hypocritical risk for the first time and propose a defense method specialized for hypocritical examples by minimizing the tradeoff between natural risk and an upper bound of hypocritical risk.

Convex Subspace Clustering by Adaptive Block Diagonal Representation

no code implementations • 20 Sep 2020 • Yunxia Lin, Songcan Chen

The latter, in contrast, directly or explicitly imposes a block-diagonal structure prior, such as block diagonal representation (BDR), to ensure the desired block diagonality even when the data is noisy, but at the expense of losing the convexity that the former's objective possesses.

Clustering

Unsupervised Domain Adaptation with Progressive Adaptation of Subspaces

1 code implementation • 1 Sep 2020 • Weikai Li, Songcan Chen

Unsupervised Domain Adaptation (UDA) aims to classify the unlabeled target domain by transferring knowledge from the labeled source domain under domain shift.

Partial Domain Adaptation • Transfer Learning • +1

Faster Stochastic Alternating Direction Method of Multipliers for Nonconvex Optimization

no code implementations • 4 Aug 2020 • Feihu Huang, Songcan Chen, Heng Huang

Our theoretical analysis shows that the online SPIDER-ADMM has the IFO complexity of $\mathcal{O}(\epsilon^{-\frac{3}{2}})$, which improves the existing best results by a factor of $\mathcal{O}(\epsilon^{-\frac{1}{2}})$.

Accelerated Stochastic Gradient-free and Projection-free Methods

1 code implementation • ICML 2020 • Feihu Huang, Lue Tao, Songcan Chen

To relax the large batches required in the Acc-SZOFW, we further propose a novel accelerated stochastic zeroth-order Frank-Wolfe (Acc-SZOFW*) based on a new variance reduced technique of STORM, which still reaches the function query complexity of $O(d\epsilon^{-3})$ in the stochastic problem without relying on any large batches.

Adversarial Attack

A Concise yet Effective model for Non-Aligned Incomplete Multi-view and Missing Multi-label Learning

1 code implementation • 3 May 2020 • Xiang Li, Songcan Chen

In aligning, we characterize the global and local structures of multiple labels to be high-rank and low-rank, respectively.

Missing Labels • Model Selection

A Centroid Auto-Fused Hierarchical Fuzzy c-Means Clustering

no code implementations • 27 Apr 2020 • Yunxia Lin, Songcan Chen

Like k-means and the Gaussian Mixture Model (GMM), fuzzy c-means (FCM) with soft partition has also become a popular clustering algorithm and is still extensively studied.

Clustering
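For background, the textbook fuzzy c-means updates with soft memberships are sketched below (plain FCM on assumed toy data, not the paper's centroid auto-fused hierarchical variant).

    # Textbook fuzzy c-means (soft partition), for background only.
    import numpy as np

    def fcm(X, c=3, m=2.0, iters=50, rng=np.random.default_rng(0)):
        U = rng.dirichlet(np.ones(c), size=len(X))        # soft memberships, shape (N, c)
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted centroids
            d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
            U = 1.0 / d ** (2.0 / (m - 1.0))              # standard membership update
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
    centers, _ = fcm(X)
    print(centers)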

A Multi-view Perspective of Self-supervised Learning

no code implementations • 22 Feb 2020 • Chuanxing Geng, Zhenghao Tan, Songcan Chen

Specifically, a simple multi-view learning framework is specially designed (SSL-MV), which assists the feature learning of downstream tasks (original view) through the same tasks on the augmented views.

Data Augmentation • MULTI-VIEW LEARNING • +1

Visual and Semantic Prototypes-Jointly Guided CNN for Generalized Zero-shot Learning

no code implementations • 12 Aug 2019 • Chuanxing Geng, Lue Tao, Songcan Chen

On the other hand, for G-OSR, introducing such semantic information of known classes not only improves the recognition performance but also endows OSR with the cognitive ability of unknown classes.

Generalized Zero-Shot Learning • Open Set Learning

Zeroth-Order Stochastic Alternating Direction Method of Multipliers for Nonconvex Nonsmooth Optimization

no code implementations • 29 May 2019 • Feihu Huang, Shangqian Gao, Songcan Chen, Heng Huang

In particular, our methods not only reach the best convergence rate $O(1/T)$ for the nonconvex optimization, but also are able to effectively solve many complex machine learning problems with multiple regularized penalties and constraints.

Adversarial Attack • BIG-bench Machine Learning • +1

Doubly Aligned Incomplete Multi-view Clustering

no code implementations • 7 Mar 2019 • Menglei Hu, Songcan Chen

Specifically, on the one hand, DAIMC utilizes the given instance alignment information to learn a common latent feature matrix for all the views.

Clustering • Incomplete multi-view clustering • +1

One-Pass Incomplete Multi-view Clustering

no code implementations • 2 Mar 2019 • Menglei Hu, Songcan Chen

Real data often come with multiple modalities or from multiple heterogeneous sources, thus forming so-called multi-view data, which receives more and more attention in machine learning.

Clustering • Incomplete multi-view clustering

Faster Gradient-Free Proximal Stochastic Methods for Nonconvex Nonsmooth Optimization

no code implementations • 16 Feb 2019 • Feihu Huang, Bin Gu, Zhouyuan Huo, Songcan Chen, Heng Huang

The proximal gradient method has been playing an important role in solving many machine learning tasks, especially nonsmooth problems.

BIG-bench Machine Learning

Recent Advances in Open Set Recognition: A Survey

no code implementations • 21 Nov 2018 • Chuanxing Geng, Sheng-Jun Huang, Songcan Chen

A more realistic scenario is open set recognition (OSR), where incomplete knowledge of the world exists at training time, and unknown classes can be submitted to an algorithm during testing, requiring the classifiers to not only accurately classify the seen classes, but also effectively deal with the unseen ones.

General Classification • Open Set Learning

Multi-target Unsupervised Domain Adaptation without Exactly Shared Categories

no code implementations • 4 Sep 2018 • Huanhuan Yu, Menglei Hu, Songcan Chen

Unsupervised domain adaptation (UDA) aims to learn the unlabeled target domain by transferring the knowledge of the labeled source domain.

Unsupervised Domain Adaptation

Collective decision for open set recognition

no code implementations • 29 Jun 2018 • Chuanxing Geng, Songcan Chen

In open set recognition (OSR), almost all existing methods are designed specially for recognizing individual instances, even when these instances arrive collectively in a batch.

Open Set Learning

Active Feature Acquisition with Supervised Matrix Completion

no code implementations • 15 Feb 2018 • Sheng-Jun Huang, Miao Xu, Ming-Kun Xie, Masashi Sugiyama, Gang Niu, Songcan Chen

Feature missing is a serious problem in many applications, which may lead to low quality of training data and further significantly degrade the learning performance.

Matrix Completion

Mini-Batch Stochastic ADMMs for Nonconvex Nonsmooth Optimization

no code implementations • 8 Feb 2018 • Feihu Huang, Songcan Chen

Moreover, we extend the mini-batch stochastic gradient method to both the nonconvex SVRG-ADMM and SAGA-ADMM proposed in our initial manuscript \cite{huang2016stochastic}, and prove that these mini-batch stochastic ADMMs also reach the convergence rate of $O(1/T)$ without any condition on the mini-batch size.

Linear Convergence of Accelerated Stochastic Gradient Descent for Nonconvex Nonsmooth Optimization

no code implementations • 26 Apr 2017 • Feihu Huang, Songcan Chen

To the best of our knowledge, this is the first proof that the accelerated SGD method converges linearly to a local minimum of a nonconvex optimization problem.

Stochastic Alternating Direction Method of Multipliers with Variance Reduction for Nonconvex Optimization

no code implementations • 10 Oct 2016 • Feihu Huang, Songcan Chen, Zhaosong Lu

Specifically, the first class called the nonconvex stochastic variance reduced gradient ADMM (SVRG-ADMM), uses a multi-stage scheme to progressively reduce the variance of stochastic gradients.

A Unified Gender-Aware Age Estimation

no code implementations • 13 Sep 2016 • Qing Tian, Songcan Chen, Xiaoyang Tan

Although it improves age estimation performance, such a concatenation not only tends to confuse the semantics of gender and age, but also ignores the aging discrepancy between males and females.

Age Estimation

Joint Representation Classification for Collective Face Recognition

no code implementations • 18 May 2015 • Liping Wang, Songcan Chen

In this paper, a joint representation classification (JRC) for collective face recognition is proposed.

Classification • Face Recognition • +2

Tri-Subject Kinship Verification: Understanding the Core of A Family

no code implementations • 12 Jan 2015 • Xiaoqian Qin, Xiaoyang Tan, Songcan Chen

One major challenge in computer vision is to go beyond the modeling of individual objects and to investigate the bi- (one-versus-one) or tri- (one-versus-two) relationship among multiple visual entities, answering such questions as whether a child in a photo belongs to given parents.

feature selection • Kinship Verification
