Search Results for author: Jian Huang

Found 37 papers, 6 papers with code

Non-Asymptotic Error Bounds for Bidirectional GANs

no code implementations NeurIPS 2021 Shiao Liu, Yunfei Yang, Jian Huang, Yuling Jiao, Yang Wang

Our results are also applicable to the Wasserstein bidirectional GAN if the target distribution is assumed to have bounded support.

A Learning-based Approach Towards Automated Tuning of SSD Configurations

no code implementations 17 Oct 2021 Daixuan Li, Jian Huang

Thanks to mature manufacturing techniques, solid-state drives (SSDs) are highly customizable for applications today, which brings opportunities to further improve their storage performance and resource utilization.

Relative Entropy Gradient Sampler for Unnormalized Distributions

no code implementations 6 Oct 2021 Xingdong Feng, Yuan Gao, Jian Huang, Yuling Jiao, Xu Liu

We propose a relative entropy gradient sampler (REGS) for sampling from unnormalized distributions.

An error analysis of generative adversarial networks for learning distributions

no code implementations 27 May 2021 Jian Huang, Yuling Jiao, Zhen Li, Shiao Liu, Yang Wang, Yunfei Yang

This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples.

Non-asymptotic Excess Risk Bounds for Classification with Deep Convolutional Neural Networks

no code implementations 1 May 2021 Guohao Shen, Yuling Jiao, Yuanyuan Lin, Jian Huang

To establish these results, we derive an upper bound on the covering number of the class of general convolutional neural networks with a bias term in each convolutional layer, and derive new results on the approximation power of CNNs for uniformly continuous target functions.
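
For context, a schematic of the standard argument behind such bounds (not the paper's exact statement): the excess risk of the empirical risk minimizer over the CNN class F is split into a stochastic term controlled by the covering number and an approximation term,

    R(\hat{f}) - R(f^*) \;\lesssim\; \sqrt{\frac{\log \mathcal{N}(\varepsilon, \mathcal{F})}{n}} \;+\; \Big( \inf_{f \in \mathcal{F}} R(f) - R(f^*) \Big),

so a covering number bound and CNN approximation results are precisely the two ingredients combined here.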

Dynamic sensitivity of quantum Rabi model with quantum criticality

no code implementations 5 Jan 2021 Ying Hu, Jian Huang, Jin-Feng Huang, Qiong-Tao Xie, Jie-Qiao Liao

We study the dynamic sensitivity of the quantum Rabi model, which exhibits quantum criticality in the finite-component-system case.

Quantum Physics

Toward Understanding Supervised Representation Learning with RKHS and GAN

no code implementations 1 Jan 2021 Xu Liao, Jin Liu, Tianwen Wen, Yuling Jiao, Jian Huang

At the population level, we formulate the ideal representation learning task as that of finding a nonlinear map that minimizes the sum of losses characterizing conditional independence (with RKHS) and disentanglement (with GAN).
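
Schematically (using hypothetical notation, not taken from the paper), the population objective described above can be written as

    \min_{f}\; L_{\mathrm{CI}}(f) + \lambda\, L_{\mathrm{dis}}(f),

where L_CI is an RKHS-based loss characterizing conditional independence, L_dis is a GAN-based disentanglement loss, and \lambda > 0 balances the two terms.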

Ranked #2 on Image Classification on STL-10 (using extra training data)

Image Classification Representation Learning

Sufficient and Disentangled Representation Learning

no code implementations 1 Jan 2021 Jian Huang, Yuling Jiao, Xu Liao, Jin Liu, Zhou Yu

We provide strong statistical guarantees for the learned representation by establishing an upper bound on the excess error of the objective function and showing that it reaches the nonparametric minimax rate under mild conditions.

Representation Learning

Quantum simulation of a three-mode optomechanical system based on the Fredkin-type interaction

no code implementations 17 Dec 2020 Jin Liu, Yue-Hui Zhou, Jian Huang, Jin-Feng Huang, Jie-Qiao Liao

The realization of multimode optomechanical interactions in the single-photon strong-coupling regime is a desired task in cavity optomechanics, but it remains a challenge in realistic physical systems.

Quantum Physics

Generative Learning With Euler Particle Transport

no code implementations 11 Dec 2020 Yuan Gao, Jian Huang, Yuling Jiao, Jin Liu, Xiliang Lu, Zhijian Yang

The key task in training is the estimation of the density ratios or differences that determine the residual maps.
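
A common way to estimate such density ratios (a generic identity, not necessarily the estimator used in the paper) is through a binary classifier D trained to separate samples of the current distribution q from samples of the target distribution p; with balanced classes,

    \frac{p(x)}{q(x)} \approx \frac{D(x)}{1 - D(x)}, \qquad \log\frac{p(x)}{q(x)} \approx \operatorname{logit} D(x).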

EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks

no code implementations 30 Oct 2020 Lubin Meng, Jian Huang, Zhigang Zeng, Xue Jiang, Shan Yu, Tzyy-Ping Jung, Chin-Teng Lin, Ricardo Chavarriaga, Dongrui Wu

Test samples with the backdoor key will then be classified into the target class specified by the attacker.

EEG

Transfer Learning for Motor Imagery Based Brain-Computer Interfaces: A Complete Pipeline

1 code implementation 3 Jul 2020 Dongrui Wu, Xue Jiang, Ruimin Peng, Wanzeng Kong, Jian Huang, Zhigang Zeng

Transfer learning (TL) has been widely used in motor imagery (MI) based brain-computer interfaces (BCIs) to reduce the calibration effort for a new subject, and has demonstrated promising performance.

Classification EEG +3

Deep Dimension Reduction for Supervised Representation Learning

1 code implementation 10 Jun 2020 Jian Huang, Yuling Jiao, Xu Liao, Jin Liu, Zhou Yu

We propose a deep dimension reduction approach to learning representations with these characteristics.

Dimensionality Reduction Representation Learning

Efficient Use of Heuristics for Accelerating XCS-based Policy Learning in Markov Games

no code implementations 26 May 2020 Hao Chen, Chang Wang, Jian Huang, Jianxing Gong

Moreover, by taking advantage of the condition representation and matching mechanism of XCS, the heuristic policies and the opponent model can provide guidance for situations with similar feature representations.

BoostTree and BoostForest for Ensemble Learning

no code implementations 21 Mar 2020 Changming Zhao, Dongrui Wu, Jian Huang, Ye Yuan, Hai-Tao Zhang, Ruimin Peng, Zhenhua Shi, Chenfeng Guo

Bootstrap aggregating (Bagging) and boosting are two popular ensemble learning approaches, which combine multiple base learners to generate a composite model for more accurate and more reliable performance.
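
As a reminder of what bootstrap aggregating does (a generic Python sketch of plain bagging, not the paper's BoostTree/BoostForest), each base learner is fit on a resample of the training data drawn with replacement and the predictions are averaged:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def bagging_predict(X_train, y_train, X_test, n_learners=50, seed=0):
        """Fit n_learners trees on bootstrap resamples and average their predictions."""
        rng = np.random.default_rng(seed)
        preds = []
        for _ in range(n_learners):
            idx = rng.integers(0, len(X_train), size=len(X_train))  # sample with replacement
            tree = DecisionTreeRegressor().fit(X_train[idx], y_train[idx])
            preds.append(tree.predict(X_test))
        return np.mean(preds, axis=0)

Boosting, by contrast, fits the base learners sequentially, each one on the residuals (or gradients) left by its predecessors.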

Ensemble Learning General Classification

Learning Implicit Generative Models with Theoretical Guarantees

no code implementations 7 Feb 2020 Yuan Gao, Jian Huang, Yuling Jiao, Jin Liu

We then solve the McKean-Vlasov equation numerically using the forward Euler iteration, where the forward Euler map depends on the density ratio (or density difference) between the distribution at the current iteration and the underlying target distribution.
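
A minimal sketch of one such forward Euler step (assuming, for brevity, a linear logistic-regression surrogate for the log density ratio; a deep estimator would normally be used in practice):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def euler_step(particles, target_samples, step_size=0.1):
        """Push particles along an estimated gradient of log(p_target / p_current)."""
        X = np.vstack([particles, target_samples])
        y = np.concatenate([np.zeros(len(particles)), np.ones(len(target_samples))])
        clf = LogisticRegression().fit(X, y)  # logit approximates the log density ratio
        grad_log_ratio = clf.coef_.ravel()    # gradient of a linear logit is its coefficient vector
        return particles + step_size * grad_log_ratio

Iterating this step transports an initial particle cloud toward the target distribution, which is the discretized dynamics the abstract refers to.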

On Newton Screening

no code implementations 27 Jan 2020 Jian Huang, Yuling Jiao, Lican Kang, Jin Liu, Yanyan Liu, Xiliang Lu, Yuanyuan Yang

Based on this KKT system, a built-in working set of relatively small size is first determined using the sum of the primal and dual variables generated from the previous iteration; the primal variable is then updated by solving a least-squares problem on the working set, and the dual variable is updated via a closed-form expression.
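
For reference, the KKT system being exploited here can be written, for the standardized LASSO (the paper's exact scaling may differ), as a fixed point in the primal variable beta and the dual variable d:

    d = \tfrac{1}{n} X^\top (y - X\beta), \qquad \beta = S_\lambda(\beta + d), \qquad S_\lambda(t) = \operatorname{sign}(t)\max(|t| - \lambda, 0),

so coordinates with large |beta_j + d_j| are natural candidates for the working set.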

Sparse Learning

A Support Detection and Root Finding Approach for Learning High-dimensional Generalized Linear Models

no code implementations 16 Jan 2020 Jian Huang, Yuling Jiao, Lican Kang, Jin Liu, Yanyan Liu, Xiliang Lu

Feature selection is important for modeling high-dimensional data, where the number of variables can be much larger than the sample size.

Feature Selection

Supervised Discriminative Sparse PCA with Adaptive Neighbors for Dimensionality Reduction

1 code implementation 9 Jan 2020 Zhenhua Shi, Dongrui Wu, Jian Huang, Yu-Kai Wang, Chin-Teng Lin

Approaches that preserve only the local data structure, such as locality preserving projections, are usually unsupervised (and hence cannot use label information) and use a fixed similarity graph.

General Classification Supervised dimensionality reduction

Unsupervised Representation Learning with Future Observation Prediction for Speech Emotion Recognition

no code implementations 24 Oct 2019 Zheng Lian, Jian-Hua Tao, Bin Liu, Jian Huang

Prior works on speech emotion recognition utilize various unsupervised learning approaches to deal with low-resource samples.

Fine-tuning Speech Emotion Recognition +2

Conversational Emotion Analysis via Attention Mechanisms

no code implementations 24 Oct 2019 Zheng Lian, Jian-Hua Tao, Bin Liu, Jian Huang

Different from emotion recognition in individual utterances, we propose a multimodal learning framework that uses the relations and dependencies among utterances for conversational emotion analysis.

Emotion Recognition

Domain adversarial learning for emotion recognition

no code implementations 24 Oct 2019 Zheng Lian, Jian-Hua Tao, Bin Liu, Jian Huang

The secondary task is to learn a common representation in which speaker identities cannot be distinguished.
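
Domain-adversarial objectives of this kind are often implemented with a gradient reversal layer, which passes features through unchanged in the forward pass but flips the gradient flowing back from the speaker classifier. A minimal PyTorch sketch of that general technique (an illustration, not the paper's exact architecture):

    import torch

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; negates (and scales) the gradient in the backward pass."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    def grad_reverse(x, lambd=1.0):
        return GradReverse.apply(x, lambd)

    # usage sketch: the emotion head sees the features directly,
    # while the speaker head receives reversed gradients
    # emotion_logits = emotion_head(features)
    # speaker_logits = speaker_head(grad_reverse(features))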

Emotion Recognition

Expression Analysis Based on Face Regions in Real-world Conditions

no code implementations 23 Oct 2019 Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang, Ming-Yue Niu

To sum up, the contributions of this paper are twofold: 1) we visualize the face regions of concern in emotion recognition; 2) we analyze the contribution of different face regions to different emotions in real-world conditions through experimental analysis.

Emotion Recognition Facial Expression Recognition

Speech Emotion Recognition via Contrastive Loss under Siamese Networks

no code implementations 23 Oct 2019 Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang

It outperforms the baseline system optimized without the contrastive loss function by 1.14% and 2.55% in weighted accuracy and unweighted accuracy, respectively.
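
The contrastive loss used with Siamese networks typically takes the standard form (the paper may use a variant)

    L = y\, d^2 + (1 - y)\, \max(0,\, m - d)^2,

where d is the distance between the two embedded utterances, y = 1 for a same-emotion pair, y = 0 otherwise, and m is a margin.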

Feature Selection Motion Capture +1

Optimize TSK Fuzzy Systems for Classification Problems: Mini-Batch Gradient Descent with Uniform Regularization and Batch Normalization

1 code implementation 1 Aug 2019 Yuqi Cui, Jian Huang, Dongrui Wu

Takagi-Sugeno-Kang (TSK) fuzzy systems are flexible and interpretable machine learning models; however, they may not be easily optimized when the data size is large and/or the data dimensionality is high.

General Classification Interpretable Machine Learning

SNAP: A semismooth Newton algorithm for pathwise optimization with optimal local convergence rate and oracle properties

no code implementations 9 Oct 2018 Jian Huang, Yuling Jiao, Xiliang Lu, Yueyong Shi, Qinglong Yang

We propose a semismooth Newton algorithm for pathwise optimization (SNAP) for the LASSO and Enet in sparse, high-dimensional linear regression.
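
For reference, the LASSO and Enet objectives that SNAP targets have the standard form (up to the exact scaling used in the paper)

    \min_{\beta}\; \frac{1}{2n}\|y - X\beta\|_2^2 + \lambda\Big(\alpha\|\beta\|_1 + \frac{1-\alpha}{2}\|\beta\|_2^2\Big),

with \alpha = 1 recovering the LASSO and 0 < \alpha < 1 the elastic net.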

Affect Estimation in 3D Space Using Multi-Task Active Learning for Regression

no code implementations 8 Aug 2018 Dongrui Wu, Jian Huang

Acquisition of labeled training samples for affective computing is usually costly and time-consuming, as affects are intrinsically subjective, subtle and uncertain, and hence multiple human assessors are needed to evaluate each affective sample.

Active Learning

Active Learning for Regression Using Greedy Sampling

1 code implementation 8 Aug 2018 Dongrui Wu, Chin-Teng Lin, Jian Huang

Active learning for regression (ALR) is a methodology to reduce the number of labeled samples by selecting the most beneficial ones to label instead of selecting them at random.
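
A minimal sketch of the diversity-based greedy idea in input space (an input-space-only simplification; the paper's selection criteria may differ):

    import numpy as np

    def greedy_sampling(X, n_select, seed_idx=0):
        """Greedily pick samples that maximize the minimum distance to those already selected."""
        selected = [seed_idx]
        d_min = np.linalg.norm(X - X[seed_idx], axis=1)  # distance to nearest selected sample
        while len(selected) < n_select:
            nxt = int(np.argmax(d_min))
            selected.append(nxt)
            d_min = np.minimum(d_min, np.linalg.norm(X - X[nxt], axis=1))
        return selected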

Active Learning EEG

Photo-Guided Exploration of Volume Data Features

no code implementations 18 Oct 2017 Mohammad Raji, Alok Hota, Robert Sisneros, Peter Messmer, Jian Huang

In this work, we pose the question of whether, by considering qualitative information such as a sample target image as input, one can produce a rendered image of scientific data that is similar to the target.

Semismooth Newton Coordinate Descent Algorithm for Elastic-Net Penalized Huber Loss Regression and Quantile Regression

no code implementations 9 Sep 2015 Congrui Yi, Jian Huang

We propose an algorithm, semismooth Newton coordinate descent (SNCD), for elastic-net penalized Huber loss regression and quantile regression in high-dimensional settings.
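
The Huber loss in the penalized objective is the standard one,

    \ell_\delta(t) = \begin{cases} \tfrac{1}{2} t^2, & |t| \le \delta, \\ \delta\big(|t| - \tfrac{\delta}{2}\big), & |t| > \delta, \end{cases}

combined with the elastic-net penalty on the coefficients; for quantile regression the check loss \rho_\tau(t) = t\,(\tau - I(t < 0)) replaces it.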

A Unified Primal Dual Active Set Algorithm for Nonconvex Sparse Recovery

no code implementations 4 Oct 2013 Jian Huang, Yuling Jiao, Bangti Jin, Jin Liu, Xiliang Lu, Can Yang

In this paper, we consider the problem of recovering a sparse signal based on penalized least squares formulations.

SCAD-penalized regression in high-dimensional partially linear models

no code implementations 31 Mar 2009 Huiliang Xie, Jian Huang

We consider the problem of simultaneous variable selection and estimation in partially linear models with a divergent number of covariates in the linear part, under the assumption that the vector of regression coefficients is sparse.
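
The SCAD penalty referenced in the title is commonly specified through its derivative (for t > 0, with a > 2):

    p_\lambda'(t) = \lambda\Big\{ I(t \le \lambda) + \frac{(a\lambda - t)_+}{(a - 1)\lambda}\, I(t > \lambda) \Big\},

which applies soft-thresholding-like shrinkage to small coefficients and no shrinkage to sufficiently large ones, reducing the bias of the LASSO.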

Statistics Theory 62J05, 62G08 (Primary), 62E20 (Secondary)
