Search Results for author: Hongjun Wang

Found 11 papers, 3 papers with code

Adversarial Training using Contrastive Divergence

no code implementations • 1 Jan 2021 • Hongjun Wang, Guanbin Li, Liang Lin

To protect machine learning models against adversarial examples, adversarial training, which injects adversarial examples into the training data, has become the most popular and powerful strategy for defending against various adversarial attacks.
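The snippet below is a minimal sketch of the adversarial-training loop the abstract describes, using FGSM perturbations on a toy logistic-regression model; it is a generic illustration, not the paper's contrastive-divergence method, and all function names are hypothetical.

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps):
    """Fast Gradient Sign Method: step each input along the sign of the loss gradient."""
    return x + eps * np.sign(grad_x)

def train_adversarial(X, y, eps=0.1, lr=0.5, epochs=200):
    """Adversarial training for logistic regression: every update mixes the
    clean batch with FGSM-perturbed copies, as in standard adversarial training."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Forward pass on clean data
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        # Gradient of the logistic loss w.r.t. each input: (p_i - y_i) * w
        grad_x = np.outer(p - y, w)
        # Craft adversarial examples and stack them with the clean batch
        X_adv = fgsm_perturb(X, grad_x, eps)
        X_all = np.vstack([X, X_adv])
        y_all = np.concatenate([y, y])
        # Gradient step on the combined clean + adversarial batch
        p_all = 1.0 / (1.0 + np.exp(-(X_all @ w + b)))
        w -= lr * X_all.T @ (p_all - y_all) / len(y_all)
        b -= lr * np.mean(p_all - y_all)
    return w, b
```

In practice the perturbations are recomputed against the current model at every step, which is what makes adversarial training a min-max game rather than static data augmentation.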

A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning

no code implementations • 15 Oct 2020 • Hongjun Wang, Guanbin Li, Xiaobai Liu, Liang Lin

Although deep convolutional neural networks (CNNs) have demonstrated remarkable performance on multiple computer vision tasks, research on adversarial learning has shown that deep models are vulnerable to adversarial examples, which are crafted by adding visually imperceptible perturbations to the input images.

Adversarial Attack
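As a rough illustration of treating attack generation as sampling rather than pure gradient descent, the sketch below runs a toy Metropolis sampler over bounded perturbations; it is a crude stand-in for the paper's Hamiltonian Monte Carlo attack, and `loss_fn` plus all names here are assumptions for the example.

```python
import numpy as np

def metropolis_attack(x, loss_fn, eps=0.3, step=0.05, n_steps=800, temp=0.01, seed=0):
    """Sample perturbations in the L-infinity ball of radius eps: proposals that
    raise the victim loss are always kept, others with Boltzmann probability.
    Returns the best adversarial input found."""
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(x)
    cur = loss_fn(x + delta)
    best_delta, best = delta.copy(), cur
    for _ in range(n_steps):
        # Gaussian proposal, projected back into the eps-ball
        prop = np.clip(delta + rng.normal(scale=step, size=x.shape), -eps, eps)
        new = loss_fn(x + prop)
        # Metropolis acceptance: uphill moves always, downhill with small probability
        if new > cur or rng.random() < np.exp((new - cur) / temp):
            delta, cur = prop, new
            if cur > best:
                best_delta, best = delta.copy(), cur
    return x + best_delta
```

A sampler like this explores a distribution over adversarial perturbations instead of converging to a single point, which is the probabilistic framing the title refers to; HMC replaces the random-walk proposal with gradient-informed dynamics.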

Distributional Discrepancy: A Metric for Unconditional Text Generation

1 code implementation • 4 May 2020 • Ping Cai, Xingyuan Chen, Peng Jin, Hongjun Wang, Tianrui Li

The purpose of unconditional text generation is to train a model on real sentences and then generate novel sentences of the same quality and diversity as the training data.

Language Modelling • Text Generation
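To make the idea of a distributional discrepancy concrete, here is a deliberately simple proxy: the total variation distance between the unigram distributions of the real and generated corpora. This is only an illustrative baseline, not the metric functions proposed in the paper.

```python
from collections import Counter

def unigram_discrepancy(real_sents, gen_sents):
    """Total variation distance between unigram (word-frequency) distributions
    of two corpora: 0.0 for identical distributions, 1.0 for disjoint vocabularies."""
    def dist(sents):
        counts = Counter(w for s in sents for w in s.split())
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}
    p, q = dist(real_sents), dist(gen_sents)
    vocab = set(p) | set(q)
    # TV distance: half the L1 distance between the two probability vectors
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in vocab)
```

Unigram statistics ignore word order, which is exactly why stronger discrepancy measures (e.g. classifier-based ones) are needed to evaluate generated text meaningfully.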

Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking

1 code implementation • CVPR 2020 • Hongjun Wang, Guangrun Wang, Ya Li, Dongyu Zhang, Liang Lin

Examining the robustness of ReID systems is important because their insecurity may cause severe losses; e.g., criminals may use adversarial perturbations to cheat CCTV systems.

Adversarial Attack • Person Re-Identification

Adding A Filter Based on The Discriminator to Improve Unconditional Text Generation

1 code implementation • 5 Apr 2020 • Xingyuan Chen, Ping Cai, Peng Jin, Hongjun Wang, Xin-yu Dai, Jia-Jun Chen

To alleviate exposure bias, generative adversarial networks (GANs) use the discriminator to update the generator's parameters directly, but this approach fails under precise evaluation.

Language Modelling • Text Generation
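The core mechanism in the title, using the discriminator as a post-hoc filter rather than as a training signal, can be sketched in a few lines; `score_fn` stands in for a trained discriminator returning the probability that a sample is real, and the threshold value is an assumption.

```python
def discriminator_filter(samples, score_fn, threshold=0.5):
    """Keep only generated samples that a trained discriminator scores as
    sufficiently realistic (score_fn returns an estimated P(real))."""
    return [s for s in samples if score_fn(s) >= threshold]
```

Filtering at inference time sidesteps the unstable discriminator-to-generator gradient path: the generator is trained normally (e.g. with maximum likelihood), and the discriminator only vetoes low-quality outputs.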

Micro-supervised Disturbance Learning: A Perspective of Representation Probability Distribution

no code implementations • 13 Mar 2020 • Jielei Chu, Jing Liu, Hongjun Wang, Meng Hua, Zhiguo Gong, Tianrui Li

To explore the representation learning capability under the continuous stimulation of the SPI, we present a deep Micro-supervised Disturbance Learning (Micro-DL) framework based on the Micro-DGRBM and Micro-DRBM models and compare it with a similar deep structure that has no external stimulation.

Representation Learning

The Detection of Distributional Discrepancy for Text Generation

no code implementations • 28 Sep 2019 • Xingyuan Chen, Ping Cai, Peng Jin, Haokun Du, Hongjun Wang, Xingyu Dai, Jia-Jun Chen

In this paper, we theoretically propose two metric functions to measure the distributional difference between real text and generated text.

Language Modelling • Text Generation

Multi-local Collaborative AutoEncoder

no code implementations • 12 Jun 2019 • Jielei Chu, Hongjun Wang, Jing Liu, Zhiguo Gong, Tianrui Li

In the mcrRBM and mcrGRBM models, the structure and multi-local collaborative relationships of the unlabeled data are integrated into the encoding procedure.

Representation Learning

Unsupervised Feature Learning Architecture with Multi-clustering Integration RBM

no code implementations • 5 Dec 2018 • Jielei Chu, Hongjun Wang, Jing Liu, Zhiguo Gong, Tianrui Li

In this paper, we present a novel unsupervised feature learning architecture, which consists of a multi-clustering integration module and a variant of the RBM termed the multi-clustering integration RBM (MIRBM).

Crowd Counting using Deep Recurrent Spatial-Aware Network

no code implementations • 2 Jul 2018 • Lingbo Liu, Hongjun Wang, Guanbin Li, Wanli Ouyang, Liang Lin

Crowd counting from unconstrained scene images is a crucial task in many real-world applications such as urban surveillance and management, but it is greatly challenged by camera perspective, which causes huge variations in people's apparent scales and rotations.

Crowd Counting

Restricted Boltzmann Machines with Gaussian Visible Units Guided by Pairwise Constraints

no code implementations • 13 Jan 2017 • Jielei Chu, Hongjun Wang, Hua Meng, Peng Jin, Tianrui Li

To enhance the expressive ability of traditional RBMs, in this paper we propose a pairwise-constrained restricted Boltzmann machine with Gaussian visible units (pcGRBM), in which the learning procedure is guided by pairwise constraints and the encoding process is conducted under this guidance.
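For readers unfamiliar with the base model being extended, here is a minimal contrastive-divergence (CD-1) step for an RBM with Gaussian visible units and binary hidden units, assuming unit visible variance. The pairwise-constraint term that distinguishes pcGRBM is omitted; this sketches only the unconstrained Gaussian-visible RBM.

```python
import numpy as np

def cd1_update(v0, W, b_vis, b_hid, lr=0.01, rng=None):
    """One CD-1 step for a Gaussian-visible, Bernoulli-hidden RBM.
    v0: data batch (n, d_v); W: weights (d_v, d_h)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    # Positive phase: hidden probabilities and samples given the data
    ph0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: Gaussian visibles reconstructed at their mean (unit variance)
    v1 = h0 @ W.T + b_vis
    ph1 = sigmoid(v1 @ W + b_hid)
    # Update from the difference of positive and negative statistics
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
    b_vis += lr * np.mean(v0 - v1, axis=0)
    b_hid += lr * np.mean(ph0 - ph1, axis=0)
    return W, b_vis, b_hid
```

In pcGRBM, must-link and cannot-link constraints additionally shape the hidden encodings during this update, which is what "guided by pairwise constraints" refers to.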
