1 code implementation • CVPR 2020 • Liheng Zhang, Guo-Jun Qi
The WCP (worst-case perturbation) regularization can be minimized on both labeled and unlabeled data, so that networks can be trained in a semi-supervised fashion.
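The abstract does not spell out the objective, but a consistency-style semi-supervised loss of this general shape can be sketched in NumPy: a cross-entropy term on labeled data plus a divergence term, evaluable without labels, between predictions on clean and perturbed inputs. This is an illustrative sketch, not the paper's actual WCP construction; the perturbation and the weight `lam` are assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true class (labeled data only)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def consistency(p, p_perturbed):
    # KL(p || p_perturbed) averaged over the batch -- needs no labels,
    # so it can be evaluated on labeled AND unlabeled examples
    return np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(p_perturbed + 1e-12)), axis=1))

def semi_supervised_loss(logits_l, labels, logits_all, logits_all_perturbed, lam=1.0):
    sup = cross_entropy(softmax(logits_l), labels)
    cons = consistency(softmax(logits_all), softmax(logits_all_perturbed))
    return sup + lam * cons  # lam trades off the two terms (illustrative)
```

In practice the unlabeled batch would be much larger than the labeled one; here both arguments happen to be the same batch only to keep the sketch short.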
no code implementations • 19 Jun 2019 • Guo-Jun Qi, Liheng Zhang, Xiao Wang
Transformation Equivariant Representations (TERs) aim to capture the intrinsic visual structures that equivary to various transformations by expanding the notion of translation equivariance underlying the success of Convolutional Neural Networks (CNNs).
1 code implementation • ICCV 2019 • Guo-Jun Qi, Liheng Zhang, Chang Wen Chen, Qi Tian
This ensures the resultant TERs of individual images contain the intrinsic information about their visual structures that would equivary extricably under various transformations in a generalized nonlinear case.
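One common way to train for transformation equivariance in this line of work is self-supervision: apply a random transformation to an image and ask the network to recover the transformation from the (original, transformed) pair. A toy sketch of generating such training pairs, using multiple-of-90° rotations as a stand-in for the richer transformation families these papers consider:

```python
import numpy as np

def make_equivariance_pair(img, rng):
    # sample a transformation index k in {0, 1, 2, 3} (rotation by k * 90 degrees);
    # k serves as the self-supervised target the network must predict
    k = int(rng.integers(0, 4))
    return img, np.rot90(img, k), k  # (original, transformed, target)
```

A representation from which `k` can be decoded given both views must retain the transformation-sensitive structure of the image, which is the intuition behind learning TERs this way.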
1 code implementation • CVPR 2019 • Liheng Zhang, Guo-Jun Qi, Liqiang Wang, Jiebo Luo
The success of deep neural networks often relies on a large number of labeled examples, which can be difficult to obtain in many real scenarios.
1 code implementation • 21 Nov 2018 • Dongdong Chen, Mingming He, Qingnan Fan, Jing Liao, Liheng Zhang, Dongdong Hou, Lu Yuan, Gang Hua
Image dehazing aims to recover the uncorrupted content from a hazy image.
Ranked #1 on Rain Removal on DID-MDN
no code implementations • NeurIPS 2018 • Liheng Zhang, Marzieh Edraki, Guo-Jun Qi
In this paper, we formalize the idea behind capsule nets of using a capsule vector rather than a neuron activation to predict the label of samples.
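In capsule networks, the length of a capsule vector is conventionally read as the evidence for the presence of its class. A minimal sketch of that prediction rule, assuming one output capsule per class (illustrative of the general idea, not this paper's exact classifier):

```python
import numpy as np

def capsule_predict(capsules):
    # capsules: array of shape (num_classes, dim), one capsule vector per class;
    # the vector's length (L2 norm) scores the class, its orientation encodes pose
    lengths = np.linalg.norm(capsules, axis=1)
    return int(np.argmax(lengths)), lengths
```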
2 code implementations • CVPR 2018 • Guo-Jun Qi, Liheng Zhang, Hao Hu, Marzieh Edraki, Jingdong Wang, Xian-Sheng Hua
In this paper, we present a novel localized Generative Adversarial Net (GAN) to learn on the manifold of real data.
1 code implementation • 13 Aug 2017 • Liheng Zhang, Charu Aggarwal, Guo-Jun Qi
Future stock prices are then predicted as a nonlinear mapping of the combination of these components, in the manner of an Inverse Fourier Transform (IFT).
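A toy sketch of this decompose-then-recombine idea, assuming a plain DFT split into per-frequency components and a tanh of a weighted recombination standing in for the "nonlinear mapping" (the paper's actual model learns frequency states recurrently; every name and choice here is illustrative):

```python
import numpy as np

def frequency_components(prices, k):
    # DFT of the series, then invert each of the first k coefficients in
    # isolation, yielding k per-frequency time-domain components
    coeffs = np.fft.rfft(prices)
    n = len(prices)
    comps = []
    for j in range(k):
        c = np.zeros_like(coeffs)
        c[j] = coeffs[j]
        comps.append(np.fft.irfft(c, n))
    return np.array(comps)  # shape (k, n)

def predict_next(prices, weights, k):
    # nonlinear mapping of a weighted, IFT-style recombination of the
    # latest value of each frequency component (weights: illustrative)
    latest = frequency_components(prices, k)[:, -1]
    return np.tanh(weights @ latest)
```

Because the DFT is linear, summing all the components reconstructs the original series exactly; keeping only a few low frequencies acts as a smoother.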