no code implementations • 13 Feb 2023 • Jiajun Huang, Xinqi Zhu, Chengbin Du, Siqi Ma, Surya Nepal, Chang Xu
To enhance the performance of such models, we treat weakly compressed and strongly compressed data as two views of the original data, which should have similar representations and similar relationships with other samples.
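A minimal numpy sketch of the two-view idea described above, assuming a simple cosine-alignment term for the two compression views plus a relational term comparing each view's similarity profile against a bank of other samples (the function and variable names are hypothetical, not the paper's implementation):

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def two_view_consistency_loss(z_weak, z_strong, z_bank):
    """Toy consistency objective for two compression views (illustrative).

    z_weak, z_strong: (B, D) representations of the same samples under weak
    and strong compression; z_bank: (N, D) representations of other samples
    used to compare relational structure.
    """
    zw, zs, zb = map(l2_normalize, (z_weak, z_strong, z_bank))
    # 1) the two views of each sample should align (cosine similarity -> 1)
    direct = 1.0 - np.sum(zw * zs, axis=1)
    # 2) each view should relate to other samples in the same way:
    # compare similarity profiles against the sample bank
    rel_w = zw @ zb.T          # (B, N)
    rel_s = zs @ zb.T
    relational = np.mean((rel_w - rel_s) ** 2, axis=1)
    return float(np.mean(direct + relational))
```

When the two views are identical the loss is zero, and it grows as the views' representations or their relationships to other samples diverge.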
no code implementations • 14 Dec 2022 • Xinqi Zhu, Chang Xu, DaCheng Tao
In this paper, we propose a model that automates this process and achieves state-of-the-art semantic discovery performance.
1 code implementation • 7 Jun 2021 • Xinqi Zhu, Chang Xu, DaCheng Tao
Instead, we propose to encode data variations with groups, a structure that can not only represent variations equivariantly but also be adaptively optimized to preserve the properties of the data variations.
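As a toy illustration of a group structure representing variations equivariantly, one can let a variation act on a 2-D slot of the latent code as an element of SO(2); composing variations then corresponds to multiplying group elements (a sketch of the general idea, not the paper's construction):

```python
import numpy as np

def rotation(theta):
    """An element of the group SO(2), standing in for one data variation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# A variation acts on a 2-D slot of the latent code by group action;
# applying two variations in sequence equals applying their composition,
# so the representation is equivariant by construction.
z = np.array([1.0, 0.0])                # latent slot
g1, g2 = rotation(0.3), rotation(0.5)
acted_sequentially = g2 @ (g1 @ z)
acted_composed = (g2 @ g1) @ z          # same result
```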
1 code implementation • CVPR 2021 • Xinqi Zhu, Chang Xu, DaCheng Tao
We thus perturb a single dimension of the latent code and expect the perturbation along that dimension to be identifiable from the generated images, so that the encoding of simple variations is enforced.
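The self-supervised task can be sketched as follows, with a random linear map standing in for the generator and a least-squares solve standing in for the learned recognizer network (all names here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
D, IMG = 6, 32
G = rng.normal(size=(IMG, D))        # stand-in "generator": z -> image

def make_pair(z, dim, eps=0.5):
    """Perturb one latent dimension and generate the image pair."""
    z2 = z.copy()
    z2[dim] += eps
    return G @ z, G @ z2

def identify_dim(x, x2):
    """Recover which dimension was perturbed from the image pair.

    Here via least squares against G; in the paper this role is played
    by a learned recognizer operating on the images alone.
    """
    delta, _, _, _ = np.linalg.lstsq(G, x2 - x, rcond=None)
    return int(np.argmax(np.abs(delta)))

z = rng.normal(size=D)
x, x2 = make_pair(z, dim=3)
pred = identify_dim(x, x2)           # recovers dimension 3
```

If the generator encodes each dimension as a simple, separable variation, the perturbed dimension is easy to identify; the training signal pushes the generator toward such encodings.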
1 code implementation • ICCV 2019 • Xinqi Zhu, Chang Xu, Langwen Hui, Cewu Lu, DaCheng Tao
Specifically, we show how two-layer subnets in CNNs can be converted to temporal bilinear modules by adding an auxiliary branch.
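A minimal sketch of the conversion, assuming the auxiliary branch multiplicatively modulates the hidden activation with the next frame (a simplified dense version; the paper's modules operate on CNN feature maps):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def two_layer(x, W1, W2):
    """Original per-frame two-layer subnet."""
    return W2 @ relu(W1 @ x)

def temporal_bilinear(x_t, x_t1, W1, W2, A):
    """Two-layer subnet with an auxiliary branch A: the hidden activation
    is modulated multiplicatively by the next frame, yielding a bilinear
    interaction across time (illustrative)."""
    h = relu(W1 @ x_t) * (A @ x_t1)   # bilinear in (x_t, x_t1)
    return W2 @ h
```

Because the output is linear in each frame given the other, scaling one frame scales the output accordingly, which is the defining bilinear property.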
1 code implementation • ECCV 2020 • Xinqi Zhu, Chang Xu, DaCheng Tao
Given image pairs generated by latent codes that vary in a single dimension, that varied dimension should correlate closely with the image pairs if the representation is well disentangled.
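A toy sketch of why the varied dimension is recoverable under disentanglement, using a linear factors-to-image map and its pseudo-inverse as an idealized disentangled encoder (both are stand-in assumptions, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(1)
K, IMG = 5, 20
M = rng.normal(size=(IMG, K))        # toy map: factors -> image
E = np.linalg.pinv(M)                # an idealized disentangled encoder

def varied_dimension(x, x2):
    """With a disentangled representation, the dimension varied within an
    image pair can be read off the code difference."""
    return int(np.argmax(np.abs(E @ (x2 - x))))

f = rng.normal(size=K)
f2 = f.copy()
f2[2] += 1.0                          # vary only factor 2
pred = varied_dimension(M @ f, M @ f2)
```

An entangled encoder would smear the change across many code dimensions, making the varied dimension hard to identify; this gap is what the learning signal exploits.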
4 code implementations • 28 Sep 2017 • Xinqi Zhu, Michael Bain
In this way we show that CNN-based models can be forced to learn successively coarse-to-fine concepts from the internal layers to the output stage, and that hierarchical prior knowledge can be adopted to boost CNN models' classification performance.
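The coarse-to-fine supervision can be sketched as attaching a coarse-label head to an internal layer and a fine-label head at the output, then summing the losses (a minimal dense-layer sketch with hypothetical names, assuming a simple two-level label hierarchy):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, label):
    return -np.log(softmax(logits)[label] + 1e-12)

rng = np.random.default_rng(0)
D, H, COARSE, FINE = 16, 8, 3, 9
W1 = rng.normal(size=(H, D))
Wc = rng.normal(size=(COARSE, H))   # coarse head on the internal layer
W2 = rng.normal(size=(FINE, H))     # fine head at the output stage

def hierarchical_loss(x, coarse_label, fine_label):
    """Internal layers are supervised with coarse labels and the output
    with fine labels, encouraging a coarse-to-fine concept hierarchy."""
    h = np.maximum(W1 @ x, 0.0)
    return cross_entropy(Wc @ h, coarse_label) + cross_entropy(W2 @ h, fine_label)
```

Training against both terms forces the internal representation to separate coarse categories before the output stage refines them into fine ones.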