no code implementations • 18 Oct 2023 • Shiye Wang, Kaituo Feng, Changsheng Li, Ye Yuan, Guoren Wang
Typical Convolutional Neural Networks (ConvNets) depend heavily on large amounts of image data and resort to an iterative optimization algorithm (e.g., SGD or Adam) to learn network parameters, which makes training very time- and resource-intensive.
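The iterative optimization the abstract refers to can be sketched as follows. This is a minimal, self-contained toy example using full-batch gradient descent on a linear model (a stand-in for the SGD/Adam optimizers named above, not the authors' method); the data, model, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

# Toy setup: recover the weights of a linear model y = X @ w.
# (Illustrative stand-in for training a ConvNet's parameters.)
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w

w = np.zeros(4)           # parameters, initialized at zero
lr = 0.1                  # learning rate (assumed value)
for step in range(200):   # many update steps: this repetition is
                          # what makes training time-intensive
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss
    w -= lr * grad        # one gradient-descent update

print(np.round(w, 2))     # should approach true_w
```

Each pass computes a loss gradient and nudges the parameters; real ConvNet training repeats this over millions of images and parameters, which is the cost the paper sets out to reduce.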
no code implementations • 26 Apr 2022 • Shiye Wang, Changsheng Li, Yanming Li, Ye Yuan, Guoren Wang
Inheriting the advantages of the information bottleneck, SIB-MSC can learn a latent space for each view that captures the common information shared among the latent representations of different views: it removes superfluous information from the view itself while retaining sufficient information for the latent representations of the other views.
no code implementations • 28 Oct 2021 • Yanming Li, Changsheng Li, Shiye Wang, Ye Yuan, Guoren Wang
In this paper, we propose a new deep subspace clustering framework motivated by energy-based models.