1 code implementation • 29 Jan 2024 • Zhemin Zhang, Xun Gong
Specifically, we create a conditional Gaussian distribution for each class and then sample multiple sub-centers from that distribution to extend the linear classifier.
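The sub-center idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the class count, feature dimension, number of sub-centers, and the max-over-sub-centers scoring rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

num_classes, feat_dim, num_sub = 10, 64, 4  # illustrative sizes (assumed)
sigma = 0.1                                 # assumed spread of each class's Gaussian

# One conditional Gaussian per class, centered on a (here random) class center.
class_centers = rng.normal(size=(num_classes, feat_dim))

# Sample multiple sub-centers per class from that class's Gaussian.
sub_centers = class_centers[:, None, :] + sigma * rng.normal(
    size=(num_classes, num_sub, feat_dim)
)

def classify(x):
    """Extended linear classifier: score x against every sub-center,
    take the best sub-center per class as that class's logit."""
    logits = np.einsum('d,ksd->ks', x, sub_centers)  # (classes, sub-centers)
    return int(logits.max(axis=1).argmax())

x = rng.normal(size=feat_dim)
pred = classify(x)
```

Each class thus contributes several weight vectors instead of one, which lets the classifier cover multi-modal intra-class variation.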
no code implementations • 10 Nov 2023 • Zhemin Zhang, Xun Gong
Inspired by one of the most successful Transformer-based models in NLP, Big Bird, we propose a novel sparse attention mechanism for Vision Transformers (ViTs).

no code implementations • 13 Apr 2023 • Zhemin Zhang, Xun Gong
Recently, Transformers have shown promising performance in various vision tasks.
no code implementations • 19 Sep 2022 • Zhemin Zhang, Xun Gong
Recently, Transformers have shown promising performance in various vision tasks.
no code implementations • 15 Jun 2022 • Jinyi Wu, Xun Gong, Zhemin Zhang
To verify the effectiveness of SSIA, we performed a particular implementation (called an SSIA block) in convolutional neural network models and validated it on several image classification datasets.
no code implementations • 10 Jun 2022 • Zhemin Zhang, Xun Gong
Positional encoding is important for the vision transformer (ViT) to capture the spatial structure of the input image.
no code implementations • 30 Mar 2022 • Zhemin Zhang, Xun Gong, Jinyi Wu
In this way, ReplaceBlock can effectively simulate the feature map of the occluded image.
no code implementations • 24 Mar 2022 • Zhemin Zhang, Xun Gong
Specifically, F-SC first samples a class center Ui for each class from a uniform distribution, and then generates a normal distribution for each class whose mean is equal to Ui.
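The two sampling steps can be sketched as follows. This is a hedged illustration: the uniform range, the standard deviation `sigma`, and the `sample_virtual_features` helper are assumptions for the sketch, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
num_classes, feat_dim = 5, 16   # illustrative sizes (assumed)
sigma = 0.05                    # assumed per-class standard deviation

# Step 1: sample a class center U_i for each class from a uniform distribution.
U = rng.uniform(-1.0, 1.0, size=(num_classes, feat_dim))

# Step 2: each class gets a normal distribution N(U_i, sigma^2 I);
# drawing from it yields features attributed to that class.
def sample_virtual_features(class_idx, n):
    return U[class_idx] + sigma * rng.normal(size=(n, feat_dim))

feats = sample_virtual_features(2, 8)
```

Because each class's normal distribution is anchored at its own center, samples from different classes stay separated while samples within a class cluster around Ui.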
no code implementations • 27 Sep 2018 • Peize Zhao, Danfeng Cai, Shaokun Zhang, Feng Chen, Zhemin Zhang, Cheng Wang, Jonathan Li
To forecast traffic flow across a wide area and overcome the aforementioned challenges, we design and propose a promising forecasting model called the Layerwise Recurrent Autoencoder (LRA), in which a three-layer stacked autoencoder (SAE) architecture captures temporal traffic correlations and a recurrent neural network (RNN) performs the prediction.
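The SAE-plus-RNN pipeline can be sketched roughly as below. All layer sizes, the tanh activations, and the single-cell RNN are illustrative assumptions; the sketch only shows the data flow (encode each time step with a three-layer encoder, then run an RNN over the encoded sequence), not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes (assumed; the paper's actual dimensions are not given here).
n_sensors, h1, h2, h3, rnn_hidden, seq_len = 32, 24, 16, 8, 12, 6

# Encoder half of a three-layer stacked autoencoder: compresses each
# time step's sensor readings into a low-dimensional feature.
shapes = [(n_sensors, h1), (h1, h2), (h2, h3)]
W = [rng.normal(scale=0.1, size=s) for s in shapes]
b = [np.zeros(s[1]) for s in shapes]

def encode(x):
    for w_i, b_i in zip(W, b):
        x = np.tanh(x @ w_i + b_i)
    return x

# Simple RNN over the encoded sequence, predicting the next step's flow.
Wx = rng.normal(scale=0.1, size=(h3, rnn_hidden))
Wh = rng.normal(scale=0.1, size=(rnn_hidden, rnn_hidden))
Wo = rng.normal(scale=0.1, size=(rnn_hidden, n_sensors))

def forecast(seq):                 # seq: (seq_len, n_sensors)
    h = np.zeros(rnn_hidden)
    for x_t in seq:
        h = np.tanh(encode(x_t) @ Wx + h @ Wh)
    return h @ Wo                  # predicted flow at the next step

seq = rng.normal(size=(seq_len, n_sensors))
pred = forecast(seq)
```

In practice the autoencoder would be pretrained layerwise on historical traffic data before the RNN is fitted, which is where the "stacked" training scheme earns its name.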