Search Results for author: Francis EH Tay

Found 4 papers, 2 papers with code

Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet

13 code implementations ICCV 2021 Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis EH Tay, Jiashi Feng, Shuicheng Yan

To overcome such limitations, we propose a new Tokens-To-Token Vision Transformer (T2T-ViT), which incorporates 1) a layer-wise Tokens-to-Token (T2T) transformation that progressively structurizes the image into tokens by recursively aggregating neighboring tokens into one token (Tokens-to-Token), so that local structure represented by surrounding tokens can be modeled and the token length can be reduced; 2) an efficient backbone with a deep-narrow structure for vision transformers, motivated by CNN architecture design after an empirical study.

Image Classification · Language Modelling
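
To make the token-aggregation idea concrete, here is a minimal PyTorch sketch of one T2T step. It is not the authors' implementation: the `T2TStep` module, the 3x3/stride-2 window, and all dimensions are illustrative assumptions. The sketch folds tokens back into a 2D grid, concatenates each neighborhood with `nn.Unfold`, and re-projects, so neighboring tokens are merged and the token sequence shrinks.

```python
# Minimal sketch of one Tokens-to-Token (T2T) step, assuming PyTorch.
# Not the authors' code; window size and dimensions are illustrative.
import torch
import torch.nn as nn

class T2TStep(nn.Module):
    """Aggregate each 3x3 neighborhood of tokens into one token (soft split),
    then re-embed, reducing the token sequence length by ~4x (stride 2)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        # Unfold concatenates each k x k patch of token features into one vector.
        self.unfold = nn.Unfold(kernel_size=3, stride=2, padding=1)
        self.proj = nn.Linear(in_dim * 3 * 3, out_dim)

    def forward(self, tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # tokens: (B, N, C) with N = h * w token grid
        b, n, c = tokens.shape
        x = tokens.transpose(1, 2).reshape(b, c, h, w)  # back to a 2D grid
        x = self.unfold(x)                              # (B, C*9, N_new)
        x = x.transpose(1, 2)                           # (B, N_new, C*9)
        return self.proj(x)                             # (B, N_new, out_dim)

# Usage: a 56x56 grid of 64-dim tokens -> a 28x28 grid of 128-dim tokens
tokens = torch.randn(2, 56 * 56, 64)
out = T2TStep(64, 128)(tokens, 56, 56)
print(out.shape)  # torch.Size([2, 784, 128])
```

The full T2T module also interleaves these soft splits with transformer layers; this sketch keeps only the structural aggregation step.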

Central Similarity Quantization for Efficient Image and Video Retrieval

1 code implementation CVPR 2020 Li Yuan, Tao Wang, Xiaopeng Zhang, Francis EH Tay, Zequn Jie, Wei Liu, Jiashi Feng

In this work, we propose a new global similarity metric, termed "central similarity", with which the hash codes of similar data pairs are encouraged to approach a common center and those of dissimilar pairs to converge to different centers, improving hash learning efficiency and retrieval accuracy.

Quantization · Retrieval · +1
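
The central-similarity objective can be sketched as follows. This is a minimal, assumed formulation in PyTorch: `central_similarity_loss`, the BCE form, and the quantization weight are illustrative, and the paper derives its hash centers from Hadamard matrices rather than sampling them at random.

```python
# Minimal sketch of a central-similarity objective, assuming PyTorch.
# The center construction and quantization term are illustrative.
import torch
import torch.nn.functional as F

def central_similarity_loss(hash_logits, center_idx, centers, lam=1e-4):
    """Pull each item's relaxed hash code toward its class's binary hash center.

    hash_logits: (B, K) raw network outputs for K-bit codes
    center_idx:  (B,) index of the hash center for each item's class
    centers:     (C, K) binary {0, 1} hash centers, one per class
    """
    codes = torch.sigmoid(hash_logits)        # relax codes into (0, 1)
    targets = centers[center_idx].float()     # (B, K) target centers
    central = F.binary_cross_entropy(codes, targets)
    # Quantization term: push relaxed codes toward binary 0/1 values.
    quant = torch.mean((codes - (codes > 0.5).float()) ** 2)
    return central + lam * quant

# Usage: 16-bit codes, 4 classes with random binary centers (illustrative only).
centers = torch.randint(0, 2, (4, 16))
logits = torch.randn(8, 16)
labels = torch.randint(0, 4, (8,))
loss = central_similarity_loss(logits, labels, centers)
```

Because every similar pair shares one target center, the supervision is global rather than pairwise, which is the efficiency argument the abstract makes.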

Cycle-SUM: Cycle-consistent Adversarial LSTM Networks for Unsupervised Video Summarization

no code implementations 17 Apr 2019 Li Yuan, Francis EH Tay, Ping Li, Li Zhou, Jiashi Feng

The evaluator defines a learnable information-preserving metric between the original video and the summary video and "supervises" the selector to identify the most informative frames to form the summary video.

Unsupervised Video Summarization
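
A rough PyTorch sketch of the selector half of this setup follows; the `FrameSelector` module, its sizes, and the commented cycle-consistency terms are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of a frame selector plus the cycle-consistency idea, assuming PyTorch.
# Module sizes and loss terms are illustrative, not the paper's exact model.
import torch
import torch.nn as nn

class FrameSelector(nn.Module):
    """Bi-LSTM that scores each frame's importance in [0, 1]."""
    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, frames):               # frames: (B, T, feat_dim)
        h, _ = self.lstm(frames)
        return torch.sigmoid(self.score(h))  # (B, T, 1) importance scores

selector = FrameSelector()
frames = torch.randn(1, 120, 1024)           # e.g. 120 frame features
scores = selector(frames)
summary = scores * frames                    # score-weighted summary frames

# Cycle consistency (sketch): forward and backward generators G_f (summary -> video)
# and G_b (video -> summary) should reconstruct their inputs, e.g.
#   L_cycle = ||G_b(G_f(summary)) - summary|| + ||G_f(G_b(frames)) - frames||
# with adversarial evaluators judging each direction, so the summary must
# preserve enough information to recover the original video.
```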
