1 code implementation • 29 Sep 2022 • Ruchi Guo, Shuhao Cao, Long Chen
A Transformer-based deep direct sampling method is proposed for electrical impedance tomography, a well-known, severely ill-posed, nonlinear boundary-value inverse problem.
no code implementations • 8 Feb 2022 • Shuhao Cao, Peng Xu, David A. Clifton
"Masked Autoencoders (MAE) Are Scalable Vision Learners" revolutionizes self-supervised learning: it not only achieves state-of-the-art results for image pre-training, but is also a milestone that bridges the gap between visual and linguistic (BERT-style) masked autoencoding pre-training.
1 code implementation • NeurIPS 2021 • Shuhao Cao
Without softmax, the approximation capacity of a linearized Transformer variant can be proved to be comparable, layer-wise, to a Petrov-Galerkin projection, and the estimate is independent of the sequence length.
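To make the softmax-free mechanism concrete, the following is a minimal NumPy sketch of Galerkin-type linear attention: instead of forming the n-by-n matrix softmax(QKᵀ)V, the keys and values are normalized and the product is reassociated as Q(KᵀV)/n, reducing the cost from O(n²d) to O(nd²). The function name and the feature-wise normalization standing in for layer normalization are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def galerkin_type_attention(Q, K, V):
    """Softmax-free attention sketch: normalize K and V feature-wise
    (a stand-in for layer normalization; an assumption here), then
    compute Q @ (K^T V) / n, which never materializes an n x n matrix."""
    n = Q.shape[0]
    K = (K - K.mean(axis=0)) / (K.std(axis=0) + 1e-6)
    V = (V - V.mean(axis=0)) / (V.std(axis=0) + 1e-6)
    return Q @ (K.T @ V) / n

rng = np.random.default_rng(0)
n, d = 128, 16  # sequence length and head dimension (illustrative sizes)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = galerkin_type_attention(Q, K, V)
print(out.shape)  # (128, 16)
```

The reassociation Q(KᵀV) is exact linear algebra once softmax is dropped, which is what makes the layer-wise Petrov-Galerkin reading possible: KᵀV plays the role of a learned d-by-d bilinear form applied to the queries.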