Search Results for author: Shaheer Mohamed

Found 4 papers, 3 papers with code

Spectral-Enhanced Transformers: Leveraging Large-Scale Pretrained Models for Hyperspectral Object Tracking

no code implementations • 26 Feb 2025 • Shaheer Mohamed, Tharindu Fernando, Sridha Sridharan, Peyman Moghadam, Clinton Fookes

This is particularly critical for complex tasks like object tracking, and the scarcity of large datasets in the hyperspectral domain acts as a bottleneck in achieving the full potential of powerful transformer models.

Object Tracking
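The snippet above points at adapting large-scale pretrained (RGB) backbones to hyperspectral tracking despite the small hyperspectral datasets. As a minimal illustrative sketch only, not the architecture proposed in the paper, one common way to reuse such a backbone is a small learned adapter that mixes the many spectral bands down to the 3-channel input the pretrained model expects; the module name and band count below are hypothetical.

```python
import torch
import torch.nn as nn

class HyperspectralAdapter(nn.Module):
    """Hypothetical adapter: map a B-band hyperspectral frame to the
    3-channel input expected by a pretrained RGB tracking backbone."""

    def __init__(self, num_bands: int = 16):
        super().__init__()
        # A 1x1 convolution learns a spectral mixing into 3 pseudo-RGB channels.
        self.band_reduce = nn.Conv2d(num_bands, 3, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, bands, height, width) hyperspectral frame
        return self.band_reduce(x)

# Usage: feed the adapted frames to any pretrained tracker backbone.
frames = torch.randn(2, 16, 224, 224)           # toy 16-band input
pseudo_rgb = HyperspectralAdapter(16)(frames)   # (2, 3, 224, 224)
```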

Pre-training with Random Orthogonal Projection Image Modeling

1 code implementation • 28 Oct 2023 • Maryam Haghighat, Peyman Moghadam, Shaheer Mohamed, Piotr Koniusz

In this paper, we propose an Image Modeling framework based on random orthogonal projection instead of binary masking as in MIM.

Decoder
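The abstract contrasts random orthogonal projection with the binary masking used in masked image modeling (MIM). The sketch below only illustrates that contrast on toy ViT patch tokens; the exact projection, where it is applied, and the reconstruction objective are defined in the paper's released code, so treat the function below as an assumption-laden stand-in.

```python
import torch

def binary_mask(patches: torch.Tensor, ratio: float = 0.75) -> torch.Tensor:
    """MIM-style corruption: zero out a random subset of patch tokens.
    patches: (num_patches, dim)."""
    keep = torch.rand(patches.shape[0]) > ratio
    return patches * keep.unsqueeze(-1)

def random_orthogonal_projection(patches: torch.Tensor, rank: int) -> torch.Tensor:
    """Illustrative ROP-style corruption: project patch features onto a
    random low-rank orthonormal basis, removing information smoothly
    instead of discarding whole patches."""
    dim = patches.shape[-1]
    # QR of a random Gaussian matrix yields an orthonormal basis Q.
    q, _ = torch.linalg.qr(torch.randn(dim, rank))
    return patches @ q @ q.T  # projection onto the rank-dimensional subspace

tokens = torch.randn(196, 768)                  # toy ViT patch tokens
masked = binary_mask(tokens)                    # MIM baseline corruption
projected = random_orthogonal_projection(tokens, rank=192)
```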

FactoFormer: Factorized Hyperspectral Transformers with Self-Supervised Pretraining

1 code implementation • 18 Sep 2023 • Shaheer Mohamed, Maryam Haghighat, Tharindu Fernando, Sridha Sridharan, Clinton Fookes, Peyman Moghadam

However, current state-of-the-art hyperspectral transformers only tokenize the input HSI sample along the spectral dimension, resulting in the under-utilization of spatial information.
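The limitation described above is spectral-only tokenization of the hyperspectral image (HSI) cube. As a minimal sketch, assuming a small HSI patch and a toy 4x4 spatial patch size, the two tokenizations can be contrasted as below; the actual FactoFormer tokenizers, patch sizes, and how the factorized transformers are fused are specified in the paper's code.

```python
import torch

def spectral_tokens(hsi: torch.Tensor) -> torch.Tensor:
    """Spectral-only tokenization (the baseline criticized above):
    one token per band, flattening the spatial extent.
    hsi: (bands, height, width)."""
    b, h, w = hsi.shape
    return hsi.reshape(b, h * w)                # (bands, h*w)

def spatial_tokens(hsi: torch.Tensor, patch: int = 4) -> torch.Tensor:
    """Spatial tokenization (illustrative): one token per spatial patch,
    keeping the full spectrum of each patch."""
    b, h, w = hsi.shape
    x = hsi.unfold(1, patch, patch).unfold(2, patch, patch)
    x = x.permute(1, 2, 0, 3, 4).reshape(-1, b * patch * patch)
    return x                                    # (num_patches, bands*p*p)

cube = torch.randn(30, 16, 16)                  # toy 30-band HSI patch
print(spectral_tokens(cube).shape)              # torch.Size([30, 256])
print(spatial_tokens(cube).shape)               # torch.Size([16, 480])
```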
