no code implementations • 23 Oct 2024 • Jiantao Wu, Shentong Mo, ZhenHua Feng, Sara Atito, Josef Kittler, Muhammad Awais
We challenge this assumption by proposing to learn from arbitrary pairs, allowing any pair of samples to be positive within our framework. The primary challenge of the proposed approach lies in applying contrastive learning to disparate pairs which are semantically distant.
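The entry builds on standard pairwise contrastive learning. As background, a minimal sketch of the conventional InfoNCE objective (the generic formulation, not the paper's arbitrary-pair variant), where row i of one view is the positive for row i of the other:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Generic InfoNCE contrastive loss: embeddings z1[i] and z2[i]
    form a positive pair; all other rows act as negatives."""
    # L2-normalise so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of picking the matched pair on the diagonal.
    return -np.mean(np.diag(log_prob))
```

The paper's contribution is to relax the assumption baked into the diagonal here, allowing any pair of samples to serve as a positive.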
1 code implementation • 10 Oct 2024 • Muhammad Awais, Ali Husain Salem Abdulla Alharthi, Amandeep Kumar, Hisham Cholakkal, Rao Muhammad Anwer
In this work, we propose an approach to construct instruction-tuning data that harnesses vision-only data for the agriculture domain.
1 code implementation • 2 Oct 2024 • Umair Nawaz, Muhammad Awais, Hanan Gani, Muzammal Naseer, Fahad Khan, Salman Khan, Rao Muhammad Anwer
Further, this domain requires fine-grained feature learning due to the subtle nature of the downstream tasks (e.g., nutrient deficiency detection, livestock breed classification).
no code implementations • 23 Sep 2024 • Wenhua Dong, Xiao-Jun Wu, ZhenHua Feng, Sara Atito, Muhammad Awais, Josef Kittler
In most existing multi-view modeling scenarios, cross-view correspondence (CVC) between instances of the same target from different views, like paired image-text data, is a crucial prerequisite for effortlessly deriving a consistent representation.
1 code implementation • 14 Aug 2024 • Asif Hanif, Fahad Shamshad, Muhammad Awais, Muzammal Naseer, Fahad Shahbaz Khan, Karthik Nandakumar, Salman Khan, Rao Muhammad Anwer
Inspired by the latest developments in learnable prompts, this work introduces a method to embed a backdoor into the medical foundation model during the prompt learning phase.
1 code implementation • 8 Jul 2024 • Rongchang Li, ZhenHua Feng, Tianyang Xu, Linze Li, Xiao-Jun Wu, Muhammad Awais, Sara Atito, Josef Kittler
For evaluating the task, we construct a new benchmark, Something-composition (Sth-com), based on the widely used Something-Something V2 dataset.
1 code implementation • 27 Jun 2024 • Muhammad Awais, Mehaboobathunnisa Sahul Hameed, Bidisha Bhattacharya, Orly Reiner, Rao Muhammad Anwer
Quantifying cellular processes like mitosis in these organoids offers insights into neurodevelopmental disorders, but the manual analysis is time-consuming, and existing datasets lack specific details for brain organoid studies.
no code implementations • 25 Jun 2024 • Srinivasa Rao Nandam, Sara Atito, ZhenHua Feng, Josef Kittler, Muhammad Awais
The targets for pseudo-labelling and reconstruction need to be generated by a teacher network.
no code implementations • 25 Jun 2024 • Srinivasa Rao Nandam, Sara Atito, ZhenHua Feng, Josef Kittler, Muhammad Awais
Vision transformers combined with self-supervised learning have enabled the development of models which scale across large datasets for several downstream tasks like classification, segmentation and detection.
1 code implementation • 6 Jun 2024 • Amandeep Kumar, Muhammad Awais, Sanath Narayan, Hisham Cholakkal, Salman Khan, Rao Muhammad Anwer
The LAE harnesses a pre-trained vision-language model to find text-guided attribute-specific editing direction in the latent space of any pre-trained 3D-aware GAN.
1 code implementation • 30 Apr 2024 • Zhangyong Tang, Tianyang Xu, ZhenHua Feng, XueFeng Zhu, He Wang, Pengcheng Shao, Chunyang Cheng, Xiao-Jun Wu, Muhammad Awais, Sara Atito, Josef Kittler
We propose a new method based on a mixture of experts, namely MoETrack, as a baseline fusion strategy.
Ranked #3 on Rgb-T Tracking on GTOT
1 code implementation • ICASSP 2024 • Tony Alex, Sara Ahmed, Armin Mustafa, Muhammad Awais, Philip JB Jackson
In the domain of audio transformer architectures, prior research has extensively investigated isotropic architectures that capture the global context through full self-attention and hierarchical architectures that progressively transition from local to global context utilising hierarchical structures with convolutions or window-based attention.
Ranked #23 on Audio Classification on AudioSet
1 code implementation • 31 Mar 2024 • Jiantao Wu, Shentong Mo, Sara Atito, ZhenHua Feng, Josef Kittler, Muhammad Awais
Recently, masked image modeling (MIM), an important self-supervised learning (SSL) method, has drawn attention for its effectiveness in learning data representation from unlabeled data.
1 code implementation • AAAI 2024 • Tony Alex, Sara Ahmed, Armin Mustafa, Muhammad Awais, Philip JB Jackson
Convolutional neural networks (CNNs) and Transformer-based networks have recently enjoyed significant attention for various audio classification and tagging tasks following their wide adoption in the computer vision domain.
Ranked #15 on Audio Classification on AudioSet
no code implementations • 22 Feb 2024 • Abhijeet Parida, Daniel Capellan-Martin, Sara Atito, Muhammad Awais, Maria J. Ledesma-Carbayo, Marius G. Linguraru, Syed Muhammad Anwar
In this context, we introduce Diverse Concept Modeling (DiCoM), a novel self-supervised training paradigm that leverages a student-teacher framework for learning diverse concepts and hence an effective representation of the CXR data.
no code implementations • 2 Dec 2023 • Jiantao Wu, Shentong Mo, Sara Atito, Josef Kittler, ZhenHua Feng, Muhammad Awais
Recently, self-supervised metric learning has attracted attention for its potential to learn a generic distance function.
no code implementations • 13 Nov 2023 • Umar Marikkar, Sara Atito, Muhammad Awais, Adam Mahdi
Vision Transformers (ViTs) are widely adopted in medical imaging tasks, and some existing efforts have been directed towards vision-language training for Chest X-rays (CXRs).
no code implementations • 11 Sep 2023 • Cong Wu, Xiao-Jun Wu, Josef Kittler, Tianyang Xu, Sara Atito, Muhammad Awais, ZhenHua Feng
Contrastive learning has achieved great success in skeleton-based action recognition.
no code implementations • 22 Aug 2023 • Jiantao Wu, Shentong Mo, Muhammad Awais, Sara Atito, ZhenHua Feng, Josef Kittler
Self-supervised pretraining (SSP) has emerged as a popular technique in machine learning, enabling the extraction of meaningful feature representations without labelled data.
1 code implementation • 25 Jul 2023 • Muhammad Awais, Muzammal Naseer, Salman Khan, Rao Muhammad Anwer, Hisham Cholakkal, Mubarak Shah, Ming-Hsuan Yang, Fahad Shahbaz Khan
Vision systems to see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
1 code implementation • 22 Mar 2023 • Jiantao Wu, Shentong Mo, Muhammad Awais, Sara Atito, Xingshen Zhang, Lin Wang, Xiang Yang
One major challenge of disentanglement learning with variational autoencoders is the trade-off between disentanglement and reconstruction fidelity.
no code implementations • 23 Nov 2022 • Syed Muhammad Anwar, Abhijeet Parida, Sara Atito, Muhammad Awais, Gustavo Nino, Josef Kittler, Marius George Linguraru
However, the traditional diagnostic tool design methods based on supervised learning are burdened by the need to provide training data annotation, which should be of good quality for better clinical outcomes.
Ranked #1 on Semantic Segmentation on Montgomery County X-ray Set
1 code implementation • 23 Nov 2022 • Sara Atito, Muhammad Awais, Wenwu Wang, Mark D Plumbley, Josef Kittler
Transformers, which were originally developed for natural language processing, have recently generated significant interest in the computer vision and audio communities due to their flexibility in learning long-range relationships.
no code implementations • 29 Aug 2022 • Sara Atito, Syed Muhammad Anwar, Muhammad Awais, Josef Kittler
The availability of large-scale data with high-quality ground-truth labels is a challenge when developing supervised machine learning solutions for the healthcare domain.
1 code implementation • 30 May 2022 • Sara Atito, Muhammad Awais, Josef Kittler
This has motivated research in self-supervised transformer pretraining, which does not need to decode the semantic information conveyed by labels to link it to image properties, but instead focuses directly on extracting a concise representation of the image data that reflects the notion of similarity and is invariant to nuisance factors.
no code implementations • 30 Nov 2021 • Sara Atito, Muhammad Awais, Ammarah Farooq, ZhenHua Feng, Josef Kittler
In this respect, the proposed SSL framework MC-SSL0.0 is a step towards Multi-Concept Self-Supervised Learning (MC-SSL), which goes beyond modelling a single dominant label in an image to effectively utilise the information from all the concepts present in it.
no code implementations • 25 Nov 2021 • Ammarah Farooq, Muhammad Awais, Sara Ahmed, Josef Kittler
Hence, most of the learning is independent of the image patches $(N)$ in the higher layers, and the class embedding is learned solely based on the Super tokens $(N/M^2)$ where $M^2$ is the window size.
no code implementations • NeurIPS 2021 • Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li
First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation.
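The mixup augmentation referenced above has a simple standard form: training on convex combinations of random sample pairs. A minimal sketch of generic mixup (not the paper's exact teacher-student training recipe):

```python
import numpy as np

def mixup(x, y, alpha=1.0, rng=None):
    """Mixup: blend random pairs of inputs and their (one-hot) labels.

    lam ~ Beta(alpha, alpha) controls how strongly the pair is mixed.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)               # mixing coefficient
    perm = rng.permutation(len(x))             # random pairing of samples
    x_mixed = lam * x + (1.0 - lam) * x[perm]  # blended inputs
    y_mixed = lam * y + (1.0 - lam) * y[perm]  # blended soft labels
    return x_mixed, y_mixed
```

Because the outputs are convex combinations, mixed one-hot labels remain valid probability distributions, which is what makes them usable as soft distillation targets.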
no code implementations • ICCV 2021 • Muhammad Awais, Fengwei Zhou, Hang Xu, Lanqing Hong, Ping Luo, Sung-Ho Bae, Zhenguo Li
Extensive Unsupervised Domain Adaptation (UDA) studies have shown great success in practice by learning transferable representations across a labeled source domain and an unlabeled target domain with deep models.
2 code implementations • 8 Apr 2021 • Sara Atito, Muhammad Awais, Josef Kittler
We also observed that SiT performs well in few-shot learning, and showed that it learns useful representations by simply training a linear classifier on top of the features learned by SiT.
no code implementations • 5 Mar 2021 • Syed Safwan Khalid, Muhammad Awais, Chi-Ho Chan, ZhenHua Feng, Ammarah Farooq, Ali Akbari, Josef Kittler
One key ingredient of DCNN-based FR is the appropriate design of a loss function that ensures discrimination between various identities.
no code implementations • 19 Jan 2021 • Ammarah Farooq, Muhammad Awais, Josef Kittler, Syed Safwan Khalid
Our framework is novel in its ability to implicitly learn aligned semantics between modalities during the feature learning stage.
Ranked #14 on Text based Person Retrieval on CUHK-PEDES
Cross-Modal Person Re-Identification +3
no code implementations • 20 Oct 2020 • Ali Akbari, Muhammad Awais, Zhen-Hua Feng, Ammarah Farooq, Josef Kittler
Compared with existing loss functions, the lower gradient of the proposed loss function leads SGD to converge to a better optimum and, consequently, to better generalisation.
no code implementations • 7 Sep 2020 • Sara Atito Ali Ahmed, Cemre Zor, Berrin Yanikoglu, Muhammad Awais, Josef Kittler
Deep neural networks have enhanced the performance of decision making systems in many applications including image understanding, and further gains can be achieved by constructing ensembles.
1 code implementation • 19 Jun 2020 • Muhammad Awais, Fahad Shamshad, Sung-Ho Bae
In this paper, we investigate how BatchNorm causes this vulnerability and propose a new normalization that is robust to adversarial attacks.
no code implementations • 20 Feb 2020 • Ammarah Farooq, Muhammad Awais, Fei Yan, Josef Kittler, Ali Akbari, Syed Safwan Khalid
However, in real-world surveillance scenarios, frequently no visual information will be available about the queried person.
1 code implementation • 29 Nov 2018 • Fahad Shamshad, Muhammad Awais, Muhammad Asim, Zain ul Aabidin Lodhi, Muhammad Umair, Ali Ahmed
Among the plethora of techniques devised to curb the prevalence of noise in medical images, deep learning based approaches have shown the most promise.
6 code implementations • CVPR 2018 • Zhen-Hua Feng, Josef Kittler, Muhammad Awais, Patrik Huber, Xiao-Jun Wu
We present a new loss function, namely Wing loss, for robust facial landmark localisation with Convolutional Neural Networks (CNNs).
Ranked #1 on Face Alignment on 300W (NME_inter-pupil (%, Common) metric)
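The Wing loss named in this entry has a compact piecewise definition: logarithmic for small residuals (amplifying their influence) and L1-like for large ones. A hedged sketch of that formula, with default parameter values assumed for illustration:

```python
import numpy as np

def wing_loss(error, w=10.0, epsilon=2.0):
    """Wing loss for landmark regression.

    Log-shaped for |error| < w (boosts small/medium errors),
    linear (L1-like) beyond w; C makes the two pieces meet at |error| = w.
    """
    abs_err = np.abs(error)
    C = w - w * np.log(1.0 + w / epsilon)  # continuity constant
    return np.where(
        abs_err < w,
        w * np.log(1.0 + abs_err / epsilon),  # log regime: small errors
        abs_err - C,                          # linear regime: large errors
    )
```

The design choice is that, unlike L2, the loss does not let a few large outlier errors dominate the gradient, while still penalising small localisation errors more strongly than plain L1.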
no code implementations • 4 Sep 2017 • Syed Muhammad Anwar, Muhammad Majid, Adnan Qayyum, Muhammad Awais, Majdi Alnowami, Muhammad Khurram Khan
Deep learning is successfully used as a tool for machine learning, where a neural network is capable of automatically learning features.
1 code implementation • 23 Aug 2017 • Anil Bas, Patrik Huber, William A. P. Smith, Muhammad Awais, Josef Kittler
In this paper, we show how a 3D Morphable Model (i.e., a statistical model of the 3D shape of a class of objects such as faces) can be used to spatially transform input data as a module (a 3DMM-STN) within a convolutional neural network.
no code implementations • 5 May 2017 • Zhen-Hua Feng, Josef Kittler, Muhammad Awais, Patrik Huber, Xiao-Jun Wu
The framework has four stages: face detection, bounding box aggregation, pose estimation and landmark localisation.
1 code implementation • 24 Mar 2017 • Adnan Qayyum, Syed Muhammad Anwar, Muhammad Awais, Muhammad Majid
The learned features and the classification results are used to retrieve medical images.